
Research Article
Multifocus Color Image Fusion Based on NSST and PCNN

Xin Jin, Rencan Nie, Dongming Zhou, Quan Wang, and Kangjian He

Information College, Yunnan University, Kunming 650091, China

Correspondence should be addressed to Dongming Zhou; [email protected]

Received 7 August 2015; Revised 29 October 2015; Accepted 5 November 2015

Academic Editor: Claudio Lugni

Copyright © 2016 Xin Jin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper proposes an effective multifocus color image fusion algorithm based on the nonsubsampled shearlet transform (NSST) and pulse coupled neural networks (PCNN); the algorithm can be used in different color spaces. In this paper, we take the HSV color space as an example: the H component is clustered by an adaptive simplified PCNN (S-PCNN), and then the H component is fused according to the oscillation frequency graph (OFG) of the S-PCNN; at the same time, the S and V components are decomposed by NSST, and different fusion rules are utilized to fuse the obtained results. Finally, an inverse HSV transform is performed to get the RGB color image. The experimental results indicate that the proposed color image fusion algorithm is more efficient than other common color image fusion algorithms.

1. Introduction

Video technology is one of the important technologies for coastal monitoring, and image fusion is the basis of video technology. Color images contain both color information and brightness information, so color images are more suitable for coastal monitoring than gray images [1]. Besides, human vision identifies color information more readily than gray levels [2]. The whole procedure of image fusion is to extract the significant and representative information from source images of the same scene, which may come from different types of image sensors or from the same one acting in different modes, and then to fuse it into a final composite image with a better description of the scene than any of the individual source images. Thus, the study of suitable fusion technology for multisensor images is necessary and valuable [3].

A color image is the combination of different brightness and colors. Because a color image is composed of several components and the fused image is the fusion of each color space component, there are some common algorithms, such as averaging, Intensity, Hue and Saturation (HIS), and principal component analysis (PCA) [4, 5], which are easy to implement but do not perform well. Recently, image fusion methods based on multiresolution analysis have been widely studied: the first step is an image transform and

then a recombination of the coefficients of the transformed image; at last, the fused image is obtained by an inverse transform. According to the way of decomposition, these algorithms can be divided into the pyramid transform, wavelet transform [6], curvelet [7], and contourlet [8]. In 2005, Labate et al. proposed a new multidimensional representation algorithm called shearlet [9]. One advantage of this algorithm is that it can be constructed using generalized multiresolution analysis and efficiently implemented using a classical cascade algorithm, so the shearlet performs well in both the time domain and the frequency domain [10]. In order to combine the superiorities and overcome the defects of the nonsubsampled contourlet transform (NSCT) and the shearlet transform (ST), [11] proposed the theory of the nonsubsampled shearlet transform (NSST), combining the nonsubsampled Laplacian pyramid transform with several different shearing filters. In comparison with current multiresolution geometric analysis (MGA) tools, NSST absorbs some recent developments in the MGA field and shows satisfactory fusion performance, such as better sparse representation ability and much lower computational cost. Besides, NSST also has the shift-invariance property that ST lacks. Therefore, further research on image fusion in the NSST domain is promising and competitive [12]. In recent years, image fusion methods based on PCNN have been getting more and more attention from many experts and scholars with

Hindawi Publishing Corporation, Journal of Sensors, Volume 2016, Article ID 8359602, 12 pages. http://dx.doi.org/10.1155/2016/8359602


PCNN's characteristics in a biological background. Compared with other traditional artificial neural networks, PCNN has an incomparable advantage, so PCNN has been widely used in image processing and shows extremely superior performance [12-15].

In this paper, a new multifocus color image fusion algorithm is proposed based on NSST and PCNN. The paper absorbs some advantages of NSST and PCNN: it firstly converts the RGB color image to an HSV color image, and then the H component is input into the adaptive simplified PCNN (S-PCNN) model to get the oscillation frequency graph (OFG) of the S-PCNN; a new fused H component is obtained by comparing the OFGs. The S and V components are decomposed into low-frequency and high-frequency subbands by NSST, and these subbands are fused by different methods to get the new fused S and V components. At last, an inverse HSV transform is performed to obtain the new fused RGB color image. The experimental results indicate that the proposed algorithm preserves the color information of the source color images more effectively than other common algorithms, and the fused image contains more edges, texture, and detail.

This paper is arranged as follows. Section 2 introduces the related theories of the NSST and PCNN models. Section 3 explains the proposed algorithm, including its framework and workflow. Section 4 presents the experimental results and analysis. Section 5 concludes this paper.

2. Related Theories

2.1. PCNN. The PCNN model has three fundamental parts: the receptive field, the modulation field, and the pulse generator [13, 14]. In the receptive field, which consists of the L and F channels and is described by (1), the neuron receives the coupling input Y of its neighboring neurons and the external stimulus input S. In the L and F channels, the neuron links with its neighborhood neurons via the synaptic linking weights W and M, respectively; the two channels accumulate input and decay exponentially at the same time, with decay exponents α_L and α_F, respectively, while the channel amplitudes are V_L and V_F, respectively:

F_{ij}(n) = V_F \sum_{kl} M_{ijkl} Y_{kl}(n-1) + e^{-\alpha_F} F_{ij}(n-1) + S_{ij},

L_{ij}(n) = V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1) + e^{-\alpha_L} L_{ij}(n-1).    (1)

In the modulation field, the linking input L_{ij}(n) is formed by adding a bias to the linking term; it is then multiplied by the feeding input. The bias is unitary, β is the linking strength, and the total internal activity U is the result of the modulation, which is described by

U_{ij}(n) = F_{ij}(n) \left[ 1 + \beta L_{ij}(n) \right].    (2)

The pulse generator consists of a threshold adjuster, a comparison organ, and a pulse generator and is described by (3). Its function is to generate the pulse output Y; θ_{ij} is the adjustable threshold and V_θ is the threshold coefficient. When the internal state U is larger than the threshold θ, that is, when the neuron satisfies the condition U_{ij}(n) > θ_{ij}(n), a pulse is produced by the neuron, which we call an ignition, as described by (4):

\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1),    (3)

Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n) \\ 0, & \text{otherwise}, \end{cases}    (4)

where the subscripts i and j represent the neuron location in the PCNN and n denotes the current iteration (discrete time step), with n varying from 1 to N (N is the total number of iterations). In particular, "a neuron ignition" means that a PCNN neuron generates a pulse. The total numbers of ignitions represent the image information of the corresponding code sequences after N iterations.

When PCNN is used for image processing, each pixel is connected to a unique neuron. The number of neurons in the network is equal to the number of pixels of the input image; namely, there exists a one-to-one correspondence between the image and the neuron network, and the pixel value is taken as the external input stimulus of the neuron in the F channel. A neuron outputs one of two states, namely, pulse (status 1) and nonpulse (status 0), so the output states of the neurons compose a binary image. More information about PCNN can be found in [12-15].
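The update equations (1)-(4) translate directly into code under this one-to-one pixel-neuron mapping. The sketch below is a minimal NumPy rendering; the 3x3 linking kernel, the periodic boundary handling, and all parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pcnn_iterate(S, n_iter=10, alpha_F=0.1, alpha_L=1.0, alpha_theta=0.2,
                 V_F=0.5, V_L=0.2, V_theta=20.0, beta=0.1):
    """PCNN of (1)-(4): S is the external stimulus (a grayscale image scaled
    to [0, 1]); returns the binary pulse map Y(n) of every iteration."""
    h, w = S.shape
    F = np.zeros((h, w)); L = np.zeros((h, w))
    Y = np.zeros((h, w)); theta = np.ones((h, w))
    # 3x3 linking kernel, used here for both W and M (an illustrative choice)
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    pulses = []
    for _ in range(n_iter):
        # neighborhood coupling sum_kl K * Y_kl(n-1), periodic boundary
        P = np.zeros((h, w))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                P += K[di + 1, dj + 1] * np.roll(np.roll(Y, di, 0), dj, 1)
        F = V_F * P + np.exp(-alpha_F) * F + S               # feeding channel, (1)
        L = V_L * P + np.exp(-alpha_L) * L                   # linking channel, (1)
        U = F * (1.0 + beta * L)                             # modulation, (2)
        theta = np.exp(-alpha_theta) * theta + V_theta * Y   # threshold, (3)
        Y = (U > theta).astype(float)                        # pulse output, (4)
        pulses.append(Y.copy())
    return pulses
```

With a uniform stimulus, every neuron first fires in the same iteration, as soon as the exponentially decaying threshold drops below the internal activity.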

2.2. S-PCNN. The simplified PCNN (S-PCNN) model [15] has the same structure as the original PCNN model, but the input of the F channel depends only on the image gray value, without external coupling or exponential decay; it has fewer parameters than the original PCNN model, and the input channel of the receptive field is simple and effective. In the S-PCNN model, the variables of a neuron N_{ij} satisfy the following:

F_{ij}(n) = S_{ij},

L_{ij}(n) = V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1) + e^{-\alpha_L} L_{ij}(n-1),

U_{ij}(n) = F_{ij}(n) \left[ 1 + \beta L_{ij}(n) \right],

\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1),

Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n) \\ 0, & \text{otherwise}. \end{cases}    (5)

2.3. The OFG of PCNN. The capture character of PCNN neurons causes neurons whose surroundings have similar brightness to be captured into ignition; this capture behavior automatically couples neurons to transmit information. In this paper, we use PCNN to extract image features: PCNN can extract the texture, edge, and regional distribution information of an image and performs well in image processing. In each iteration of PCNN, a binary image is obtained


Figure 1: An illustration of the binary map and OFG. (a) Source image. (b) Binary map at the 4th iteration. (c) Binary map at the 8th iteration. (d) OFG at the 10th iteration. (e) OFG at the 15th iteration.

by recording whether each neuron fires or not. The binary images effectively express the features of the image, such as texture, edge, and regional distribution; binary maps are shown in Figures 1(b) and 1(c). After the global statistics of the binary images of the neurons, we get an oscillation frequency graph (OFG), as shown in (6) and Figures 1(d) and 1(e):

\mathrm{OFG}_N(i, j) = \sum_{n=1}^{N} Y_{ij}(n),    (6)

where N denotes the number of iterations, Y_{ij} denotes the pulse output of the neuron (i, j), and n is the current iteration.
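Equation (6) is simply an accumulation of the binary pulse maps over N iterations. A minimal sketch of the S-PCNN of (5) plus the OFG accumulation of (6) (the linking kernel and parameter values are illustrative assumptions):

```python
import numpy as np

def spcnn_ofg(S, n_iter=15, alpha_L=1.0, alpha_theta=0.2,
              V_L=1.0, V_theta=20.0, beta=0.2):
    """S-PCNN of (5) plus the OFG of (6): returns per-pixel firing counts."""
    h, w = S.shape
    L = np.zeros((h, w)); Y = np.zeros((h, w)); theta = np.ones((h, w))
    K = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    ofg = np.zeros((h, w))
    for _ in range(n_iter):
        # neighborhood coupling sum_kl K * Y_kl(n-1), periodic boundary
        P = np.zeros((h, w))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                P += K[di + 1, dj + 1] * np.roll(np.roll(Y, di, 0), dj, 1)
        F = S                                    # F channel is just the stimulus, (5)
        L = V_L * P + np.exp(-alpha_L) * L       # linking channel with decay
        U = F * (1.0 + beta * L)                 # internal activity
        theta = np.exp(-alpha_theta) * theta + V_theta * Y
        Y = (U > theta).astype(float)
        ofg += Y                                 # OFG_N accumulation, (6)
    return ofg
```

The OFG is larger where the stimulus pattern makes neurons fire earlier and more often, which is what the fusion rules later compare.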

2.4. Nonsubsampled Shearlet Transform. The shearlet transform was proposed based on wavelets by Labate et al. [9, 11]. In dimension n = 2, the affine system is

\Psi_{AS}(\psi) = \left\{ \psi_{j,l,k}(x) = |\det A|^{j/2}\, \psi(S^l A^j x - k) : l, j \in \mathbb{Z},\ k \in \mathbb{Z}^2 \right\},    (7)

where ψ is a collection of basis functions satisfying ψ ∈ L²(R²), A is an anisotropy matrix for multiscale partitions, and S is a shear matrix for directional analysis; j, l, and k are the scale, direction, and shift parameters, respectively. A and S are both 2x2 invertible matrices, and |det S| = 1. For every a > 0 and s ∈ R, the matrices A and S are given by

A = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \quad S = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.    (8)

Let a = 4 and s = 1; then (8) is further specialized to

A = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}, \quad S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.    (9)
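With a = 4 and s = 1, the matrices of (9) are concrete enough to check numerically (a small sketch; NumPy is used only for the matrix arithmetic):

```python
import numpy as np

# anisotropic scaling matrix A and shear matrix S from (9)
A = np.array([[4, 0], [0, 2]])
S = np.array([[1, 1], [0, 1]])

# the shear matrix changes direction but not area: |det S| = 1
assert np.isclose(abs(np.linalg.det(S)), 1.0)

# the composites S^l A^j generate the affine system of (7); e.g. j = 2, l = 1
M = np.linalg.matrix_power(S, 1) @ np.linalg.matrix_power(A, 2)
print(M)
```

The composite matrix shows how scaling stretches one axis twice as fast as the other while the shear tilts the direction of analysis.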

For every ξ = (ξ₁, ξ₂) ∈ R̂², ξ₁ ≠ 0, the mathematical expression of the basis function ψ̂⁽⁰⁾ of the shearlet transform can be given according to [11]:

\hat{\psi}^{(0)}(\xi) = \hat{\psi}^{(0)}(\xi_1, \xi_2) = \hat{\psi}_1(\xi_1)\, \hat{\psi}_2\!\left( \frac{\xi_2}{\xi_1} \right),    (10)

where ψ̂ is the Fourier transform of ψ; ψ̂₁ ∈ C^∞(R) and ψ̂₂ ∈ C^∞(R) are both wavelets, with supp ψ̂₁ ⊂ [-1/2, -1/16] ∪ [1/16, 1/2] and supp ψ̂₂ ⊂ [-1, 1]. This implies that ψ̂⁽⁰⁾ ∈ C^∞ and is compactly supported, with supp ψ̂⁽⁰⁾ ⊂ [-1/2, 1/2]². In addition, we assume that

\sum_{j \ge 0} \left| \hat{\psi}_1(2^{-2j}\omega) \right|^2 = 1, \quad |\omega| \ge \frac{1}{8},    (11)

and, for each j ≥ 0, ψ̂₂ satisfies

\sum_{l=-2^j}^{2^j - 1} \left| \hat{\psi}_2(2^j \omega - l) \right|^2 = 1, \quad |\omega| \le 1.    (12)

From the conditions on the supports of ψ̂₁ and ψ̂₂, one can obtain that the function ψ_{j,l,k} has the frequency support

\mathrm{supp}\, \hat{\psi}^{(0)}_{j,l,k} \subset \left\{ (\xi_1, \xi_2) : \xi_1 \in [-2^{2j-1}, -2^{2j-4}] \cup [2^{2j-4}, 2^{2j-1}],\ \left| \frac{\xi_2}{\xi_1} + l\, 2^{-j} \right| \le 2^{-j} \right\}.    (13)

That is, each element ψ̂_{j,l,k} is supported on a pair of trapeziform zones whose sizes all approximate 2^{2j} x 2^j; these trapezoids tile the frequency plane and determine the size of the frequency support of ψ_{j,l,k}.

In the NSST algorithm, in order to remove the influence of upsampling and subsampling, nonsubsampled Laplacian pyramid filters are used as a substitute in the shearlet transform, so it has excellent performance in terms of shift-invariance, multiscale, and multidirectional properties. The discretization process of NSST has two phases: multiscale factorization and multidirectional factorization. Nonsubsampled Laplacian pyramid filters complete the multiscale factorization: the first phase uses k levels of two-channel nonsubsampled filters to get one low-frequency image and


Figure 2: An illustration of NSST. (a) Source image. (b) The approximate NSST coefficients. (c) Images of the detailed coefficients at level 1. (d) Images of the detailed coefficients at level 2.

k high-frequency images. The multidirectional factorization in NSST is realized via an improved shearlet transform. These filters are formed by avoiding subsampling, to satisfy the shift-invariance property. The shearlet transform allows direction decomposition with l stages in the high-frequency images from the nonsubsampled Laplacian pyramid at each level and produces 2^l directional subimages with the same size as the source image [16, 17]. Figure 2 shows the two-level NSST decomposition of an image.
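NSST itself is not available in standard libraries, and its designed filter banks are beyond a short sketch, but the undecimated multiscale idea can be illustrated with a simple stand-in low-pass filter. The decomposition below keeps every subband at the full image size (as NSST does) and reconstructs exactly; the box blur and level widths are illustrative assumptions, and the directional (shearing) stage is omitted:

```python
import numpy as np

def blur(img, reps=1):
    """Separable 3x3 box blur with edge replication: a simple stand-in for
    the nonsubsampled low-pass filter (real NSST uses designed filter banks)."""
    out = img.astype(float)
    for _ in range(reps):
        p = np.pad(out, 1, mode="edge")
        out = (p[:-2, 1:-1] + p[1:-1, 1:-1] + p[2:, 1:-1]) / 3.0
        p = np.pad(out, 1, mode="edge")
        out = (p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]) / 3.0
    return out

def nonsubsampled_pyramid(img, levels=2):
    """k-level shift-invariant decomposition: one low-frequency image plus k
    high-frequency detail images, all the same size as the input."""
    low, highs = img.astype(float), []
    for lv in range(levels):
        smoothed = blur(low, reps=2 ** lv)   # widen the filter at each level
        highs.append(low - smoothed)         # band-pass detail at this level
        low = smoothed
    return low, highs
```

Reconstruction is exact by telescoping: the input equals low plus the sum of the detail images, mirroring the perfect-reconstruction property of the nonsubsampled pyramid.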

3. The Proposed Algorithm

In this section, the proposed multifocus color image fusion algorithm is presented in detail. The framework of the proposed algorithm is shown in Figure 3. In this algorithm, the RGB color images are transformed into HSV color space [18]; NSST is used to decompose the images, and PCNN is used to extract the features, which are fused using different rules. Besides, it is important to note that two kinds of PCNN model are used in the algorithm: the adaptive S-PCNN is used to fuse the H component, and the original PCNN is used to fuse the high-frequency coefficients of NSST.

3.1. RGB Color Image Transform to HSV. An RGB color image contains almost all the basic colors that can be perceived by human vision; however, the correlation among its components is very strong, as shown in Figures 4(b), 4(c), and 4(d). This makes RGB color images difficult to process, because the color of the image changes whenever one component changes. An HSV image can be obtained from the RGB image by a transform. The values of R, G, and B correspond to unique H, S, and V values, as the values of the H, S, and V components depend on the


Figure 3: The schematic diagram of the proposed image fusion method.

values of R, G, and B in RGB color space. This color system is closer than the RGB color system to the human experience and perception of color, as shown in Figures 4(e), 4(f), and 4(g).
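The per-pixel RGB-to-HSV mapping described here matches the standard conversion available in Python's stdlib colorsys module; the pixel value below is arbitrary:

```python
import colorsys

# one RGB pixel, channels normalized to [0, 1]
r, g, b = 0.2, 0.6, 0.4
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)   # hue in [0, 1), saturation, value

# the mapping is invertible, so converting back loses no color information
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
assert abs(r - r2) < 1e-9 and abs(g - g2) < 1e-9 and abs(b - b2) < 1e-9
```

Applying this conversion pixel by pixel (and the inverse after fusion) is all the color-space machinery the algorithm needs.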

The distribution of the H component is concentrated, and its pixel values are generally small (see Figure 4(h)), so its edges are obvious; S-PCNN is sensitive to the edges and regional distribution of an image. In this algorithm, the H component is partitioned into blocks and input into the adaptive S-PCNN to get the OFG of the H component, and we fuse the H component according to the OFG. The distributions of the S and V components are dispersive (see Figures 4(i) and 4(j)), and these components contain many details of the image at different gray scales, so the S and V components are decomposed into multiscale and multidirectional subbands by NSST to get different detailed information of the images; then, according to the characters of the new S and V components, they are fused using different rules.

3.2. H Component Fusion Using Adaptive S-PCNN. β is the linking strength of S-PCNN and a key determinant of its ignition behavior. Spatial frequency (SF) and average gradient (AG) are important indicators of image definition, representing the quality of the image, so S-PCNN should be adaptively adjusted by SF and AG to make it work well. In the S-PCNN model, the H component is divided into several blocks, and the SF and AG of each block are calculated and used as the linking strength β of S-PCNN, respectively, so that β is adjusted adaptively; see (14) and (15). The block images are input into S-PCNN to get two kinds of OFG according to the different β, which effectively express the quality of the block images. The two types of index change with the content of the image within a certain range; the index itself and its range of change are suitable for adjusting the parameter β of S-PCNN, which affects the OFG of S-PCNN, so that the two types of OFG reflect different details of the image. If only one type of OFG is used, pixels of different quality may receive the same OFG value, which cannot select the appropriate pixels. The combination of the two types of OFG, however, reflects the image information from different angles, reducing the possibility of different pixels having the same OFG. This makes the algorithm more effective.

SF and AG of the image are described in (16) to (19):

\beta_1 = \eta_1 \cdot \mathrm{SF},    (14)

\beta_2 = \eta_2 \cdot \mathrm{AG},    (15)

where η is an adjustment factor. Consider

\mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2},    (16)

\mathrm{RF} = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left[ F(i, j) - F(i, j-1) \right]^2 },    (17)

\mathrm{CF} = \sqrt{ \frac{1}{M \times N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left[ F(i, j) - F(i-1, j) \right]^2 }.    (18)

SF is composed of the row frequency (RF) and column frequency (CF), where M is the number of rows of the image, N is the number of columns, and F(i, j) is the gray level of image F at pixel (i, j).

\mathrm{AG} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{ \frac{1}{2} \left[ \left( F(i, j) - F(i+1, j) \right)^2 + \left( F(i, j) - F(i, j+1) \right)^2 \right] }.    (19)


Figure 4: An illustration of HSV. (a) Source image. (b) R component. (c) G component. (d) B component. (e) H component. (f) S component. (g) V component. (h) Histogram of H. (i) Histogram of S. (j) Histogram of V.

3.3. S and V Components Fusion Using NSST and PCNN. The S and V components contain many details of the image, and their gray values are dispersive. The S and V components are therefore decomposed by NSST to make the details easy to extract, yielding one low-frequency subband and several high-frequency subbands over multiple scales and directions. The larger of the low-frequency coefficients is selected for the new fused low-frequency subband. The high-frequency subbands contain abundant detailed information of the S and V components; they are input into the original PCNN model to get OFGs, which record the statistics of the ignition times of the pixels. By comparing the OFGs of the high-frequency subbands from the different S and V components, we get the new fused high-frequency subbands. At last, an inverse NSST is performed to get the fused S and V components from the new fused low-frequency and high-frequency subbands.

3.4. Algorithm Steps. The processes of the proposed fusion algorithm are shown in Figure 3, and the steps are as follows.

Step 0. Given source images A and B.

Step 1. The color images in RGB color space are converted to HSV color space to get three components: H, S, and V.


Figure 5: Source images. (a) Source image A. (b) Source image B.

Step 2. PCNN is utilized to deal with the H component.

(a) The H components of images A_{ij} and B_{ij} are divided into subblocks, and the SF and AG of the subblocks are calculated using (16) to (19); the linking strengths β₁ and β₂ are obtained using (14) and (15), and β₁ and β₂ act as the β value of S-PCNN, respectively.

(b) The subblocks are input into the S-PCNN model twice, with the different β, to get two OFGs of the H components.

(c) Get the average value of the two OFGs using (20); then F_{ij} of the fused H component can be decided by (21):

\mathrm{AOFG} = \frac{\mathrm{OFG}_{\beta_1} + \mathrm{OFG}_{\beta_2}}{2},    (20)

F_{ij} = \begin{cases} A_{ij}, & \mathrm{AOFG}_{A_{ij}} > \mathrm{AOFG}_{B_{ij}} \\ B_{ij}, & \mathrm{AOFG}_{A_{ij}} < \mathrm{AOFG}_{B_{ij}} \\ (A_{ij} + B_{ij})/2, & \mathrm{AOFG}_{A_{ij}} = \mathrm{AOFG}_{B_{ij}}, \end{cases}    (21)

where AOFG_{A_{ij}} and AOFG_{B_{ij}} are the average OFGs of images A and B.

Step 3. Perform decomposition of the S and V components of source images A_{ij} and B_{ij} using NSST to obtain the low-pass subband coefficients and the high-pass directional subband coefficients; different rules are utilized to deal with the S and V components.

(a) The fused low-pass subband coefficients FC_{ij} can be decided by

FC_{ij} = \begin{cases} C_{A_{ij}}, & C_{A_{ij}} > C_{B_{ij}} \\ C_{B_{ij}}, & C_{A_{ij}} < C_{B_{ij}} \\ (C_{A_{ij}} + C_{B_{ij}})/2, & C_{A_{ij}} = C_{B_{ij}}, \end{cases}    (22)

where C_{A_{ij}} and C_{B_{ij}} are the low-pass subband coefficients of images A and B.

(b) The high-pass directional subband coefficients are input into the PCNN model to get the corresponding OFGs (OFG_{A_{ij}}, OFG_{B_{ij}}) according to the statistics of the ignition times of the pixels; the fusion rule of the S and V components can be decided by

F_{ij} = \begin{cases} A_{ij}, & \mathrm{OFG}_{A_{ij}} > \mathrm{OFG}_{B_{ij}} \\ B_{ij}, & \mathrm{OFG}_{A_{ij}} < \mathrm{OFG}_{B_{ij}} \\ (A_{ij} + B_{ij})/2, & \mathrm{OFG}_{A_{ij}} = \mathrm{OFG}_{B_{ij}}. \end{cases}    (23)

Step 4. Reconstruct the fused HSV image by an inverse NSST.

Step 5. Obtain the fused RGB color image by an inverse HSV transform.

4. Experimental Results and Analysis

To verify the validity of the algorithm presented in this paper, we test several groups of experimental color images with different focus positions. The first group of color images is "cups", shown in Figure 5. Image A focuses on the left and image B focuses on the right, and there are many words as details.

4.1. Evaluation Index System. In order to verify the effectiveness of this method, we consider the quantitative assessment of the fused images. For the evaluation of the proposed fusion method, we have considered several common fusion performance metrics, defined in this section. The final fused color images are composed of three components, R, G, and B. Each component can be regarded as a grayscale image, and the quality of the fused color image strongly depends on the grayscale image quality. In this paper, we take the average of the three components' evaluation indexes as the final color image evaluation index, which gives the basic indicators of image


Table 1: Fusion quality using different methods.

Method             SF       AG      EN      MV        SD       MI      Q^{AB/F}
Weighted           12.7859  4.2004  7.4416  166.7102  65.1893  5.4941  0.5449
PCA                13.0306  4.2311  7.4408  166.5136  65.3166  5.5144  0.5525
PCNN               16.8538  4.9683  7.4449  167.3324  65.8106  5.5454  0.5986
PCNN + WT          17.7381  5.2384  7.2349  143.0606  57.6579  5.1952  0.6696
PCNN + LP          18.8702  5.4816  7.3187  150.6954  61.1962  5.2172  0.6795
This paper (HSV)   20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886

Table 2: Fusion quality under different color spaces.

Color space  SF       AG      EN      MV        SD       MI      Q^{AB/F}
NTSC         19.1768  5.5980  7.4530  171.1844  66.1808  5.8862  0.6547
YUV          19.1937  5.5983  7.4382  171.1177  66.2103  5.9109  0.6555
YCbCr        19.1805  5.6076  7.4743  171.6032  65.4123  5.7668  0.6496
HSV          20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886
HIS          20.4130  5.8660  7.3967  172.0959  68.2812  5.8896  0.6989
LAB          20.5100  5.9280  7.4282  172.4920  67.6792  6.0364  0.7018

fusion quality. Tables 1 and 2 show the evaluation of the fused image quality with spatial frequency (SF), average gradient (AG), entropy (EN), mean value (MV), standard deviation (SD), and mutual information (MI); Q^{AB/F} indicates how much edge information is preserved in the fused image [19-23].

The spatial frequency (SF) is defined by (16) to (18), and the average gradient (AG) is defined by (19).

4.1.1. Entropy. The entropy (EN) of an image is defined by

H = -\sum_{i=0}^{L-1} P_i \log_2 P_i,    (24)

where P_i is the probability of gray level i in the image and L is the number of gray levels of the image, from 0 to 255.

4.1.2. Mean Value. The mean value (MV) of the image is defined by

\mu = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} F(i, j),    (25)

where F(i, j) is the pixel value of the fused image at the position (i, j). MV represents the average brightness of the whole image.

4.1.3. Standard Deviation. The standard deviation (SD) of an image is defined by

SD = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} (F(i,j) - \mu)^2},   (26)

where F(i,j) is the pixel value of the fused image at position (i,j) and \mu is the mean value of the image. The larger the SD is, the better the result is.
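Both (25) and (26) are direct reductions over the fused image array; a minimal sketch with hypothetical function names:

```python
import numpy as np

def mean_value(F):
    """Eq. (25): average brightness of the fused image."""
    return float(F.mean())

def standard_deviation(F):
    """Eq. (26): population standard deviation about the mean value."""
    return float(np.sqrt(np.mean((F - F.mean()) ** 2)))

F = np.array([[0.0, 100.0], [200.0, 100.0]])
print(mean_value(F))            # -> 100.0
print(standard_deviation(F))    # -> sqrt(5000) ~ 70.7107
```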

4.1.4. Mutual Information. The mutual information (MI) between the source images A and B and the fused image F is defined by

MI = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} \sum_{k=0}^{L-1} P_{ABF}(i,j,k) \log_2 \frac{P_{ABF}(i,j,k)}{P_{AB}(i,j)\, P_F(k)},   (27)

where P_{ABF}(i,j,k) is the normalized joint gray-level histogram of images A, B, and F; P_{AB}(i,j) is the normalized joint gray-level histogram of images A and B; P_F(k) is the normalized gray-level histogram of F; L is the number of gray levels; and (i,j,k) represent the pixel values of images A, B, and F, respectively.
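A sketch of (27) using a normalized three-dimensional joint histogram (the function name is ours; note that with the full 256 gray levels the 3D histogram is large, so a small `levels` value is used for illustration):

```python
import numpy as np

def mutual_information(A, B, F, levels=256):
    """Eq. (27): MI between sources A, B and fused image F via P_ABF."""
    sample = np.stack([A.ravel(), B.ravel(), F.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=(levels,) * 3,
                             range=[(0, levels)] * 3)
    p_abf = hist / hist.sum()               # normalized joint histogram
    p_ab = p_abf.sum(axis=2)                # marginal over F
    p_f = p_abf.sum(axis=(0, 1))            # marginal over A and B
    nz = p_abf > 0                          # convention: 0 * log 0 = 0
    denom = p_ab[:, :, None] * p_f[None, None, :]
    return float(np.sum(p_abf[nz] * np.log2(p_abf[nz] / denom[nz])))

# When F copies A and B is constant, MI reduces to the entropy of A (1 bit here).
A = np.array([[0, 1], [0, 1]])
B = np.zeros((2, 2), dtype=int)
print(mutual_information(A, B, A.copy(), levels=2))   # -> 1.0
```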

4.1.5. Edge Based on Similarity Measure. The Q^{AB/F} metric evaluates the sum of edge information preservation values and is defined by

Q^{AB/F} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} (Q^{AF}(i,j)\,\omega^{A}(i,j) + Q^{BF}(i,j)\,\omega^{B}(i,j))}{\sum_{i=1}^{M} \sum_{j=1}^{N} (\omega^{A}(i,j) + \omega^{B}(i,j))},   (28)

where Q^{AF}(i,j) = Q^{AF}_{\beta}(i,j)\,Q^{AF}_{\alpha}(i,j); Q^{AF}_{\beta}(i,j) and Q^{AF}_{\alpha}(i,j) are the edge strength and orientation preservation values, respectively; Q^{BF}(i,j) is defined similarly to Q^{AF}(i,j); and \omega^{A}(i,j) and \omega^{B}(i,j) are weights measuring the importance of Q^{AF}(i,j) and Q^{BF}(i,j), respectively. The dynamic range of Q^{AB/F} is [0, 1], and it should be as close to 1 as possible, the ideal fusion being Q^{AB/F} = 1. In addition, (i,j) represents the pixel location, and M and N are the sizes of the images.



Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One. For comparison, this paper evaluates the proposed fusion scheme against several common fusion algorithms, for instance, taking the average of the source images pixel by pixel (average), principal component analysis (PCA), pulse coupled neural network (PCNN), pulse coupled neural network with Laplacian pyramid transform (PCNN + LP), and pulse coupled neural network with discrete wavelet transform (PCNN + WT). The fusion images using the different methods are shown in Figure 6. According to the contrast experiments, the fusion image of this paper is better than the others (see Figure 6). The algorithm of this paper does well in extracting the characteristics of the source images, and its fused image is closer to the natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the method in this paper is an effective method.

From Table 1, the fusion image of this paper contains much more information: SF, AG, MV, and Q^{AB/F} of this paper are larger than those of the other methods, EN and MI are slightly better than those of the other methods, and only SD is less than that of some other methods.

4.3. Experiment Two. Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in HSV color space, so all of them can use the proposed algorithm for color image fusion. In order to determine the color space in which the method works best, we carry out the following experiments. The fusion images under different color spaces are shown in Figure 7; all can achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.

From Table 2 it is indicated that the fusion image of this paper in the LAB color space contains more information, and its SF, AG, MI, and Q^{AB/F} are better than the others. Overall, the evaluation indexes in HSV, HIS, and LAB are better than the others. In practical application, we suggest that color image fusion with the proposed algorithm focus on the HSV, HIS, and LAB color spaces.

4.4. Experiment Three. More experimental results for coastal images in HSV color space are shown in Figure 8; the source images have different focus positions, and there are a lot of textures in the source images.

It can be seen in Figure 8 that the edge of the fusion image is clear and it retains most of the textures in the source images; besides, the details are also well preserved. This



Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

method can extract the main features from the source images; it shows that the method in this paper also achieves effective results on these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper is obviously better than the traditional methods. Compared with the other methods, this method shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture character of PCNN's neurons causes surrounding neurons of similar brightness to capture the ignition; through this character, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift-invariance, multiscale, and multidirectional properties. In the proposed algorithm, the RGB color image is converted into the HSV color space; the H component is fused by adaptive S-PCNN, and the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and are fused by different rules. The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method shows better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better effects in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors wish to express their gratitude



Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing coastal images. The authors' work is supported by the National Natural Science Foundation of China (no. 61365001 and no. 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119–126, 2015.
[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)2PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).


[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," The Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.
[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.
[5] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.
[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.
[7] E. J. Candes and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.
[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.
[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.
[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.
[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.
[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103–112, 2014.
[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.
[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.
[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.
[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.
[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.
[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.
[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.
[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.
[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.
[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.
[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.


PCNN's characters have a biological background. Compared with other artificial neural networks, PCNN has an incomparable advantage over traditional artificial neural networks, so PCNN has been widely used in image processing fields and shows extremely superior performance [12–15].

In this paper, a new multifocus color image fusion algorithm is proposed based on NSST and PCNN. The paper absorbs some advantages of NSST and PCNN: it first converts the RGB color image into an HSV color image; then the H component is input into an adaptive simplified PCNN (S-PCNN) model to get the oscillation frequency graph (OFG) of S-PCNN, and a new fused H component is obtained by comparing the OFGs. The S and V components are decomposed into low frequency subbands and high frequency subbands by NSST, and these subbands are fused by different methods to get the new fused S and V components. At last, the inverse HSV transform is performed to obtain a new fused RGB color image. The experimental results indicate that the proposed algorithm is more effective at preserving the color information of the source color images than other common algorithms, and the fused image contains more edges, texture, and detail.

This paper is arranged as follows. Section 2 introduces the related theories of the NSST and PCNN models. Section 3 explains the proposed algorithm, including its framework and workflow. Section 4 presents the experimental results and analysis. Section 5 concludes this paper.

2. Related Theories

2.1. PCNN. The PCNN model has three fundamental parts: the receptive field, the modulation field, and the pulse generator [13, 14]. In the receptive field, which consists of the L and F channels and is described by (1), the neuron receives the neighboring neurons' coupling input Y and the external stimulus input S. In the L and F channels of the neuron, the neuron links with its neighborhood neurons via the synaptic linking weights W and M, respectively; the two channels accumulate input and decay exponentially at the same time; the decay exponents are \alpha_L and \alpha_F, respectively, while the channel amplitudes are V_L and V_F, respectively:

F_{ij}(n) = V_F \sum_{kl} M_{ijkl} Y_{kl}(n-1) + e^{-\alpha_F} F_{ij}(n-1) + S_{ij},

L_{ij}(n) = V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1) + L_{ij}(n-1) e^{-\alpha_L}.   (1)

In the modulation field, the linking input L_{ij}(n) is made by adding a bias to the linking and then multiplying it by the feeding input; the bias is unitary, \beta is the linking strength, and the total internal activity U is the result of the modulation, which is described by

U_{ij}(n) = F_{ij}(n) [1 + \beta L_{ij}(n)].   (2)

The pulse generator consists of a threshold adjuster, a comparison organ, and a pulse generator, as described by (3); its function is to generate the pulse output Y. \theta_{ij} is the adjustment threshold and V_\theta is the threshold coefficient. When the internal state U is larger than the threshold \theta, that is, the neuron satisfies the condition U_{ij}(n) > \theta_{ij}(n), a pulse is produced by the neuron, which we call an ignition, as described by (4):

\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1),   (3)

Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n) \\ 0, & \text{otherwise,} \end{cases}   (4)

where the subscripts i and j represent the neuron location in the PCNN and n denotes the current iteration (discrete time step), where n varies from 1 to N (N is the total number of iterations). In particular, "a neuron ignition" means a PCNN neuron generates a pulse. The total numbers of ignitions represent the image information of the corresponding code sequences after N iterations.

When PCNN is used for image processing, each pixel is connected to a unique neuron. The number of neurons in the network is equal to the number of pixels of the input image; namely, there exists a one-to-one correspondence between the image and the neuron network, and the pixel value is taken as the external input stimulus of the neuron in the F channel. A neuron outputs one of two states, namely, pulse (status 1) and nonpulse (status 0), so the output states of the neurons compose a binary image. More information about PCNN can be found in [12–15].
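Equations (1)-(4) can be iterated as below. This is a minimal numpy sketch of our own: the unit 3x3 linking weights and all parameter values are illustrative, not the paper's tuned settings.

```python
import numpy as np

def coupling(Y):
    """Unit-weight sum over the 8-neighbourhood of every neuron (zero padding)."""
    P = np.pad(Y, 1)
    m, n = Y.shape
    return sum(P[1 + di:1 + di + m, 1 + dj:1 + dj + n]
               for di in (-1, 0, 1) for dj in (-1, 0, 1) if di or dj)

def pcnn(S, n_iter=10, aF=0.1, aL=1.0, aT=0.2,
         VF=0.5, VL=0.2, VT=20.0, beta=0.1):
    """Iterate Eqs. (1)-(4); returns the binary pulse map of each iteration."""
    S = S.astype(float)
    F = np.zeros_like(S)
    L = np.zeros_like(S)
    theta = np.ones_like(S)
    Y = np.zeros_like(S)
    pulses = []
    for _ in range(n_iter):
        F = VF * coupling(Y) + np.exp(-aF) * F + S      # Eq. (1), F channel
        L = VL * coupling(Y) + L * np.exp(-aL)          # Eq. (1), L channel
        U = F * (1.0 + beta * L)                        # Eq. (2)
        theta = np.exp(-aT) * theta + VT * Y            # Eq. (3), uses Y(n-1)
        Y = (U > theta).astype(float)                   # Eq. (4)
        pulses.append(Y.copy())
    return pulses

S = np.full((4, 4), 1.0)
pulses = pcnn(S, n_iter=3)
print(int(pulses[0].sum()))   # -> 16 (every neuron ignites once theta decays below S)
```

After a neuron fires, V_T raises its threshold sharply, so it stays silent until the exponential decay brings theta back down; this produces the periodic pulse sequences the paper relies on.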

2.2. S-PCNN. The simplified PCNN (S-PCNN) model [15] has the same composition as the original PCNN model, but the input of the F channel is related only to the image gray value, with no external coupling or exponential decay characteristics; it has fewer parameters than the original PCNN model, and the input channel of its receptive field is simple and effective. In the S-PCNN model, the variables of a neuron N_{ij} satisfy the following:

F_{ij}(n) = S_{ij},

L_{ij}(n) = V_L \sum_{kl} W_{ijkl} Y_{kl}(n-1) + L_{ij}(n-1) e^{-\alpha_L},

U_{ij}(n) = F_{ij}(n) [1 + \beta L_{ij}(n)],

\theta_{ij}(n) = e^{-\alpha_\theta} \theta_{ij}(n-1) + V_\theta Y_{ij}(n-1),

Y_{ij}(n) = \begin{cases} 1, & U_{ij}(n) > \theta_{ij}(n) \\ 0, & \text{otherwise.} \end{cases}   (5)

2.3. The OFG of PCNN. The capture character of PCNN neurons causes surrounding neurons of similar brightness to capture the ignition. The capture character can automatically couple and transmit information. In this paper we use PCNN to extract image features; PCNN can also extract information about the image's texture, edge, and regional distribution, and it has a good effect in image processing. In an iteration of PCNN, a binary image is obtained



Figure 1: An illustration of the binary map and OFG. (a) Source image. (b) Binary map at the 4th iteration. (c) Binary map at the 8th iteration. (d) OFG at the 10th iteration. (e) OFG at the 15th iteration.

by recording whether each neuron fires or not. The binary images effectively express the features of the image, such as texture, edge, and regional distribution; the binary maps are shown in Figures 1(b) and 1(c). After the global statistics of the binary images of the neurons, we get an oscillation frequency graph (OFG), which is defined in (6) and shown in Figures 1(d) and 1(e):

OFG_N(i,j) = \sum_{n=1}^{N} Y_{ij}(n),   (6)

where N denotes the number of iterations, Y_{ij} denotes the pulse output of the neuron (i,j), and n is the current iteration.
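A compact sketch of the simplified model (5) accumulating the OFG of (6); the parameter values and unit linking weights are illustrative only, not the paper's settings.

```python
import numpy as np

def s_pcnn_ofg(S, beta, n_iter=15, aL=1.0, aT=0.2, VL=0.2, VT=20.0):
    """Simplified PCNN of Eq. (5); accumulates the OFG of Eq. (6)."""
    S = S.astype(float)
    m, n = S.shape
    L = np.zeros_like(S)
    theta = np.ones_like(S)
    Y = np.zeros_like(S)
    ofg = np.zeros_like(S)
    for _ in range(n_iter):
        P = np.pad(Y, 1)                    # 8-neighbour coupling, unit weights
        nsum = sum(P[1 + di:1 + di + m, 1 + dj:1 + dj + n]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1) if di or dj)
        F = S                               # Eq. (5): F_ij(n) = S_ij
        L = VL * nsum + L * np.exp(-aL)
        U = F * (1.0 + beta * L)
        theta = np.exp(-aT) * theta + VT * Y
        Y = (U > theta).astype(float)
        ofg += Y                            # Eq. (6): count ignitions per pixel
    return ofg

ofg = s_pcnn_ofg(np.array([[1.0, 0.05]]), beta=0.5, n_iter=10)
print(ofg)   # the brighter pixel fires earlier and accumulates more counts
```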

2.4. Nonsubsampled Shearlet Transform. The shearlet transform was proposed by Labate et al. based on wavelets [9, 11]. In dimension n = 2, the affine system is

\psi_{AS}(\psi) = \{\psi_{j,l,k}(x) = |\det A|^{j/2} \psi(S^l A^j x - k) : l, j \in Z, k \in Z^2\},   (7)

where \psi is a collection of basis functions and satisfies \psi \in L^2(R^2), A represents the anisotropy matrix for multiscale partitions, S is a shear matrix for directional analysis, and j, l, and k are the scale, direction, and shift parameters, respectively. A and S are both 2 \times 2 invertible matrices, and |\det S| = 1. For all a > 0 and s \in R, the matrices A and S are given by

A = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.   (8)

Letting a = 4 and s = 1, (8) is further specialized to

A = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.   (9)

For all \xi = (\xi_1, \xi_2) \in \hat{R}^2 with \xi_1 \neq 0, the basis function \hat{\psi}^{(0)} of the shearlet transform can be given according to [11]:

\hat{\psi}^{(0)}(\xi) = \hat{\psi}^{(0)}(\xi_1, \xi_2) = \hat{\psi}_1(\xi_1)\, \hat{\psi}_2(\xi_2/\xi_1),   (10)

where \hat{\psi} is the Fourier transform of \psi, and \hat{\psi}_1 \in C^\infty(R) and \hat{\psi}_2 \in C^\infty(R) are both wavelets, with supp \hat{\psi}_1 \subset [-1/2, -1/16] \cup [1/16, 1/2] and supp \hat{\psi}_2 \subset [-1, 1]. It implies that \hat{\psi}^{(0)} \in C^\infty(R^2) and is compactly supported, with supp \hat{\psi}^{(0)} \subset [-1/2, 1/2]^2. In addition, we assume that

\sum_{j \ge 0} |\hat{\psi}_1(2^{-2j}\omega)|^2 = 1, \quad |\omega| \ge \frac{1}{8},   (11)

and, for each j \ge 0, \hat{\psi}_2 satisfies

\sum_{l=-2^j}^{2^j - 1} |\hat{\psi}_2(2^j \omega - l)|^2 = 1, \quad |\omega| \le 1.   (12)

From the conditions on the supports of \hat{\psi}_1 and \hat{\psi}_2, one obtains that the function \psi_{j,l,k} has the frequency support

supp \hat{\psi}^{(0)}_{j,l,k} \subset \{ (\xi_1, \xi_2) : \xi_1 \in [-2^{2j-1}, -2^{2j-4}] \cup [2^{2j-4}, 2^{2j-1}], \; |\xi_2/\xi_1 + l 2^{-j}| \le 2^{-j} \}.   (13)

That is, each element \psi_{j,l,k} is supported on a pair of trapeziform zones whose sizes all approximate 2^{2j} \times 2^{j}; this gives the tiling of the frequency plane by shearlets and the size of the frequency support of \psi_{j,l,k}.

In the NSST algorithm, in order to remove the influence of upsampling and subsampling, nonsubsampled Laplacian pyramid filters are used as a substitute in the shearlet transform, so it has excellent performance in terms of shift-invariance, multiscale, and multidirectional properties. The discretization process of NSST has two phases: multiscale factorization and multidirectional factorization. Nonsubsampled Laplacian pyramid filters complete the multiscale factorization. The first phase uses k levels of two-channel nonsubsampled filters to get one low frequency image and



Figure 2: An illustration of NSST. (a) Source image. (b) The approximate NSST coefficients. (c) Images of the detailed coefficients at level 1. (d) Images of the detailed coefficients at level 2.

k high frequency images. The multidirectional factorization in NSST is realized via an improved shearlet transform. These filters are formed by avoiding the subsampling, to satisfy the property of shift-invariance. The shearlet transform performs direction decomposition with l stages on the high frequency images from the nonsubsampled Laplacian pyramid at each level and produces 2^l directional subimages with the same size as the source image [16, 17]. Figure 2 shows the two-level NSST decomposition of an image.

3. The Proposed Algorithm

In this section, the proposed multifocus color image fusion algorithm is presented in detail. The framework of the proposed algorithm is shown in Figure 3. In this algorithm, the RGB color images are transformed into the HSV color space [18]; NSST is used to decompose the images, and PCNN is used to extract the features and fuse them using different rules. Besides, it is important to note that two kinds of PCNN models are used in the algorithm: adaptive S-PCNN is used to fuse the H component, and the original PCNN is used to fuse the high frequency coefficients of NSST.

3.1. RGB Color Image Transform to HSV. The RGB color image contains almost all basic colors that can be perceived by human vision; however, the correlation among the components is very strong, as shown in Figures 4(b), 4(c), and 4(d). This makes RGB color images difficult to process, since the color of the image changes whenever one component changes. The HSV image can be obtained by the RGB transform. The values of R, G, and B correspond to unique H, S, and V values, as the values of the H, S, and V components depend on the



Figure 3: The schematic diagram of the proposed image fusion method.

values of R, G, and B in the RGB color space. This color system is closer than the RGB color system to human experience and perception of color, as shown in Figures 4(e), 4(f), and 4(g).

The distribution of the H component is concentrated, and its pixel values generally are small; see Figure 4(h). So its edge is obvious, and S-PCNN is sensitive to the edge and regional distribution of the image. In this algorithm, the H component is partitioned into blocks and input into adaptive S-PCNN to get the OFG of the H component; we fuse the H component according to the OFG. The distributions of the S and V components are dispersive (see Figures 4(i) and 4(j)) and contain lots of details of the image at different gray scales, so the S and V components are decomposed into multiscale and multidirectional subbands by NSST to get different detailed information of the images; then, according to the characters of the new S and V components, they are fused using different rules.

3.2. H Component Fusion Using Adaptive S-PCNN. \beta is the linking strength of S-PCNN, and it is a key determinant of the ignition behavior of S-PCNN. Spatial frequency (SF) and average gradient (AG) are very important indicators of image definition, representing the quality of the image, so S-PCNN should be adaptively adjusted by SF and AG to make it work well. In the S-PCNN model, the H component is divided into several blocks, and then the SF and AG of each block are calculated and used as the linking strength \beta of S-PCNN, respectively, which adaptively adjusts \beta; see (14) and (15). The block images are input into S-PCNN to get two kinds of OFG according to the different \beta, which can effectively express the quality of the block images. The two types of indexes change with the content of the image and produce a certain range of changes. The index itself and its change range are suitable for adjusting the parameter \beta of S-PCNN, which affects the OFG of S-PCNN, so that the two types of OFG can reflect different details of the image. If only one type of OFG were used, pixels of different quality could have the same OFG value, so appropriate pixels could not be selected. However, the combination of the two types of OFG reflects the image information from different angles; this reduces the possibility of different pixels having the same OFG and makes the algorithm more effective.

SF and AG of the image are described in (16) to (19). Consider

\beta_1 = \eta_1 \cdot \mathrm{SF},   (14)

\beta_2 = \eta_2 \cdot \mathrm{AG},   (15)

where \eta_1 and \eta_2 are adjustment factors. Consider

\mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2},   (16)

\mathrm{RF} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} [F(i,j) - F(i,j-1)]^2},   (17)

\mathrm{CF} = \sqrt{\frac{1}{M \times N} \sum_{j=1}^{N} \sum_{i=2}^{M} [F(i,j) - F(i-1,j)]^2}.   (18)

SF is composed of the row frequency (RF) and the column frequency (CF), where M is the number of rows of the image, N is the number of columns, and F(i,j) is the gray level of image F at pixel (i,j):

\mathrm{AG} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\frac{1}{2} \left( (F(i,j) - F(i+1,j))^2 + (F(i,j) - F(i,j+1))^2 \right)}.   (19)
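Equations (16)-(19) in numpy as a sketch; note that (19) is evaluated here on the (M-1) x (N-1) grid where both forward differences exist, a common practical convention (the function names are ours):

```python
import numpy as np

def spatial_frequency(F):
    """Eqs. (16)-(18): SF from the row and column frequencies."""
    F = F.astype(float)
    M, N = F.shape
    rf2 = np.sum((F[:, 1:] - F[:, :-1]) ** 2) / (M * N)   # Eq. (17)
    cf2 = np.sum((F[1:, :] - F[:-1, :]) ** 2) / (M * N)   # Eq. (18)
    return float(np.sqrt(rf2 + cf2))                      # Eq. (16)

def average_gradient(F):
    """Eq. (19), restricted to pixels with both forward differences."""
    F = F.astype(float)
    dx = F[1:, :-1] - F[:-1, :-1]      # F(i,j) - F(i+1,j)
    dy = F[:-1, 1:] - F[:-1, :-1]      # F(i,j) - F(i,j+1)
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

ramp = np.tile(np.arange(4.0), (4, 1))   # each row is 0, 1, 2, 3
print(spatial_frequency(ramp))           # -> sqrt(0.75) ~ 0.866
print(average_gradient(ramp))            # -> sqrt(0.5)  ~ 0.707
```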



Figure 4: An illustration of HSV. (a) Source image. (b) R component. (c) G component. (d) B component. (e) H component. (f) S component. (g) V component. (h) Histogram of H. (i) Histogram of S. (j) Histogram of V.

3.3. S and V Components Fusion Using NSST and PCNN. The S and V components contain lots of details of the image, and their gray values are dispersive. The S and V components are decomposed by NSST to make the details easy to extract, yielding one low frequency subband and several high frequency subbands at multiple scales and directions. The larger values of the low frequency coefficients are used to build the new fused low frequency subband. The high frequency subbands contain abundant detailed information of the S and V components; these subbands are input into the original PCNN model to get the OFG, which gives the statistics of the ignition times of the pixels. By comparing the OFGs of the high frequency subbands from the different S and V components, we get the new fused high frequency subbands. At last, inverse NSST is performed on the new fused low frequency and high frequency subbands to get the fused S and V components.

3.4. Algorithm Steps. The processing of the proposed fusion algorithm is shown in Figure 3, and the steps are as follows.

Step 0. Given source images A and B.

Step 1. The color image in RGB color space is converted to HSV color space to get the three components H, S, and V.

Figure 5: Source images. (a) Source image A. (b) Source image B.

Step 2. PCNN is utilized to deal with the H component.

(a) The H components of images A_{ij} and B_{ij} are divided into subblocks, and the SF and AG of the subblocks are calculated using (16) to (19); the linking strengths β₁ and β₂ are obtained using (14) and (15). β₁ and β₂ act as the β value of S-PCNN, respectively.

(b) The subblocks are input into the S-PCNN model twice with the two different β values to get two OFGs of the H components.

(c) Get the average of the two OFGs using (20); then F_{ij} of the fused H component is decided by (21):

\[ \mathrm{AOFG} = \frac{\mathrm{OFG}_{\beta_{1}} + \mathrm{OFG}_{\beta_{2}}}{2} \tag{20} \]

\[ F_{ij} = \begin{cases} A_{ij}, & \mathrm{AOFG}_{A_{ij}} > \mathrm{AOFG}_{B_{ij}} \\ B_{ij}, & \mathrm{AOFG}_{A_{ij}} < \mathrm{AOFG}_{B_{ij}} \\ \left(A_{ij} + B_{ij}\right)/2, & \mathrm{AOFG}_{A_{ij}} = \mathrm{AOFG}_{B_{ij}} \end{cases} \tag{21} \]

where AOFG_{A_{ij}} and AOFG_{B_{ij}} are the average OFGs of images A and B.
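Steps (a)-(c) reduce, per subblock, to averaging the two OFG responses and keeping the block that fired more. A minimal sketch (plain Python; the scalar OFG arguments stand for a block's summed firing counts, a simplification of the per-pixel OFG):

```python
def fuse_block(A_ij, B_ij, ofg_a1, ofg_a2, ofg_b1, ofg_b2):
    """Fuse one pair of H-component subblocks by the AOFG rule of (20)-(21).

    ofg_*1 / ofg_*2 are a block's OFG responses under beta_1 (SF-driven)
    and beta_2 (AG-driven); A_ij and B_ij are the blocks' pixel lists.
    """
    aofg_a = (ofg_a1 + ofg_a2) / 2.0   # (20) for image A
    aofg_b = (ofg_b1 + ofg_b2) / 2.0   # (20) for image B
    if aofg_a > aofg_b:                # (21): keep the block that fired more
        return A_ij
    if aofg_a < aofg_b:
        return B_ij
    return [(a + b) / 2.0 for a, b in zip(A_ij, B_ij)]  # tie: average
```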

Step 3. Perform decomposition of the source S and V components of images A_{ij} and B_{ij} using NSST to obtain the low-pass subband coefficients and the high-pass directional subband coefficients; different rules are utilized to deal with the S and V components.

(a) The fused low-pass subband coefficients FC_{ij} are decided by

\[ FC_{ij} = \begin{cases} C_{A_{ij}}, & C_{A_{ij}} > C_{B_{ij}} \\ C_{B_{ij}}, & C_{A_{ij}} < C_{B_{ij}} \\ \left(C_{A_{ij}} + C_{B_{ij}}\right)/2, & C_{A_{ij}} = C_{B_{ij}} \end{cases} \tag{22} \]

where C_{A_{ij}} and C_{B_{ij}} are the low-pass subband coefficients of images A and B.

(b) The high-pass directional subband coefficients are input into the PCNN model to get the corresponding OFGs (OFG_{A_{ij}}, OFG_{B_{ij}}) according to the statistics of the ignition times of the pixels; the fusion rule for the S and V components is decided by

\[ F_{ij} = \begin{cases} A_{ij}, & \mathrm{OFG}_{A_{ij}} > \mathrm{OFG}_{B_{ij}} \\ B_{ij}, & \mathrm{OFG}_{A_{ij}} < \mathrm{OFG}_{B_{ij}} \\ \left(A_{ij} + B_{ij}\right)/2, & \mathrm{OFG}_{A_{ij}} = \mathrm{OFG}_{B_{ij}} \end{cases} \tag{23} \]
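Rules (22) and (23) share one pattern: compare a key map elementwise, keep the value whose key is larger, and average on ties. A sketch of that shared rule (plain Python; for (22) the coefficients serve as their own keys, for (23) the OFG counts are the keys):

```python
def choose_max(a, b, key_a, key_b):
    """Elementwise rule shared by (22) and (23): keep the value whose key
    (the coefficient itself in (22), the OFG count in (23)) is larger;
    average on ties. All arguments are equally sized lists of rows."""
    out = []
    for av_row, bv_row, ka_row, kb_row in zip(a, b, key_a, key_b):
        row = []
        for av, bv, ka, kb in zip(av_row, bv_row, ka_row, kb_row):
            if ka > kb:
                row.append(av)
            elif ka < kb:
                row.append(bv)
            else:
                row.append((av + bv) / 2.0)
        out.append(row)
    return out
```

Under this reading, `choose_max(C_A, C_B, C_A, C_B)` implements (22), while `choose_max(H_A, H_B, OFG_A, OFG_B)` implements (23) on the high-pass coefficients.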

Step 4. Reconstruct the fused HSV image by an inverse NSST.

Step 5. Obtain the fused RGB color image by the inverse HSV transform.

4. Experimental Results and Analysis

To verify the validity of the algorithm presented in this paper, we test several groups of experimental color images with different focus positions. The first group of color images is the cups, shown in Figure 5; image A focuses on the left, image B focuses on the right, and there are many words as details.

4.1. Evaluation Index System. In order to verify the effectiveness of this method, we consider the quantitative assessment of the fused images. For the evaluation of the proposed fusion method, we have considered several common fusion performance metrics, defined in this section. The final fused color images are composed of three components, R, G, and B. Each component can be regarded as a grayscale image, and the quality of the fused color image strongly depends on the grayscale image quality. In this paper, we take the average of the three components' evaluation indexes as the final color image evaluation index, which is the basic indicator of image


Table 1: Fusion quality using different methods.

Method            | SF      | AG     | EN     | MV       | SD      | MI     | Q^ABF
Weighted          | 12.7859 | 4.2004 | 7.4416 | 166.7102 | 65.1893 | 5.4941 | 0.5449
PCA               | 13.0306 | 4.2311 | 7.4408 | 166.5136 | 65.3166 | 5.5144 | 0.5525
PCNN              | 16.8538 | 4.9683 | 7.4449 | 167.3324 | 65.8106 | 5.5454 | 0.5986
PCNN + WT         | 17.7381 | 5.2384 | 7.2349 | 143.0606 | 57.6579 | 5.1952 | 0.6696
PCNN + LP         | 18.8702 | 5.4816 | 7.3187 | 150.6954 | 61.1962 | 5.2172 | 0.6795
This paper (HSV)  | 20.3181 | 5.8530 | 7.4487 | 175.4014 | 61.6361 | 5.6377 | 0.6886

Table 2: Fusion quality under different color spaces.

Color space | SF      | AG     | EN     | MV       | SD      | MI     | Q^ABF
NTSC        | 19.1768 | 5.5980 | 7.4530 | 171.1844 | 66.1808 | 5.8862 | 0.6547
YUV         | 19.1937 | 5.5983 | 7.4382 | 171.1177 | 66.2103 | 5.9109 | 0.6555
YCbCr       | 19.1805 | 5.6076 | 7.4743 | 171.6032 | 65.4123 | 5.7668 | 0.6496
HSV         | 20.3181 | 5.8530 | 7.4487 | 175.4014 | 61.6361 | 5.6377 | 0.6886
HIS         | 20.4130 | 5.8660 | 7.3967 | 172.0959 | 68.2812 | 5.8896 | 0.6989
LAB         | 20.5100 | 5.9280 | 7.4282 | 172.4920 | 67.6792 | 6.0364 | 0.7018

fusion quality. Tables 1 and 2 show the evaluation of the fused image quality with spatial frequency (SF), average gradient (AG), entropy (EN), mean value (MV), standard deviation (SD), and mutual information (MI); Q^ABF indicates how much edge information is preserved in the fused image [19-23].

The spatial frequency (SF) is defined by (16) to (18), and the average gradient (AG) is defined by (19).

4.1.1. Entropy. The entropy (EN) of an image is defined by

\[ H = -\sum_{i=0}^{L-1} P_{i} \log_{2} P_{i} \tag{24} \]

where P_i is the probability of gray level i in the image and L is the number of gray levels of the image, from 0 to 255.
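A sketch of (24) in plain Python, assuming 8-bit gray levels; zero-probability levels are skipped, since P log P tends to 0 as P tends to 0:

```python
import math

def entropy(F, levels=256):
    """Shannon entropy of a grayscale image, per (24).

    F is a list of rows of integer gray levels in [0, levels - 1]."""
    counts = [0] * levels
    total = 0
    for row in F:
        for v in row:
            counts[v] += 1
            total += 1
    h = 0.0
    for c in counts:
        if c:                  # skip P_i = 0 bins
            p = c / total
            h -= p * math.log2(p)
    return h
```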

4.1.2. Mean Value. The mean value (MV) of the image is defined by

\[ \mu = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} F(i,j) \tag{25} \]

where F(i, j) is the pixel value of the fused image at position (i, j). MV represents the average brightness of the whole image.

4.1.3. Standard Deviation. The standard deviation (SD) of an image is defined by

\[ \mathrm{SD} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(F(i,j) - \mu\right)^{2}} \tag{26} \]

where F(i, j) is the pixel value of the fused image at position (i, j) and μ is the mean value of the image. The larger the SD is, the better the result is.
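Equations (25) and (26) are the ordinary mean and population standard deviation over all pixels; a plain-Python sketch:

```python
import math

def mean_value(F):
    """MV per (25): the average brightness of the image."""
    M, N = len(F), len(F[0])
    return sum(sum(row) for row in F) / (M * N)

def standard_deviation(F):
    """SD per (26): the spread of gray levels around the mean."""
    M, N = len(F), len(F[0])
    mu = mean_value(F)
    var = sum((v - mu) ** 2 for row in F for v in row) / (M * N)
    return math.sqrt(var)
```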

4.1.4. Mutual Information. The mutual information (MI) between the source images A and B and the fused image F is defined by

\[ \mathrm{MI} = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} \sum_{k=0}^{L-1} P_{ABF}(i,j,k) \log_{2} \frac{P_{ABF}(i,j,k)}{P_{AB}(i,j)\, P_{F}(k)} \tag{27} \]

where P_{ABF}(i, j, k) is the normalized joint grey-level histogram of images A, B, and F; P_{AB}(i, j) is the normalized joint grey-level histogram of images A and B; P_F(k) is the normalized grey-level histogram of F; L is the number of gray levels; and (i, j, k) represent the pixel values of images A, B, and F, respectively.
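A direct way to evaluate (27) is to accumulate the joint histogram of pixel triples and the two marginals it needs; the following plain-Python sketch does this with Counter (fine for small images, since only nonzero histogram bins contribute):

```python
import math
from collections import Counter

def mutual_information(A, B, F):
    """MI of the fused image F with sources A and B, per (27).

    Builds the normalized joint histogram P_ABF(i, j, k) over pixel
    triples plus the marginals P_AB(i, j) and P_F(k)."""
    triples = [(a, b, f) for ra, rb, rf in zip(A, B, F)
               for a, b, f in zip(ra, rb, rf)]
    n = len(triples)
    p_abf = Counter(triples)
    p_ab = Counter((a, b) for a, b, _ in triples)
    p_f = Counter(f for _, _, f in triples)
    mi = 0.0
    for (a, b, f), c in p_abf.items():
        mi += (c / n) * math.log2((c / n) / ((p_ab[(a, b)] / n) * (p_f[f] / n)))
    return mi
```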

4.1.5. Edge-Based Similarity Measure. The Q^ABF metric evaluates the sum of the edge information preservation values and is defined by

\[ Q^{ABF} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( Q^{AF}(i,j)\, \omega_{A}(i,j) + Q^{BF}(i,j)\, \omega_{B}(i,j) \right)}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( \omega_{A}(i,j) + \omega_{B}(i,j) \right)} \tag{28} \]

where Q^{AF}(i, j) = Q^{AF}_β(i, j) Q^{AF}_α(i, j), with Q^{AF}_β(i, j) and Q^{AF}_α(i, j) the edge strength and orientation preservation values, respectively; Q^{BF}(i, j) is defined analogously to Q^{AF}(i, j); and ω_A(i, j) and ω_B(i, j) are weights measuring the importance of Q^{AF}(i, j) and Q^{BF}(i, j), respectively. The dynamic range of Q^ABF is [0, 1], and it should be as close to 1 as possible; the best fusion gives Q^ABF = 1. In addition, (i, j) represents the pixel location, and M and N are the sizes of the images.
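Computing the preservation maps Q^AF and Q^BF requires the edge strength and orientation comparisons described above, which are omitted here; given those maps, the aggregation in (28) is just a weighted average. A sketch of that final step only (plain Python; the input maps are assumed precomputed):

```python
def q_abf(q_af, q_bf, w_a, w_b):
    """Aggregate per-pixel edge-preservation maps into Q^ABF, per (28).

    q_af, q_bf are the (assumed precomputed) preservation maps Q^AF, Q^BF;
    w_a, w_b are the corresponding importance weights. All arguments are
    equally sized lists of rows."""
    num = 0.0
    den = 0.0
    for qa_row, qb_row, wa_row, wb_row in zip(q_af, q_bf, w_a, w_b):
        for qa, qb, wa, wb in zip(qa_row, qb_row, wa_row, wb_row):
            num += qa * wa + qb * wb
            den += wa + wb
    return num / den
```

Because the weights appear in both numerator and denominator, perfect preservation (both maps identically 1) yields Q^ABF = 1 regardless of the weights.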

Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One. For comparison with the fusion scheme proposed in this paper, several common fusion algorithms are tested: taking the average of the source images pixel by pixel (average), principal component analysis (PCA), pulse coupled neural network (PCNN), pulse coupled neural network with Laplacian pyramid transform (PCNN + LP), and pulse coupled neural network with discrete wavelet transform (PCNN + WT). The fusion images using the different methods are shown in Figure 6. According to the contrast experiments, the fusion image of this paper is better than the others (see Figure 6). The algorithm of this paper does well in extracting the characteristics of the source images, and the fused image is closer to natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the method in this paper is an effective method.

From Table 1, the fusion image of this paper contains much more information: the SF, AG, MV, and Q^ABF of this paper are larger than those of the other methods, EN and MI are slightly better, and only the SD is lower than that of some of the other methods.

4.3. Experiment Two. Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in HSV color space, so all of them can use the proposed algorithm for color image fusion. In order to determine the color space in which the method works best, we carry out the following experiments. The fusion images under the different color spaces are shown in Figure 7; all achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.

Table 2 indicates that the fusion image of this paper in LAB color space contains more information: its SF, AG, MI, and Q^ABF are better than the others'. Overall, the evaluation indexes in HSV, HIS, and LAB are better than those in the other color spaces. In practical applications, we suggest that color image fusion with the proposed algorithm focus on the HSV, HIS, and LAB color spaces.

4.4. Experiment Three. More experimental results on coastal images in HSV color space are shown in Figure 8; the source images have different focus positions, and there are many textures in the source images.

It can be seen in Figure 8 that the edges of the fusion images are clear and most of the textures in the source images are retained; the details are also well preserved. This

Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

method can extract the main features from the source images; it shows that the method in this paper also achieves effective results on these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper is obviously better than the traditional methods. Compared with the other methods, this method shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture character of PCNN's neurons causes surrounding neurons of similar brightness to be captured into ignition; through this character, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift-invariance, multiscale, and multidirectional properties. In the proposed algorithm, the RGB color image is converted into an HSV color image; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and are fused by different rules. The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method shows better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better results in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions on this paper. The authors wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing the coastal images. The authors' work is supported by the National Natural Science Foundation of China (nos. 61365001 and 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119-126, 2015.
[2] Y. Xia and S. R. Qu, "Color image fusion framework based on improved (2D)²PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).
[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," Journal of Visual Communication and Image Representation, vol. 29, pp. 138-146, 2015.
[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114-123, 2010.
[5] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291-1299, 2004.
[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.
[7] E. J. Candès and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105-120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.
[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.
[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254-262, San Diego, Calif, USA, September 2005.
[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.
[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25-46, 2008.
[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103-112, 2014.
[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.
[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965-3974, 2014.
[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247-6255, 2014.
[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103-112, 2014.
[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.
[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704-716, 2014.
[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268-274, 2015.
[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198-214, 2016.
[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.
[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169-176, 2001.
[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158-173, 2015.


Figure 1: An illustration of the binary map and OFG. (a) Source image. (b) Binary map at the 4th iteration. (c) Binary map at the 8th iteration. (d) OFG at the 10th iteration. (e) OFG at the 15th iteration.

by recording whether or not each neuron fires. The binary images effectively express features of the image such as texture, edge, and regional distribution; the binary maps are shown in Figures 1(b) and 1(c). After global statistics over the binary images of the neurons, we get an oscillation frequency graph (OFG), defined in (6) and shown in Figures 1(d) and 1(e):

\[ \mathrm{OFG}_{N}(i,j) = \sum_{n=1}^{N} Y_{ij}(n) \tag{6} \]

where N denotes the number of iterations, Y_{ij} denotes the pulse output of the neuron at (i, j), and n is the current iteration.
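A toy illustration of (6): the OFG is simply the per-neuron firing count over N iterations. The neuron model below is a bare integrate-and-fire stand-in (the real S-PCNN adds linking inputs and a decaying dynamic threshold), but it shows the key property that brighter pixels accumulate higher OFG values:

```python
def ofg(stimulus, n_iter=10, threshold=1.0):
    """Accumulate an oscillation frequency graph per (6).

    stimulus: list of rows of normalized intensities in [0, 1].
    Each iteration a neuron fires (Y_ij(n) = 1) when its accumulated
    internal activity crosses the threshold, then resets; this is a toy
    stand-in for the full S-PCNN feeding/linking dynamics."""
    M, N = len(stimulus), len(stimulus[0])
    U = [[0.0] * N for _ in range(M)]      # internal activity
    ofg_map = [[0] * N for _ in range(M)]  # firing counts, i.e. the OFG
    for _ in range(n_iter):
        for i in range(M):
            for j in range(N):
                U[i][j] += stimulus[i][j]
                if U[i][j] >= threshold:   # the neuron fires: Y_ij(n) = 1
                    ofg_map[i][j] += 1
                    U[i][j] = 0.0          # reset after firing
    return ofg_map
```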

2.4. Nonsubsampled Shearlet Transform. The shearlet transform was proposed on the basis of wavelets by Labate et al. [9, 11]. In dimension n = 2, the affine system is

\[ \psi_{AS}(\psi) = \left\{ \psi_{j,l,k}(x) = \left|\det A\right|^{j/2} \psi\!\left(S^{l} A^{j} x - k\right) :\; l, j \in \mathbb{Z},\; k \in \mathbb{Z}^{2} \right\} \tag{7} \]

where ψ is a collection of basis functions satisfying ψ ∈ L²(R²); A represents the anisotropy matrix for multiscale partitions; S is a shear matrix for directional analysis; and j, l, and k are the scale, direction, and shift parameters, respectively. A and S are both 2 × 2 invertible matrices with |det S| = 1. For all a > 0 and s ∈ R, the matrices A and S are given by

\[ A = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}. \tag{8} \]

Let a = 4 and s = 1; then (8) is further specialized to

\[ A = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. \tag{9} \]
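A quick numerical check of (9), using nothing beyond the definitions: A applies anisotropic (parabolic) scaling, S shears with det S = 1, so compositions such as S A remain invertible and the area scaling comes only from A. Small plain-Python 2 × 2 helpers:

```python
def matmul2(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def det2(X):
    """Determinant of a 2x2 matrix."""
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[4, 0], [0, 2]]   # anisotropic scaling matrix of (9)
S = [[1, 1], [0, 1]]   # shear matrix of (9), det S = 1

SA = matmul2(S, A)     # one scale step followed by one shear
```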

For all ξ = (ξ₁, ξ₂) ∈ R̂², ξ₁ ≠ 0, the basic function ψ̂^(0) of the shearlet transform can be given according to [11]:

\[ \hat{\psi}^{(0)}(\xi) = \hat{\psi}^{(0)}(\xi_{1}, \xi_{2}) = \hat{\psi}_{1}(\xi_{1})\, \hat{\psi}_{2}\!\left(\frac{\xi_{2}}{\xi_{1}}\right) \tag{10} \]

where ψ̂ is the Fourier transform of ψ, and ψ̂₁ ∈ C^∞(R) and ψ̂₂ ∈ C^∞(R) are both wavelets, with supp ψ̂₁ ⊂ [−1/2, −1/16] ∪ [1/16, 1/2] and supp ψ̂₂ ⊂ [−1, 1]. This implies that ψ̂^(0) ∈ C^∞(R̂²) and is compactly supported, with supp ψ̂^(0) ⊂ [−1/2, 1/2]². In addition, we assume that

\[ \sum_{j \ge 0} \left| \hat{\psi}_{1}\!\left(2^{-2j}\omega\right) \right|^{2} = 1, \quad |\omega| \ge \frac{1}{8} \tag{11} \]

and that, for each j ≥ 0, ψ̂₂ satisfies

\[ \sum_{l=-2^{j}}^{2^{j}-1} \left| \hat{\psi}_{2}\!\left(2^{j}\omega - l\right) \right|^{2} = 1, \quad |\omega| \le 1. \tag{12} \]

From the conditions on the supports of ψ̂₁ and ψ̂₂, one can obtain that the function ψ_{j,l,k} has the frequency support

\[ \operatorname{supp} \hat{\psi}^{(0)}_{j,l,k} \subset \left\{ (\xi_{1}, \xi_{2}) :\; \xi_{1} \in \left[-2^{2j-1}, -2^{2j-4}\right] \cup \left[2^{2j-4}, 2^{2j-1}\right],\; \left| \frac{\xi_{2}}{\xi_{1}} + l\,2^{-j} \right| \le 2^{-j} \right\}. \tag{13} \]

That is, each element ψ̂_{j,l,k} is supported on a pair of trapezoidal zones whose sizes all approximate 2^{2j} × 2^{j}; these supports form the tiling of the frequency plane by shearlets.

In the NSST algorithm, in order to remove the influence of upsampling and subsampling, nonsubsampled Laplacian pyramid filters are used as a substitute in the shearlet transform, so it has excellent performance in terms of shift-invariance, multiscale, and multidirectional properties. The discretization process of NSST has two phases: multiscale factorization and multidirectional factorization. Nonsubsampled Laplacian pyramid filters complete the multiscale factorization. The first phase uses k stages of two-channel nonsubsampled filters to get one low-frequency image and

Figure 2: An illustration of NSST. (a) Source image. (b) The approximate NSST coefficients. (c) Images of the detailed coefficients at level 1. (d) Images of the detailed coefficients at level 2.

k high-frequency images. The multidirectional factorization in NSST is realized via the improved shearlet transform. These filters are formed by avoiding subsampling so as to satisfy the property of shift-invariance. The shearlet transform performs direction decomposition with l stages on the high-frequency images from the nonsubsampled Laplacian pyramid at each level and produces 2^l directional subimages with the same size as the source image [16, 17]. Figure 2 shows the two-level NSST decomposition of an image.
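The structural point of the nonsubsampled pyramid, namely that every subband keeps the source size and the decomposition inverts by simple addition, can be sketched in a few lines. The 3 × 3 box blur below is only a stand-in for the actual designed pyramid filters:

```python
def blur3(F):
    """3x3 box blur with edge replication: a stand-in for the pyramid's
    low-pass filter (no downsampling, so the output keeps the input size)."""
    M, N = len(F), len(F[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), M - 1)  # replicate edges
                    jj = min(max(j + dj, 0), N - 1)
                    s += F[ii][jj]
            out[i][j] = s / 9.0
    return out

def nsp_level(F):
    """One nonsubsampled Laplacian pyramid level: a low-frequency image
    plus a same-size high-frequency residual, so F = low + high exactly."""
    low = blur3(F)
    high = [[F[i][j] - low[i][j] for j in range(len(F[0]))]
            for i in range(len(F))]
    return low, high
```

Applying `nsp_level` again to the returned low-frequency image yields the next coarser scale, mirroring the k-stage cascade described above.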

3. The Proposed Algorithm

In this section, the proposed multifocus color image fusion algorithm is presented in detail. The framework of the proposed algorithm is shown in Figure 3. In this algorithm, the RGB color images are transformed into HSV color space [18], NSST is used to decompose the images, and PCNN is used to extract the features and fuse them using different rules. Besides, it is important to note that two kinds of PCNN model are used in the algorithm: adaptive S-PCNN is used to fuse the H component, and the original PCNN is used to fuse the high-frequency coefficients of NSST.

3.1. RGB Color Image Transform to HSV. An RGB color image contains almost all basic colors that can be perceived by human vision; however, the correlation among its components is very strong, as shown in Figures 4(b), 4(c), and 4(d). This makes RGB color images difficult to process, since the color of the image changes whenever one component changes. The HSV image can be obtained from the RGB image by the HSV transform. The values of R, G, and B correspond to unique H, S, and V values, as the values of the H, S, and V components depend on the

Figure 3: The schematic diagram of the proposed image fusion method.

values of R, G, and B in RGB color space. This color system is closer than the RGB color system to the human experience and perception of color, as shown in Figures 4(e), 4(f), and 4(g).
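The RGB-to-HSV conversion used in Step 1 of the algorithm (and its inverse, used in Step 5) is available per pixel in Python's standard library; a minimal sketch:

```python
import colorsys

def rgb_image_to_hsv(rgb):
    """Convert an RGB image (rows of (r, g, b) tuples, values in [0, 1])
    into separate H, S, and V component images, as used by the algorithm."""
    H, S, V = [], [], []
    for row in rgb:
        h_row, s_row, v_row = [], [], []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            h_row.append(h)
            s_row.append(s)
            v_row.append(v)
        H.append(h_row)
        S.append(s_row)
        V.append(v_row)
    return H, S, V
```

`colorsys.hsv_to_rgb` inverts the per-pixel mapping, which is exactly the inverse HSV transform needed after fusion.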

The distribution of the H component is concentrated and its pixel values are generally small (see Figure 4(h)), so its edges are obvious, and S-PCNN is sensitive to the edges and regional distribution of an image. In this algorithm, the H component is partitioned into blocks and input into the adaptive S-PCNN to get the OFG of the H component; we fuse the H component according to the OFG. The distributions of the S and V components are dispersive (see Figures 4(i) and 4(j)) and contain many details of the image at different gray levels, so the S and V components are decomposed into multiscale and multidirectional subbands by NSST to get the different detailed information of the images; then, according to the characteristics of the new S and V components, they are fused using different rules.

3.2. H Component Fusion Using Adaptive S-PCNN. β is the linking strength of S-PCNN, and it is a key determinant of the ignition behavior of S-PCNN. Spatial frequency (SF) and average gradient (AG) are very important indicators of image definition, representing the quality of the image, so S-PCNN should be adaptively adjusted by SF and AG to make it work well. In the S-PCNN model, the H component is divided into several blocks, and then the SF and AG of the blocks are calculated and used as the linking strength β of S-PCNN, respectively; in this way β is adjusted adaptively (see (14) and (15)). The block images are input into S-PCNN to get two kinds of OFG according to the two different β values, which effectively express the quality of the block images. The two types of indexes change with the content of the image over a certain range. The index itself and its range of change are suitable for adjusting the parameter β of S-PCNN, which affects the OFG of S-PCNN, so that the two types of OFG reflect different details of the image. If only one type of OFG were used, pixels of different quality could receive the same OFG value, and appropriate pixels could not be selected. The combination of the two types of OFG reflects the image information from different angles, which reduces the possibility of different pixels having the same OFG and makes the algorithm more effective.

SF and AG of the image are described in (16) to (19):

\[ \beta_{1} = \eta_{1} \cdot \mathrm{SF} \tag{14} \]

\[ \beta_{2} = \eta_{2} \cdot \mathrm{AG} \tag{15} \]

where η₁ and η₂ are adjustment factors. Consider

\[ \mathrm{SF} = \sqrt{\mathrm{RF}^{2} + \mathrm{CF}^{2}} \tag{16} \]

\[ \mathrm{RF} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left[F(i,j) - F(i,j-1)\right]^{2}} \tag{17} \]

\[ \mathrm{CF} = \sqrt{\frac{1}{M \times N} \sum_{j=1}^{N} \sum_{i=2}^{M} \left[F(i,j) - F(i-1,j)\right]^{2}} \tag{18} \]
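Putting (14)-(19) together for one subblock: compute SF and AG, then scale them by the adjustment factors. A plain-Python sketch (the η defaults of 1.0 are arbitrary illustrative choices; the AG sum runs over the interior so the differences stay in bounds):

```python
import math

def linking_strengths(block, eta1=1.0, eta2=1.0):
    """Adaptive linking strengths for one H-component subblock, per
    (14)-(15): beta_1 = eta_1 * SF and beta_2 = eta_2 * AG."""
    M, N = len(block), len(block[0])
    # (16)-(18): spatial frequency from row and column differences
    rf2 = sum((block[i][j] - block[i][j - 1]) ** 2
              for i in range(M) for j in range(1, N)) / (M * N)
    cf2 = sum((block[i][j] - block[i - 1][j]) ** 2
              for i in range(1, M) for j in range(N)) / (M * N)
    sf = math.sqrt(rf2 + cf2)
    # (19): average gradient over the (M-1) x (N-1) interior
    ag = sum(math.sqrt(((block[i][j] - block[i + 1][j]) ** 2
                        + (block[i][j] - block[i][j + 1]) ** 2) / 2.0)
             for i in range(M - 1) for j in range(N - 1)) / ((M - 1) * (N - 1))
    return eta1 * sf, eta2 * ag   # (14), (15)
```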

SF is composed of row frequency (RF) and columnfrequency (CF) where 119872 is the row of the image 119873 is thecolumn of the image and 119865(119894 119895) is grey level of the image 119865

at pixel (119894 119895)

AG =1

119872 times 119873

sdot

119872

sum

119894=1

119873

sum

119895=1

radic1

2((119865 (119894 119895) minus 119865 (119894 + 1 119895))

2+ (119865 (119894 119895) minus 119865 (119894 119895 + 1))

2)

(19)

6 Journal of Sensors

(a) (b) (c) (d) (e) (f)

(g)

0 01 02 03 04 05 06 07 08 09 10

50

100

150

200

250

300

350

400

450

500

Gray value

50

100

150

200

250

300

350

400

450

500

Num

ber

(h)

0 01 02 03 04 05 06 07 08 09 10

50

100

150

200

250

300

350

400

450

Gray value

Num

ber

50

100

150

200

250

300

350

400

450

500

(i)

Num

ber

0 01 02 03 04 05 06 07 08 09 10

50

100

150

200

250

300

350

400

Gray value

50

100

150

200

250

300

350

400

450

500

(j)

Figure 4 An illustration of HSV (a) Source image (b) R component (c) G component (d) B component (e) H component (f) S component(g) V component (h) Histogram of H (i) Histogram of S (j) Histogram of V

33 S and V Components Fusion Using NSST and PCNNS and V components contain lots of details of the imageand the gray values of S and V components are dispersiverespectively S and V components are decomposed by NSSTto make it easy to extract and then we will get one lowfrequency subband and several high frequency subbandsin multiscale and multidirection The higher values of thelow frequency coefficients are used into the new fused lowfrequency subband The high frequency subbands containabundant detailed information of the S and V componentsthese components are input into original PCNN model toget OFG which shows the statistics of the ignition times ofthe pixels Compared with the OFG of the high frequency

subbands from different S and V components we will get thenew fused high frequency subbands At last inverse NSST isperformed to get the fused S and V components according tothe new fused low frequency subbands and high frequencysubbands

34 Algorithm Steps The proposed fusion algorithm pro-cesses are shown in Figure 3 and the steps are as follows

Step 0 Given source images 119860 and 119861

Step 1 The color image in RGB color space is converted toHSV color space to get three components H S and V

Journal of Sensors 7

(a) (b)

Figure 5 Source images (a) Source image 119860 (b) Source image 119861

Step 2. PCNN is utilized to deal with the H component.

(a) The H components of images A and B are divided into subblocks, and then the SF and AG of the subblocks are calculated using (16) to (19); the linking strengths β1 and β2 are obtained using (14) and (15). β1 and β2 act as the β value of S-PCNN, respectively.

(b) The subblocks are input into the S-PCNN model twice with the different β to get two OFGs of the H components.

(c) Get the average value of the two OFGs using (20), and then F_ij of the fused H component can be decided by (21):

AOFG = (OFG_β1 + OFG_β2) / 2,  (20)

F_ij = A_ij,              if AOFG_A(i,j) > AOFG_B(i,j);
F_ij = B_ij,              if AOFG_A(i,j) < AOFG_B(i,j);   (21)
F_ij = (A_ij + B_ij) / 2, if AOFG_A(i,j) = AOFG_B(i,j),

where AOFG_A(i,j) and AOFG_B(i,j) are the average OFGs of images A and B.
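Assuming the two averaged OFG maps of (20) have already been computed for the corresponding blocks of A and B, the selection rule of (21) can be sketched as follows (the function name is ours):

```python
import numpy as np

def fuse_by_aofg(A, B, aofg_a, aofg_b):
    """Fusion rule of (21): take the source whose averaged OFG is larger,
    and average the two sources where the OFGs tie."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    F = np.where(aofg_a > aofg_b, A, B)   # larger-OFG source wins
    tie = aofg_a == aofg_b
    F[tie] = (A[tie] + B[tie]) / 2.0      # average on exact ties
    return F
```

The same three-branch pattern reappears in (22) and (23), only with different activity measures deciding the comparison.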

Step 3. Perform decomposition of the S and V components of images A and B using NSST to obtain the low-pass subband coefficients and the high-pass directional subband coefficients; different rules are utilized to deal with the S and V components.

(a) The fused low-pass subband coefficients FC_ij can be decided by

FC_ij = C_A(i,j),                  if C_A(i,j) > C_B(i,j);
FC_ij = C_B(i,j),                  if C_A(i,j) < C_B(i,j);   (22)
FC_ij = (C_A(i,j) + C_B(i,j)) / 2, if C_A(i,j) = C_B(i,j),

where C_A(i,j) and C_B(i,j) are the low-pass subband coefficients of images A and B.
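Since the rule in (22) keeps the larger coefficient and averages ties, and the average of two equal values is just that value, it reduces to an elementwise maximum; a one-line sketch:

```python
import numpy as np

def fuse_lowpass(CA, CB):
    # (22): pick the larger low-pass coefficient; the tie branch
    # (average of two equal values) is the value itself, so the whole
    # rule is an elementwise maximum.
    return np.maximum(CA, CB)
```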

(b) The high-pass directional subband coefficients are input into the PCNN model to get the corresponding OFGs (OFG_A(i,j), OFG_B(i,j)) according to the statistics of the ignition times of the pixels; the fusion rule of the S and V components can be decided by

F_ij = A_ij,              if OFG_A(i,j) > OFG_B(i,j);
F_ij = B_ij,              if OFG_A(i,j) < OFG_B(i,j);   (23)
F_ij = (A_ij + B_ij) / 2, if OFG_A(i,j) = OFG_B(i,j).

Step 4. Reconstruct the fused HSV image by an inverse NSST.

Step 5. Obtain the fused RGB color image by an inverse HSV transform.

4. Experimental Results and Analysis

To verify the validity of the algorithm presented in this paper, we test several groups of color images with different focus positions. The first group of color images is the cups, shown in Figure 5: image A focuses on the left, image B focuses on the right, and there are many words as details.

4.1. Evaluation Index System. In order to verify the effectiveness of this method, we consider quantitative assessment of the fused images and adopt several common fusion performance metrics, defined in this section. The final fused color image is composed of three components, R, G, and B. Each component can be regarded as a grayscale image, and the quality of the fused color image strongly depends on the quality of these grayscale images. In this paper, we take the average of the three components' evaluation indexes as the final color image evaluation index, which gives the basic indicators of image fusion quality. Tables 1 and 2 show the evaluation of the fused image quality with spatial frequency (SF), average gradient (AG), entropy (EN), mean value (MV), standard deviation (SD), and mutual information (MI); Q^ABF indicates how much edge information is preserved in the fused image [19-23].

Table 1: Fusion quality using different methods.

Method             SF       AG      EN      MV        SD       MI      Q^ABF
Weighted           12.7859  4.2004  7.4416  166.7102  65.1893  5.4941  0.5449
PCA                13.0306  4.2311  7.4408  166.5136  65.3166  5.5144  0.5525
PCNN               16.8538  4.9683  7.4449  167.3324  65.8106  5.5454  0.5986
PCNN + WT          17.7381  5.2384  7.2349  143.0606  57.6579  5.1952  0.6696
PCNN + LP          18.8702  5.4816  7.3187  150.6954  61.1962  5.2172  0.6795
This paper (HSV)   20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886

Table 2: Fusion quality under different color spaces.

Color space  SF       AG      EN      MV        SD       MI      Q^ABF
NTSC         19.1768  5.5980  7.4530  171.1844  66.1808  5.8862  0.6547
YUV          19.1937  5.5983  7.4382  171.1177  66.2103  5.9109  0.6555
YCbCr        19.1805  5.6076  7.4743  171.6032  65.4123  5.7668  0.6496
HSV          20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886
HIS          20.4130  5.8660  7.3967  172.0959  68.2812  5.8896  0.6989
LAB          20.5100  5.9280  7.4282  172.4920  67.6792  6.0364  0.7018

The spatial frequency (SF) is defined by (16) to (18), and the average gradient (AG) is defined by (19).

4.1.1. Entropy. The entropy (EN) of an image is defined by

H = -Σ_{i=0}^{L-1} P_i log2 P_i,  (24)

where P_i is the probability of gray level i in the image and L is the number of gray levels of the image, from 0 to 255.
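A minimal sketch of (24) for an 8-bit image; zero-probability bins are skipped, since p log2 p is taken as 0 in that limit:

```python
import numpy as np

def entropy(img, L=256):
    """EN of (24): Shannon entropy of the grey-level histogram."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=L)
    p = hist / hist.sum()          # normalized histogram P_i
    p = p[p > 0]                   # drop empty bins (0 * log 0 -> 0)
    return float(-(p * np.log2(p)).sum())
```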

4.1.2. Mean Value. The mean value (MV) of the image is defined by

μ = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} F(i,j),  (25)

where F(i,j) is the pixel value of the fused image at position (i,j). MV represents the average brightness of the whole image.

4.1.3. Standard Deviation. The standard deviation (SD) of an image is defined by

SD = sqrt( (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (F(i,j) - μ)^2 ),  (26)

where F(i,j) is the pixel value of the fused image at position (i,j) and μ is the mean value of the image. The larger the SD is, the better the result is.
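Equations (25) and (26) are the ordinary mean and the population standard deviation; a direct sketch:

```python
import numpy as np

def mean_value(F):
    """MV of (25): average brightness of the fused image."""
    return float(F.mean())

def standard_deviation(F):
    """SD of (26): spread of the grey levels about the mean
    (population form, dividing by M*N)."""
    return float(np.sqrt(((F - F.mean()) ** 2).mean()))
```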

4.1.4. Mutual Information. The mutual information (MI) between the source images A and B and the fused image F is defined by

MI = Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} Σ_{k=0}^{L-1} P_ABF(i,j,k) log2( P_ABF(i,j,k) / (P_AB(i,j) P_F(k)) ),  (27)

where P_ABF(i,j,k) is the normalized joint grey level histogram of images A, B, and F; P_AB(i,j) is the normalized joint grey level histogram of images A and B; P_F(k) is the normalized grey level histogram of F; L is the number of grey levels; and (i,j,k) represent the pixel values of images A, B, and F, respectively.
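A direct, unoptimized sketch of (27). Note that it builds an L×L×L joint histogram, so a reduced number of grey levels L (the values below are illustrative) is advisable in practice:

```python
import numpy as np

def mutual_information(A, B, F, L=16):
    """MI of (27), computed from the joint histogram of (A, B, F)
    against the product of the (A, B) joint histogram and the F
    histogram. Inputs are integer images with values in [0, L)."""
    a, b, f = (x.ravel().astype(np.int64) for x in (A, B, F))
    joint3 = np.zeros((L, L, L))
    np.add.at(joint3, (a, b, f), 1)       # unbuffered 3-D histogram
    joint3 /= joint3.sum()                # P_ABF(i, j, k)
    p_ab = joint3.sum(axis=2)             # P_AB(i, j)
    p_f = joint3.sum(axis=(0, 1))         # P_F(k)
    nz = joint3 > 0                       # skip empty cells (0 log 0 -> 0)
    denom = p_ab[:, :, None] * p_f[None, None, :]
    return float((joint3[nz] * np.log2(joint3[nz] / denom[nz])).sum())
```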

4.1.5. Edge Based on Similarity Measure. The Q^ABF metric evaluates the sum of the edge information preservation values and is defined by

Q^ABF = ( Σ_{i=1}^{M} Σ_{j=1}^{N} (Q^AF(i,j) ω_A(i,j) + Q^BF(i,j) ω_B(i,j)) ) / ( Σ_{i=1}^{M} Σ_{j=1}^{N} (ω_A(i,j) + ω_B(i,j)) ),  (28)

where Q^AF(i,j) = Q^AF_β(i,j) Q^AF_α(i,j); Q^AF_β(i,j) and Q^AF_α(i,j) are the edge strength and orientation preservation values, respectively; Q^BF(i,j) is defined similarly to Q^AF(i,j); and ω_A(i,j) and ω_B(i,j) are weights measuring the importance of Q^AF(i,j) and Q^BF(i,j), respectively. The dynamic range of Q^ABF is [0,1], and it should be as close to 1 as possible, the best fusion being Q^ABF = 1. In addition, (i,j) represents the pixel location, and M and N are the sizes of the images.
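Assuming the per-pixel preservation maps Q^AF and Q^BF and the weights ω_A and ω_B have already been computed (they require edge strength and orientation estimates from the source images, which are outside this sketch), the final weighted average of (28) is straightforward:

```python
import numpy as np

def q_abf(QAF, QBF, wA, wB):
    """(28): weighted average of the per-pixel edge-preservation maps.
    Returns a scalar in [0, 1] when the maps themselves lie in [0, 1]."""
    num = (QAF * wA + QBF * wB).sum()
    den = (wA + wB).sum()
    return float(num / den)
```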


Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One. For comparison, this paper evaluates the proposed fusion scheme against several common fusion algorithms: taking the average of the source images pixel by pixel (average), principal component analysis (PCA), pulse coupled neural network (PCNN), pulse coupled neural network with Laplacian pyramid transform (PCNN + LP), and pulse coupled neural network with discrete wavelet transform (PCNN + WT). The fusion images using the different methods are shown in Figure 6. According to the contrast experiments, the fusion image of this paper is better than the others; see Figure 6. The algorithm of this paper does well in extracting the characteristics of the source images, and the fused image is closer to the natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the method in this paper is an effective method.

From Table 1, the fusion image of this paper contains much more information: its SF, AG, MV, and Q^ABF are larger than those of the other methods, its EN and MI are slightly better, and only its SD is less than that of some other methods.

4.3. Experiment Two. Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in HSV color space, so all of them can use the proposed algorithm for color image fusion. In order to determine the color space in which the method works best, we carry out the following experiments. The fusion images under the different color spaces are shown in Figure 7; all achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.

From Table 2, it is indicated that the fusion image in LAB color space contains more information, and its SF, AG, MI, and Q^ABF are better than the others. Overall, the evaluation indexes in HSV, HIS, and LAB are better than those in the other color spaces. In practical application, we suggest that color image fusion with the proposed algorithm should focus on the HSV, HIS, and LAB color spaces.

Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

4.4. Experiment Three. More experimental results on coastal images in HSV color space are shown in Figure 8; the source images have different focus positions, and there are many textures in them.

It can be seen in Figure 8 that the edges of the fusion images are clear, most of the textures in the source images are retained, and the details are also well preserved. This method can extract the main features from the source images, which shows that the method in this paper also achieves effective results on these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper is obviously better than the traditional methods. Compared with the other methods, this method shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. PCNN's neuron capture characteristic causes surrounding neurons of similar brightness to be captured and to ignite; through this characteristic, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift-invariance and multiscale and multidirectional properties. In the proposed algorithm, the RGB color image is converted into an HSV color image; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and are fused by different rules. The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method embodies better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better effects in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing the coastal images. The authors' work is supported by the National Natural Science Foundation of China (no. 61365001 and no. 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119-126, 2015.
[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)2PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).
[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," The Journal of Visual Communication and Image Representation, vol. 29, pp. 138-146, 2015.
[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114-123, 2010.
[5] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291-1299, 2004.
[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.
[7] E. J. Candes and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105-120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.
[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.
[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254-262, San Diego, Calif, USA, September 2005.
[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.
[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25-46, 2008.
[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103-112, 2014.
[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.
[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965-3974, 2014.
[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247-6255, 2014.
[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103-112, 2014.
[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.
[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704-716, 2014.
[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268-274, 2015.
[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198-214, 2016.
[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.
[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169-176, 2001.
[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158-173, 2015.



Figure 2: An illustration of NSST. (a) Source image. (b) The approximate NSST coefficients. (c) Images of the detailed coefficients at level 1. (d) Images of the detailed coefficients at level 2.

The nonsubsampled Laplacian pyramid decomposes the source image into one low frequency image and k high frequency images. The multidirectional factorization in NSST is realized via an improved shearlet transform. These filters are formed by avoiding subsampling in order to satisfy the property of shift-invariance. The shearlet transform allows direction decomposition with l stages in the high frequency images from the nonsubsampled Laplacian pyramid at each level and produces 2^l directional subimages with the same size as the source image [16, 17]. Figure 2 shows the two-level NSST decomposition of an image.
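The pyramid stage can be illustrated conceptually (this is a stand-in, not the paper's filters: a simple box blur replaces the nonsubsampled Laplacian pyramid filters, and no shearing is performed). The key property shown is that every subband keeps the source resolution and the bands sum back to the image:

```python
import numpy as np

def _box_blur(a):
    """3x3 box blur with periodic boundaries (illustrative low-pass)."""
    return sum(np.roll(np.roll(a, dx, 0), dy, 1)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)) / 9.0

def nonsubsampled_pyramid(img, levels=2):
    """Split an image into `levels` detail bands plus one low band,
    all at the original resolution (no down/upsampling)."""
    low, bands = img.astype(float), []
    for _ in range(levels):
        smooth = _box_blur(low)
        bands.append(low - smooth)   # band-pass detail at this scale
        low = smooth
    bands.append(low)                # residual low-pass band
    return bands
```

Because every detail band is a plain difference, reconstruction is exact: the sum of all bands equals the source image.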

3. The Proposed Algorithm

In this section, the proposed multifocus color image fusion algorithm is presented in detail. The framework of the proposed algorithm is shown in Figure 3. In this algorithm, the RGB color images are transformed into HSV color space [18]; NSST is used to decompose the images, and PCNN is used to extract the features and fuse them using different rules. Besides, it is important to note that two kinds of PCNN model are used in the algorithm: adaptive S-PCNN is used to fuse the H component, and the original PCNN is used to fuse the high frequency coefficients of NSST.

3.1. RGB Color Image Transform to HSV. The RGB color image contains almost all basic colors that can be perceived by human vision; however, the correlation among the components is very strong, as shown in Figures 4(b), 4(c), and 4(d). This makes RGB color images difficult to deal with, since the color of the image will change if one component changes. The HSV image can be obtained from the RGB image by the HSV transform. The values of R, G, and B correspond to unique H, S, and V values, as the values of the H, S, and V components depend on the


Figure 3: The schematic diagram of the proposed image fusion method.

values of R, G, and B in RGB color space. This color system is closer than the RGB color system to human experience and perception of color, as shown in Figures 4(e), 4(f), and 4(g).
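The one-to-one per-pixel RGB-to-HSV mapping described above can be illustrated with Python's standard library converter (channel values in [0, 1]; the helper function name is ours):

```python
import colorsys

def rgb_image_to_hsv(rgb_rows):
    """Convert a nested list of (r, g, b) pixels to (h, s, v) pixels.
    colorsys expects and returns all channels in [0, 1]."""
    return [[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb_rows]
```

For example, pure red maps to H = 0 and pure blue to H = 2/3, both with full saturation and value.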

The distribution of the H component is concentrated and its pixel values are generally small; see Figure 4(h). So its edges are obvious, and S-PCNN is sensitive to the edges and regional distribution of the image. In this algorithm, the H component is partitioned into blocks and input into adaptive S-PCNN to get the OFG of the H component; we fuse the H component according to the OFG. The distribution of the S and V components is dispersive, see Figures 4(i) and 4(j), and they contain lots of details of the image at different grayscales, so the S and V components are decomposed into multiscale and multidirectional subbands by NSST to get different detailed information of the images; then, according to the characters of the new S and V components, they are fused using different rules.

3.2. H Component Fusion Using Adaptive S-PCNN. β is the linking strength of S-PCNN and is a key determinant of the ignition behavior of S-PCNN. Spatial frequency (SF) and average gradient (AG) are very important indicators of image definition, representing the quality of the image, so S-PCNN should be adaptively adjusted by SF and AG to make it work well. In the S-PCNN model, the H component is divided into several blocks, and then the SF and AG of the blocks are calculated and used as the linking strength β of S-PCNN, respectively, which adaptively adjusts β of S-PCNN; see (14) and (15). The block images are input into S-PCNN to get two kinds of OFG according to the two different β, which can effectively express the quality of the block images. The two types of indexes change with the content of the image over a certain range. The index itself and its range of change are suitable for adjusting the parameter β of S-PCNN, which affects the OFG of S-PCNN, so that the two types of OFG can reflect different details of the image. If only one type of OFG were used, pixels of different quality could share the same OFG value, and the appropriate pixels could not be selected. However, the combination of the two types of OFG reflects the image information from different angles, which reduces the possibility of different pixels having the same OFG and makes the algorithm more effective.

SF and AG of the image are described in (16) to (19):

β1 = η1 · SF,  (14)
β2 = η2 · AG,  (15)

where η1 and η2 are adjustment factors. Consider

SF = sqrt(RF^2 + CF^2),  (16)

RF = sqrt( (1/(M×N)) Σ_{i=1}^{M} Σ_{j=2}^{N} [F(i,j) - F(i,j-1)]^2 ),  (17)

CF = sqrt( (1/(M×N)) Σ_{i=2}^{M} Σ_{j=1}^{N} [F(i,j) - F(i-1,j)]^2 ).  (18)

SF is composed of the row frequency (RF) and the column frequency (CF), where M is the number of rows of the image, N is the number of columns, and F(i,j) is the grey level of image F at pixel (i,j).

AG = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} sqrt( ((F(i,j) - F(i+1,j))^2 + (F(i,j) - F(i,j+1))^2) / 2 ).  (19)
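Direct sketches of SF (16)-(18) and AG (19). Two small liberties are taken: the means below divide by the number of difference terms rather than exactly M×N (a minor boundary simplification), and the η values of (14)-(15) are illustrative, not the paper's:

```python
import numpy as np

def spatial_frequency(F):
    """SF of (16)-(18): combined row and column frequency."""
    rf2 = ((F[:, 1:] - F[:, :-1]) ** 2).mean()   # horizontal differences
    cf2 = ((F[1:, :] - F[:-1, :]) ** 2).mean()   # vertical differences
    return float(np.sqrt(rf2 + cf2))

def average_gradient(F):
    """AG of (19), averaged over the interior pixels."""
    dx = F[:-1, :-1] - F[1:, :-1]                # vertical differences
    dy = F[:-1, :-1] - F[:-1, 1:]                # horizontal differences
    return float(np.sqrt((dx ** 2 + dy ** 2) / 2.0).mean())

def linking_strengths(block, eta1=0.01, eta2=0.01):
    """Adaptive linking strengths of (14)-(15) for one subblock."""
    return eta1 * spatial_frequency(block), eta2 * average_gradient(block)
```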


Figure 4: An illustration of HSV. (a) Source image. (b) R component. (c) G component. (d) B component. (e) H component. (f) S component. (g) V component. (h) Histogram of H. (i) Histogram of S. (j) Histogram of V.

33 S and V Components Fusion Using NSST and PCNNS and V components contain lots of details of the imageand the gray values of S and V components are dispersiverespectively S and V components are decomposed by NSSTto make it easy to extract and then we will get one lowfrequency subband and several high frequency subbandsin multiscale and multidirection The higher values of thelow frequency coefficients are used into the new fused lowfrequency subband The high frequency subbands containabundant detailed information of the S and V componentsthese components are input into original PCNN model toget OFG which shows the statistics of the ignition times ofthe pixels Compared with the OFG of the high frequency

subbands from different S and V components we will get thenew fused high frequency subbands At last inverse NSST isperformed to get the fused S and V components according tothe new fused low frequency subbands and high frequencysubbands

34 Algorithm Steps The proposed fusion algorithm pro-cesses are shown in Figure 3 and the steps are as follows

Step 0 Given source images 119860 and 119861

Step 1 The color image in RGB color space is converted toHSV color space to get three components H S and V

Journal of Sensors 7

(a) (b)

Figure 5 Source images (a) Source image 119860 (b) Source image 119861

Step 2 PCNN is utilized to deal with H component

(a) The H components of images 119860119894119895and 119861

119894119895are divided

into subblocks and then the SF and AG of thesubblocks are calculated using (16) to (19) the linkingstrengths 120573

1and 120573

2are gotten using (14) and (15) 120573

1

and 1205732acted as 120573 value of S-PCNN respectively

(b) The subblocks are input into S-PCNN model twicewith different120573 to get twoOFGsof theHcomponents

(c) Get the average value of two OFGs using (20) andthen 119865

119894119895of the fused H component can be decided by

(21)

AOFG =

(OFG1205731

+OFG1205732)

2 (20)

119865119894119895=

119860119894119895 (AOFG

119860119894119895gt AOFG

119861119894119895)

119861119894119895 (AOFG

119860119894119895lt AOFG

119861119894119895)

(119860119894119895+ 119861119894119895)

2 (AOFG

119860119894119895= AOFG

119861119894119895)

(21)

where AOFG119860119894119895

and AOFG119861119894119895

are the average OFG ofimages 119860 and 119861

Step 3 Performdecomposition of source S andV componentsof image 119860

119894119895and 119861

119894119895using NSST to obtain the low-pass

subband coefficients and the high-pass directional subbandcoefficients different rules are utilized to deal with the S andV components

(a) The fused low-pass subband coefficients 119865119862119894119895can be

decided by

119865119862119894119895=

119862119860119894119895

(119862119860119894119895

gt 119862119861119894119895)

119862119861119894119895 (119862

119860119894119895lt 119862119861119894119895)

(119862119860119894119895

+ 119862119861119894119895)

2 (119862

119860119894119895= 119862119861119894119895)

(22)

where 119862119860119894119895

and 119862119861119894119895

is the low-pass subband coeffi-cients of images 119860 and 119861

(b) The high-pass directional subband coefficients areinput into PCNN model to get corresponding OFGs(OFG

119860119894119895 OFG

119861119894119895) according to the statistics of the

ignition times of the pixels the fusion rule of S andH components can be decided by

119865119894119895=

119860119894119895 (OFG

119860119894119895gt OFG

119861119894119895)

119861119894119895 (OFG

119860119894119895lt OFG

119861119894119895)

(119860119894119895+ 119861119894119895)

2 (OFG

119860119894119895= OFG

119861119894119895)

(23)

Step 4 Reconstruct the fused HSV image by an inverse NSST

Step 5 Obtain the fused RGB color image by inverse HSV

4 Experimental Results and Analysis

To verify the validity of the algorithm presented in this paperwe take several groups of experimental color images withdifferent focus position test The first group of color image iscups which is shown in Figure 5 Image 119860 focuses on the leftand image 119861 focuses on the right and there are many wordsas details

41 Evaluation Index System In order to verify the effective-ness of this method we consider the quantitative assessmentof the fused images For evaluation of the proposed fusionmethod we have considered several common fusion perfor-mance metrics defined in this section The final fused colorimages are composed of three components such as R G andB Each component can be regarded as a grayscale image andthe quality of the fused color image strongly depends on thegrayscale image quality In this paper we take the average ofthe three componentrsquos evaluation indexes as the final colorimage evaluation index which is the basic indicators of image

8 Journal of Sensors

Table 1 Fusion quality using different methods

Method SF AG EN MV SD MI 119876119860119861119865

Weighted 127859 42004 74416 1667102 651893 54941 05449PCA 130306 42311 74408 1665136 653166 55144 05525PCNN 168538 49683 74449 1673324 658106 55454 05986PCNN +WT 177381 52384 72349 1430606 576579 51952 06696PCNN + LP 188702 54816 73187 1506954 611962 52172 06795This paper (HSV) 203181 58530 74487 1754014 616361 56377 06886

Table 2 Fusion quality under different color spaces

Color spaces SF AG EN MV SD MI 119876119860119861119865

NTSC 191768 55980 74530 1711844 661808 58862 06547YUV 191937 55983 74382 1711177 662103 59109 06555YCbCr 191805 56076 74743 1716032 654123 57668 06496HSV 203181 58530 74487 1754014 616361 56377 06886HIS 204130 58660 73967 1720959 682812 58896 06989LAB 205100 59280 742820 1724920 676792 60364 07018

fusion quality Tables 1 and 2 show the evaluation of the fusedimage quality with space frequency (SF) average gradient(AG) entropy (EN) mean value (MV) standard deviation(SD) and mutual information (MI) and 119876

119860119861119865 indicates howmuch edge information is reserved in the fused image [19ndash23]

The space frequency (SF) is defined by (16) to (18) andthe average gradient is defined by (19)

4.1.1. Entropy. The entropy (EN) of an image is defined by

H = -\sum_{i=0}^{L-1} P_i \log_2 P_i,   (24)

where P_i is the probability of gray level i in the image and L is the number of gray levels of the image (the gray values range from 0 to 255).
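As an illustrative sketch (not the authors' code), the entropy of (24) can be computed for an 8-bit grayscale image with NumPy, where the normalized 256-bin histogram supplies P_i:

```python
import numpy as np

def entropy(img, levels=256):
    """Shannon entropy H = -sum_i P_i log2 P_i of a grayscale image, eq. (24)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()        # P_i: probability of gray level i
    p = p[p > 0]                 # drop empty bins so log2 is well defined
    return float(-np.sum(p * np.log2(p)))

img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
print(entropy(img))  # close to the 8-bit maximum of log2(256) = 8
```

A constant image gives H = 0, and a perfectly uniform histogram gives the maximum H = log2 L.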

4.1.2. Mean Value. The mean value (MV) of the image is defined by

\mu = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} F(i, j),   (25)

where F(i, j) is the pixel value of the fused image at position (i, j). MV represents the average brightness of the whole image.

4.1.3. Standard Deviation. The standard deviation (SD) of an image is defined by

SD = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left(F(i, j) - \mu\right)^2},   (26)

where F(i, j) is the pixel value of the fused image at position (i, j) and \mu is the mean value of the image. The larger the SD is, the better the result is.

4.1.4. Mutual Information. The mutual information (MI) between the source images A and B and the fused image F is defined by

MI = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} \sum_{k=0}^{L-1} P_{ABF}(i, j, k) \log_2 \frac{P_{ABF}(i, j, k)}{P_{AB}(i, j)\, P_F(k)},   (27)

where P_{ABF}(i, j, k) is the normalized joint gray-level histogram of images A, B, and F; P_{AB}(i, j) is the normalized joint gray-level histogram of images A and B; P_F(k) is the normalized gray-level histogram of F; L is the number of gray levels of the image; and (i, j, k) represent the gray values of images A, B, and F, respectively.
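A direct sketch of (27) builds the joint histogram of the three images; it is memory-hungry at 256 levels, so reduced gray levels are used in the example below. This is an illustration, not the authors' implementation:

```python
import numpy as np

def mutual_information(a, b, f, levels=256):
    """MI of eq. (27) from the joint gray-level histogram of A, B, and F."""
    sample = np.stack([a.ravel(), b.ravel(), f.ravel()], axis=1)
    p_abf, _ = np.histogramdd(sample, bins=(levels,) * 3,
                              range=((0, levels),) * 3)
    p_abf /= p_abf.sum()                  # P_ABF(i, j, k)
    p_ab = p_abf.sum(axis=2)              # P_AB(i, j): marginal over k
    p_f = p_abf.sum(axis=(0, 1))          # P_F(k): marginal over (i, j)
    nz = p_abf > 0                        # restrict the sum to nonzero terms
    denom = p_ab[:, :, None] * p_f[None, None, :]
    return float(np.sum(p_abf[nz] * np.log2(p_abf[nz] / denom[nz])))
```

When A = B = F the measure reduces to the entropy of F, and it is 0 when the fused image is statistically independent of the sources.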

4.1.5. Edge-Based Similarity Measure. The Q^{AB/F} metric evaluates the sum of edge information preservation values and is defined by

Q^{AB/F} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( Q^{AF}(i, j)\, \omega^{A}(i, j) + Q^{BF}(i, j)\, \omega^{B}(i, j) \right)}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( \omega^{A}(i, j) + \omega^{B}(i, j) \right)},   (28)

where Q^{AF}(i, j) = Q^{AF}_{\beta}(i, j)\, Q^{AF}_{\alpha}(i, j); Q^{AF}_{\beta}(i, j) and Q^{AF}_{\alpha}(i, j) are the edge strength and orientation preservation values, respectively; Q^{BF}(i, j) is defined similarly to Q^{AF}(i, j); and \omega^{A}(i, j) and \omega^{B}(i, j) are weights that measure the importance of Q^{AF}(i, j) and Q^{BF}(i, j), respectively. The dynamic range of Q^{AB/F} is [0, 1], and it should be as close to 1 as possible, since Q^{AB/F} = 1 corresponds to ideal fusion. In addition, (i, j) represents the pixel location, and M and N are the dimensions of the images.
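The weighting structure of (28) alone can be sketched as below, assuming the per-pixel preservation maps Q^{AF} and Q^{BF} and the weight maps ω^A and ω^B have already been obtained from the edge-strength and orientation analysis (which is omitted here):

```python
import numpy as np

def q_abf(q_af, q_bf, w_a, w_b):
    """Weighted average of edge information preservation values, eq. (28)."""
    numerator = np.sum(q_af * w_a + q_bf * w_b)
    denominator = np.sum(w_a + w_b)
    return float(numerator / denominator)
```

If every pixel of both sources is perfectly preserved (Q^{AF} = Q^{BF} = 1 everywhere), the metric is 1 regardless of the weights.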


Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One

For comparison, the fusion scheme proposed in this paper is evaluated against several common fusion algorithms: taking the average of the source images pixel by pixel (average), principal component analysis (PCA), pulse coupled neural network (PCNN), pulse coupled neural network with Laplacian pyramid transform (PCNN + LP), and pulse coupled neural network with discrete wavelet transform (PCNN + WT). The fused images produced by the different methods are shown in Figure 6. According to the contrast experiments, the fused image of this paper is better than the others (see Figure 6). The algorithm of this paper does well in extracting the characteristics of the source images, and its fused image is closer to the natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the method in this paper is an effective method.

From Table 1, the fused image of this paper contains much more information: SF, AG, MV, and Q^{AB/F} of this paper are larger than those of the other methods, EN and MI are slightly better than those of the other methods, and only SD is lower than that of some of the other methods.

4.3. Experiment Two

Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in HSV color space, so all of them can use the proposed algorithm for color image fusion. In order to determine the color space in which the method works best, we carry out the following experiments. The fused images under different color spaces are shown in Figure 7; all of them achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.

Table 2 indicates that the fused image of this paper in LAB color space contains more information: its SF, AG, MI, and Q^{AB/F} are better than the others. Overall, the evaluation indexes in HSV, HIS, and LAB are better than the others. In practical applications, we suggest that color image fusion with the proposed algorithm should focus on the HSV, HIS, and LAB color spaces.

4.4. Experiment Three

More experimental results for coastal images in HSV color space are shown in Figure 8; the source images have different focus positions, and there are a lot of textures in the source images.

It can be seen in Figure 8 that the edges of the fused images are clear, most of the textures in the source images are retained, and the details are also well preserved. This


Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

method can extract the main features from the source images; it shows that the method in this paper also achieves effective results in these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper is clearly better than the traditional methods. Compared with the other methods, this method shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture characteristic of PCNN neurons causes surrounding neurons with similar brightness to be captured and fire together; through this characteristic, information can be coupled and transmitted automatically. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift invariance and multiscale and multidirectional properties. In the proposed algorithm, the RGB color image is converted into an HSV color image; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and fused by different rules. The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method gives better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better effects in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical applications.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing the coastal images. The authors' work is supported by the National Natural Science Foundation of China (nos. 61365001 and 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119–126, 2015.
[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)2PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).
[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.
[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.
[5] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.
[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.
[7] E. J. Candes and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.
[8] M. N. Do and M. Vetterli, "Contourlets," in Beyond Wavelets, G. V. Welland, Ed., Academic Press, 2003.
[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.
[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.
[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.
[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103–112, 2014.
[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.
[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.
[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.
[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.
[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.
[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.
[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.
[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.
[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.
[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.
[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.



Figure 3: The schematic diagram of the proposed image fusion method.

values of R, G, and B in RGB color space. This color system is closer to human experience and perception of color than the RGB color system; this is shown in Figures 4(e), 4(f), and 4(g).

The distribution of the H component is concentrated, and its pixel values are generally small (see Figure 4(h)), so its edges are obvious, and S-PCNN is sensitive to the edge and regional distribution of the image. In this algorithm the H component is partitioned into blocks and input into adaptive S-PCNN to get the OFG of the H component; we fuse the H component according to the OFG. The distributions of the S and V components are dispersive (see Figures 4(i) and 4(j)), and these components contain many details of the image at different gray levels, so the S and V components are decomposed into multiscale and multidirectional subbands by NSST to get different detailed information of the images; then, according to the characters of the new S and V components, they are fused using different rules.

3.2. H Component Fusion Using Adaptive S-PCNN

β is the linking strength of S-PCNN, and it is a key determinant of the ignition behavior of S-PCNN. Spatial frequency (SF) and average gradient (AG) are very important indicators of image definition, which represent the quality of the image, so S-PCNN should be adaptively adjusted by SF and AG to make it work well. In the S-PCNN model, the H component is divided into several blocks, and then the SF and AG of the blocks are calculated and used as the linking strength β of S-PCNN, respectively; in this way β is adjusted adaptively (see (14) and (15)). The block images are input into S-PCNN to get two kinds of OFG according to the two different values of β, which can effectively express the quality of the block images. The two indexes change with the content of the image within a certain range; the index itself and its range of change are suitable for adjusting the parameter β of S-PCNN, which affects the OFG of S-PCNN, so that the two types of OFG reflect different details of the image. If only one type of OFG were used, pixels of different quality could receive the same OFG value, and appropriate pixels could not be selected; the combination of two types of OFG reflects the image information from different angles and thus reduces the possibility of different pixels having the same OFG. This makes the algorithm more effective.

SF and AG of the image are described in (16) to (19):

\beta_1 = \eta_1 \cdot \mathrm{SF},   (14)

\beta_2 = \eta_2 \cdot \mathrm{AG},   (15)

where \eta_1 and \eta_2 are adjustment factors. Consider

\mathrm{SF} = \sqrt{\mathrm{RF}^2 + \mathrm{CF}^2},   (16)

\mathrm{RF} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left[F(i, j) - F(i, j-1)\right]^2},   (17)

\mathrm{CF} = \sqrt{\frac{1}{M \times N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left[F(i, j) - F(i-1, j)\right]^2}.   (18)

SF is composed of row frequency (RF) and column frequency (CF), where M is the number of rows of the image, N is the number of columns, and F(i, j) is the gray level of image F at pixel (i, j).

\mathrm{AG} = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\frac{1}{2} \left( \left(F(i, j) - F(i+1, j)\right)^2 + \left(F(i, j) - F(i, j+1)\right)^2 \right)}.   (19)
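Equations (16)–(19) can be sketched as follows; note that the normalization here uses the mean over the valid difference positions, a common simplification of the 1/(M × N) factor (an illustration, not the authors' code):

```python
import numpy as np

def spatial_frequency(f):
    """SF of eqs. (16)-(18): combines row and column difference energies."""
    f = np.asarray(f, dtype=np.float64)
    rf2 = np.mean((f[:, 1:] - f[:, :-1]) ** 2)   # RF^2: horizontal differences
    cf2 = np.mean((f[1:, :] - f[:-1, :]) ** 2)   # CF^2: vertical differences
    return float(np.sqrt(rf2 + cf2))

def average_gradient(f):
    """AG of eq. (19): mean gradient magnitude over the image."""
    f = np.asarray(f, dtype=np.float64)
    dx = f[:-1, :-1] - f[1:, :-1]                # F(i, j) - F(i + 1, j)
    dy = f[:-1, :-1] - f[:-1, 1:]                # F(i, j) - F(i, j + 1)
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```

Sharper blocks produce larger SF and AG, which through (14) and (15) increases the linking strength β assigned to those blocks.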


Figure 4: An illustration of HSV. (a) Source image. (b) R component. (c) G component. (d) B component. (e) H component. (f) S component. (g) V component. (h) Histogram of H. (i) Histogram of S. (j) Histogram of V.

3.3. S and V Components Fusion Using NSST and PCNN

The S and V components contain many details of the image, and their gray values are dispersive, so the S and V components are each decomposed by NSST to make the details easy to extract; this yields one low-frequency subband and several high-frequency subbands at multiple scales and directions. The larger values of the low-frequency coefficients are used to form the new fused low-frequency subband. The high-frequency subbands contain abundant detailed information of the S and V components; these subbands are input into the original PCNN model to get OFGs, which record the statistics of the ignition times of the pixels. By comparing the OFGs of the corresponding high-frequency subbands of the S and V components of the two source images, we get the new fused high-frequency subbands. At last, an inverse NSST is performed on the new fused low-frequency and high-frequency subbands to get the fused S and V components.
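To make the role of the OFG concrete, here is a minimal simplified PCNN sketch in which the OFG is simply the per-neuron firing count over the iterations; the decay constants, linking strength, threshold amplitude, iteration count, and the 4-neighbor wrap-around linking field are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def ofg(stimulus, beta=0.5, alpha_f=0.2, alpha_theta=0.3,
        v_theta=20.0, iterations=30):
    """Oscillation frequency graph of a simplified PCNN: counts how often
    each neuron fires when driven by the normalized input `stimulus`."""
    s = np.asarray(stimulus, dtype=np.float64)
    s = (s - s.min()) / (np.ptp(s) + 1e-12)        # normalized feeding stimulus
    feed = np.zeros_like(s)                        # feeding input F
    theta = np.ones_like(s)                        # dynamic threshold
    fire = np.zeros_like(s)                        # firing state Y
    count = np.zeros_like(s)                       # the OFG being accumulated
    for _ in range(iterations):
        feed = np.exp(-alpha_f) * feed + s         # leaky integration of input
        link = (np.roll(fire, 1, 0) + np.roll(fire, -1, 0)
                + np.roll(fire, 1, 1) + np.roll(fire, -1, 1))  # 4-neighbor linking
        u = feed * (1.0 + beta * link)             # internal activity
        fire = (u > theta).astype(np.float64)      # a neuron fires above threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * fire  # threshold recharges
        count += fire
    return count
```

Brighter (and, through the linking term, more coherently grouped) pixels fire more often, so comparing OFG values gives a per-pixel or per-block activity measure for the fusion rules.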

3.4. Algorithm Steps

The processes of the proposed fusion algorithm are shown in Figure 3, and the steps are as follows.

Step 0. Given source images A and B.

Step 1. The color image in RGB color space is converted to HSV color space to get three components: H, S, and V.


Figure 5: Source images. (a) Source image A. (b) Source image B.

Step 2. PCNN is utilized to deal with the H component.

(a) The H components of images A and B are divided into subblocks A_{ij} and B_{ij}; then the SF and AG of the subblocks are calculated using (16) to (19), and the linking strengths \beta_1 and \beta_2 are obtained using (14) and (15). \beta_1 and \beta_2 act as the \beta value of S-PCNN in turn.

(b) The subblocks are input into the S-PCNN model twice with the two different \beta values to get two OFGs of the H components.

(c) Get the average value of the two OFGs using (20); then the fused H component F_{ij} is decided by (21):

\mathrm{AOFG} = \frac{\mathrm{OFG}_{\beta_1} + \mathrm{OFG}_{\beta_2}}{2},   (20)

F_{ij} = \begin{cases} A_{ij}, & \mathrm{AOFG}_{A,ij} > \mathrm{AOFG}_{B,ij} \\ B_{ij}, & \mathrm{AOFG}_{A,ij} < \mathrm{AOFG}_{B,ij} \\ (A_{ij} + B_{ij})/2, & \mathrm{AOFG}_{A,ij} = \mathrm{AOFG}_{B,ij}, \end{cases}   (21)

where \mathrm{AOFG}_{A,ij} and \mathrm{AOFG}_{B,ij} are the average OFGs of images A and B.
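The three-way selection of (21) (also the pattern of (22) and (23)) can be sketched as an elementwise rule; the activity maps passed in here are hypothetical inputs standing in for the AOFG:

```python
import numpy as np

def fuse_by_activity(a, b, act_a, act_b):
    """Eq. (21)-style rule: pick the source with the larger activity map
    (e.g., AOFG), and average where the activities tie."""
    out = (a + b) / 2.0                      # tie: act_a == act_b
    out = np.where(act_a > act_b, a, out)    # A wins
    out = np.where(act_a < act_b, b, out)    # B wins
    return out

a = np.array([10.0, 10.0, 10.0])
b = np.array([0.0, 0.0, 0.0])
print(fuse_by_activity(a, b, np.array([2.0, 1.0, 1.0]),
                       np.array([1.0, 2.0, 1.0])))  # [10.  0.  5.]
```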

Step 3. Perform the decomposition of the S and V components of source images A and B using NSST to obtain the low-pass subband coefficients and the high-pass directional subband coefficients; different rules are utilized to deal with the two kinds of coefficients.

(a) The fused low-pass subband coefficients FC_{ij} are decided by

FC_{ij} = \begin{cases} C_{A,ij}, & C_{A,ij} > C_{B,ij} \\ C_{B,ij}, & C_{A,ij} < C_{B,ij} \\ (C_{A,ij} + C_{B,ij})/2, & C_{A,ij} = C_{B,ij}, \end{cases}   (22)

where C_{A,ij} and C_{B,ij} are the low-pass subband coefficients of images A and B.

(b) The high-pass directional subband coefficients are input into the PCNN model to get the corresponding OFGs (\mathrm{OFG}_{A,ij} and \mathrm{OFG}_{B,ij}) according to the statistics of the ignition times of the pixels; the fusion rule for the S and V components is decided by

F_{ij} = \begin{cases} A_{ij}, & \mathrm{OFG}_{A,ij} > \mathrm{OFG}_{B,ij} \\ B_{ij}, & \mathrm{OFG}_{A,ij} < \mathrm{OFG}_{B,ij} \\ (A_{ij} + B_{ij})/2, & \mathrm{OFG}_{A,ij} = \mathrm{OFG}_{B,ij}. \end{cases}   (23)

Step 4. Reconstruct the fused HSV image by an inverse NSST.

Step 5. Obtain the fused RGB color image by an inverse HSV transform.
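Steps 1–5 can be sketched end to end; the NSST/PCNN fusion rules are replaced by placeholder callables (fuse_h and fuse_sv are hypothetical stand-ins, and simple averaging is used only to exercise the skeleton), with matplotlib's rgb_to_hsv/hsv_to_rgb handling the color-space conversion:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def fuse_color(rgb_a, rgb_b, fuse_h, fuse_sv):
    """Skeleton of Steps 1-5: RGB -> HSV, fuse H with one rule and
    S, V with another, then HSV -> RGB."""
    hsv_a, hsv_b = rgb_to_hsv(rgb_a), rgb_to_hsv(rgb_b)
    fused = np.empty_like(hsv_a)
    fused[..., 0] = fuse_h(hsv_a[..., 0], hsv_b[..., 0])   # H: S-PCNN/OFG rule
    for c in (1, 2):                                       # S, V: NSST + PCNN rules
        fused[..., c] = fuse_sv(hsv_a[..., c], hsv_b[..., c])
    return hsv_to_rgb(fused)

average = lambda x, y: (x + y) / 2.0   # placeholder rule, not the paper's
```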

4. Experimental Results and Analysis

To verify the validity of the algorithm presented in this paper, we test several groups of experimental color images with different focus positions. The first group of color images is the cups, shown in Figure 5; image A focuses on the left, image B focuses on the right, and there are many words as details.

41 Evaluation Index System In order to verify the effective-ness of this method we consider the quantitative assessmentof the fused images For evaluation of the proposed fusionmethod we have considered several common fusion perfor-mance metrics defined in this section The final fused colorimages are composed of three components such as R G andB Each component can be regarded as a grayscale image andthe quality of the fused color image strongly depends on thegrayscale image quality In this paper we take the average ofthe three componentrsquos evaluation indexes as the final colorimage evaluation index which is the basic indicators of image

8 Journal of Sensors

Table 1 Fusion quality using different methods

Method SF AG EN MV SD MI 119876119860119861119865

Weighted 127859 42004 74416 1667102 651893 54941 05449PCA 130306 42311 74408 1665136 653166 55144 05525PCNN 168538 49683 74449 1673324 658106 55454 05986PCNN +WT 177381 52384 72349 1430606 576579 51952 06696PCNN + LP 188702 54816 73187 1506954 611962 52172 06795This paper (HSV) 203181 58530 74487 1754014 616361 56377 06886

Table 2 Fusion quality under different color spaces

Color spaces SF AG EN MV SD MI 119876119860119861119865

NTSC 191768 55980 74530 1711844 661808 58862 06547YUV 191937 55983 74382 1711177 662103 59109 06555YCbCr 191805 56076 74743 1716032 654123 57668 06496HSV 203181 58530 74487 1754014 616361 56377 06886HIS 204130 58660 73967 1720959 682812 58896 06989LAB 205100 59280 742820 1724920 676792 60364 07018

fusion quality Tables 1 and 2 show the evaluation of the fusedimage quality with space frequency (SF) average gradient(AG) entropy (EN) mean value (MV) standard deviation(SD) and mutual information (MI) and 119876

119860119861119865 indicates howmuch edge information is reserved in the fused image [19ndash23]

The space frequency (SF) is defined by (16) to (18) andthe average gradient is defined by (19)

411 Entropy The entropy (EN) of an image is defined by

119867 = minus

119871minus1

sum

119894=0

119875119894log2119875119894 (24)

where 119875119894is the probability of the gray level 119894 in the image and

119871 is the gray level of the image from 0 to 255

412 Mean Value The mean value (MV) of the image isdefined by

120583 =1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

119865 (119894 119895) (25)

where 119865(119894 119895) is the pixel value of the fused image at theposition (119894 119895) MV represents the average brightness of thewhole image

413 Standard Deviation The standard deviation (SD) of animage is defined by

SD = radic1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

(119865 (119894 119895) minus 120583)2 (26)

where 119865(119894 119895) is the pixel value of the fused image at theposition (119894 119895) and 120583 is the mean value of the imageThe largerthe SD is the better the result is

414 Mutual Information The mutual information MIbetween the source images 119860 and 119861 and the fused image 119865

is defined by

MI =119871minus1

sum

119894=0

119871minus1

sum

119895=0

119871minus1

sum

119896=0

119875119860119861119865

(119894 119895 119896) log2

119875119860119861119865

(119894 119895 119896)

119875AB (119894 119895) 119875119865 (119896) (27)

where 119875119860119861119865

(119894 119895 119896) is the normalization union grey level his-togramof images119860 and119861 and119865119875

119860119861(119894 119895) is the normalization

union grey level histogram of images 119860 and 119861 119875119865(119896) is the

normalization grey level histogram of 119865 119871 is the gray level ofthe image and (119894 119895 119896) represents the pixel value of images 119860and 119861 and 119865 respectively

415 Edge Based on Similarity Measure The 119876119860119861119865 metric

evaluates the sum of edge information preservation valuesand is defined by

119876119860119861119865

= (

119872

sum

119894=1

119873

sum

119895=1

(119876119860119865

(119894 119895) times 120596119860(119894 119895) + 119876

119861119865(119894 119895) times 120596

119861(119894 119895)))

sdot (

119872

sum

119894=1

119873

sum

119895=1

(120596119860(119894 119895) + 120596

119861(119894 119895)))

minus1

(28)

where 119876119860119865(119894 119895) = 119876119860119865

120573(119894 119895)119876

119860119865

120572(119894 119895) 119876119860119865

120573(119894 119895) 119876119860119865

120573(119894 119895) and

119876119860119865

120572(119894 119895) are the edge strength and orientation preservation

values respectively 119876119861119865

(119894 119895) is similar to 119876119860119865

120573(119894 119895) and

120596119860(119894 119895) and 120596

119861(119894 119895) are weights to measure the importance

of 119876119860119865(119894 119895) and 119876119861119865

(119894 119895) respectively The dynamic range119876119860119861119865 is [0 1] and it should be close to 1 as far as possible

for the best fusion as 119876119860119861119865 = 1 In addition (119894 119895) representsthe pixel location and 119872 and 119873 are the sizes of imagesrespectively

Journal of Sensors 9

(a) (b) (c)

(d) (e) (f)

Figure 6 Fusion images from different methods (a) Average method (b) PCA method (c) PCNN method (d) PCNN +WT (e) PCNN +LP (f) This paper

42 Experiment One For comparison this paper proposesthe fusion scheme of the algorithm and presents several com-mon fusion algorithms for instance take the average of thesource images pixel by pixel (average) principal componentanalysis (PCA) pulse coupled neural network (PCNN) pulsecoupled neural network and Laplacian pyramid transform(PCNN+LP) and pulse coupled neural network and discretewavelet transform (PCNN + WT) The fusion images usingdifferent methods are shown in Figure 6 According thecontrast experiments the fusion image of this paper is betterthan the others see Figure 6The algorithmof this paper doeswell in extracting the characteristics of the source images andthe fused image is closer to the natural color which containsmore edges texture and detail so it is the closest to the sourceimages We can conclude that the method in this paper is aneffective method

From Table 1 the fusion image of this paper containsmuch more information SF AG MV and119876

119860119861119865 of this paperare larger than other methods EN and MI are slightly betterthan other methods and only SD is less than other methods

43 Experiment Two In other color spaces such as NTSCYUV YCbCr HIS and LAB all have the similar histogram

distribution as the distribution of H S and V componentsin HSV color space so all can use the proposed algorithmfor color image fusion In order to confirm the method thatworks best under the color space of fusion we carry outthe following experimentsThe fusion images under differentcolor space are shown in Figure 7 all can achieve the goal ofthe image fusion but the fusion effects onHSVHIS and LABcolor spaces are better than others

From Table 2 it is indicated that the fusion image of thispaper in LAB color space contains more information andSF AG MI and 119876

119860119861119865 are better than others Overall theevaluation indexes in HSV HIS and LAB are better thanothers In practical application we suggest that the colorimage fusion of the proposed algorithm should focus onHSVHIS and LAB color spaces

44 Experiment Three More experimental results of coastalimages in HSV color space are shown in Figure 8 the sourceimages with different focus position and there are a lot oftextures in the source images

It can be seen in Figure 8 that the edge of the fusionimage is clear and it retains most of the textures in the sourceimages besides the details are also well preserved This

10 Journal of Sensors

(a) (b) (c)

(d) (e) (f)

Figure 7 Fusion images under different color spaces (a) NTSC (b) YUV (c) YCbCr (d) HSV (e) HIS (f) LAB

method can extract themain features from the source imagesit shows that the method in this paper also achieved effectiveresults in these groups of color images

The same conclusion as in Figure 6 can be concludedOverall the method presented in this paper is better thanthe traditional methods obviously Compared with the othermethods this method reflects better performance and visualeffect in terms of definition and detail

5 Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture characteristic of PCNN neurons causes surrounding neurons of similar brightness to be captured and fire together; through this characteristic, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift invariance and multiscale and multidirectional properties. In the proposed algorithm, the RGB color image is converted into an HSV color image; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and are fused by different rules.

The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method achieves better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better results in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical application.
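The overall flow summarized above can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation: the standard-library `colorsys` module handles the RGB/HSV conversions, and a crude per-pixel maximum-selection rule (picking whichever source pixel has the larger V, as a rough focus proxy) stands in for the adaptive S-PCNN and NSST/PCNN fusion rules, which are far more elaborate in the actual algorithm.

```python
import colorsys

def fuse_hsv(img_a, img_b):
    """Fuse two RGB images given as nested lists of (r, g, b) floats in [0, 1].

    Stand-in rule: the fused pixel is taken from whichever source pixel
    has the larger V component, mimicking the select-the-stronger-response
    decisions of the paper's fusion rules.
    """
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        row = []
        for (r1, g1, b1), (r2, g2, b2) in zip(row_a, row_b):
            # Step 1: convert both source pixels from RGB to HSV.
            hsv1 = colorsys.rgb_to_hsv(r1, g1, b1)
            hsv2 = colorsys.rgb_to_hsv(r2, g2, b2)
            # Step 2: select the pixel with the stronger V response.
            chosen = hsv1 if hsv1[2] >= hsv2[2] else hsv2
            # Step 3: inverse HSV transform back to RGB.
            row.append(colorsys.hsv_to_rgb(*chosen))
        fused.append(row)
    return fused
```

The same skeleton applies to the other color spaces discussed above; only the forward and inverse conversions change.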

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing coastal images. The authors' work is supported by the National Natural Science Foundation of China (nos. 61365001 and 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119–126, 2015.

[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)²PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).

[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.

[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.

[5] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.

[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.

[7] E. J. Candes and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.

[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.

[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.

[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.

[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.

[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.

[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.

[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.

[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.

[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.

[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.

[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.

[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.

[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.

[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.





Journal of Sensors 7

(a) (b)

Figure 5 Source images (a) Source image 119860 (b) Source image 119861

Step 2 PCNN is utilized to deal with H component

(a) The H components of images 119860119894119895and 119861

119894119895are divided

into subblocks and then the SF and AG of thesubblocks are calculated using (16) to (19) the linkingstrengths 120573

1and 120573

2are gotten using (14) and (15) 120573

1

and 1205732acted as 120573 value of S-PCNN respectively

(b) The subblocks are input into S-PCNN model twicewith different120573 to get twoOFGsof theHcomponents

(c) Get the average value of two OFGs using (20) andthen 119865

119894119895of the fused H component can be decided by

(21)

AOFG =

(OFG1205731

+OFG1205732)

2 (20)

119865119894119895=

119860119894119895 (AOFG

119860119894119895gt AOFG

119861119894119895)

119861119894119895 (AOFG

119860119894119895lt AOFG

119861119894119895)

(119860119894119895+ 119861119894119895)

2 (AOFG

119860119894119895= AOFG

119861119894119895)

(21)

where AOFG119860119894119895

and AOFG119861119894119895

are the average OFG ofimages 119860 and 119861

Step 3 Performdecomposition of source S andV componentsof image 119860

119894119895and 119861

119894119895using NSST to obtain the low-pass

subband coefficients and the high-pass directional subbandcoefficients different rules are utilized to deal with the S andV components

(a) The fused low-pass subband coefficients 119865119862119894119895can be

decided by

119865119862119894119895=

119862119860119894119895

(119862119860119894119895

gt 119862119861119894119895)

119862119861119894119895 (119862

119860119894119895lt 119862119861119894119895)

(119862119860119894119895

+ 119862119861119894119895)

2 (119862

119860119894119895= 119862119861119894119895)

(22)

where 119862119860119894119895

and 119862119861119894119895

is the low-pass subband coeffi-cients of images 119860 and 119861

(b) The high-pass directional subband coefficients are input into the PCNN model to get the corresponding OFGs ($\mathrm{OFG}^{A}_{ij}$, $\mathrm{OFG}^{B}_{ij}$) according to the statistics of the ignition times of the pixels; the fusion rule of the S and V components can be decided by

\[
F_{ij} =
\begin{cases}
A_{ij}, & \mathrm{OFG}^{A}_{ij} > \mathrm{OFG}^{B}_{ij} \\
B_{ij}, & \mathrm{OFG}^{A}_{ij} < \mathrm{OFG}^{B}_{ij} \\
\dfrac{A_{ij} + B_{ij}}{2}, & \mathrm{OFG}^{A}_{ij} = \mathrm{OFG}^{B}_{ij}.
\end{cases} \tag{23}
\]

Step 4. Reconstruct the fused HSV image by an inverse NSST.

Step 5. Obtain the fused RGB color image by an inverse HSV transform.
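The overall flow of the steps above can be summarized in a short sketch. This is not the authors' implementation: `fuse_h`, `fuse_s`, and `fuse_v` are placeholders for the S-PCNN and NSST fusion rules described above, and the per-pixel `colorsys` round trip merely stands in for the HSV and inverse HSV transforms:

```python
import colorsys
import numpy as np

def fuse_color(rgb_a, rgb_b, fuse_h, fuse_s, fuse_v):
    """Fuse two RGB images (floats in [0, 1]) channel-wise in HSV space."""
    to_hsv = lambda img: np.array(
        [[colorsys.rgb_to_hsv(*px) for px in row] for row in img])
    hsv_a, hsv_b = to_hsv(rgb_a), to_hsv(rgb_b)
    fused = np.stack([
        fuse_h(hsv_a[..., 0], hsv_b[..., 0]),  # Step 2: H component
        fuse_s(hsv_a[..., 1], hsv_b[..., 1]),  # Step 3: S component
        fuse_v(hsv_a[..., 2], hsv_b[..., 2]),  # Step 3: V component
    ], axis=-1)
    # Steps 4-5: inverse transform back to an RGB color image
    return np.array(
        [[colorsys.hsv_to_rgb(*px) for px in row] for row in fused])

avg = lambda x, y: (x + y) / 2.0  # trivial stand-in fusion rule
```

Substituting the S-PCNN rule for `fuse_h` and the NSST-based rules for `fuse_s` and `fuse_v` recovers the structure of the proposed algorithm.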

4. Experimental Results and Analysis

To verify the validity of the algorithm presented in this paper, we test several groups of experimental color images with different focus positions. The first group of color images is "cups," which is shown in Figure 5. Image $A$ focuses on the left, image $B$ focuses on the right, and there are many words as details.

4.1. Evaluation Index System. In order to verify the effectiveness of this method, we consider the quantitative assessment of the fused images. For the evaluation of the proposed fusion method, we consider several common fusion performance metrics defined in this section. The final fused color images are composed of three components, R, G, and B. Each component can be regarded as a grayscale image, and the quality of the fused color image strongly depends on the grayscale image quality. In this paper, we take the average of the three components' evaluation indexes as the final color image evaluation index, which is the basic indicator of image

8 Journal of Sensors

Table 1: Fusion quality using different methods.

Method              SF       AG      EN      MV        SD       MI      Q^{AB/F}
Weighted            12.7859  4.2004  7.4416  166.7102  65.1893  5.4941  0.5449
PCA                 13.0306  4.2311  7.4408  166.5136  65.3166  5.5144  0.5525
PCNN                16.8538  4.9683  7.4449  167.3324  65.8106  5.5454  0.5986
PCNN + WT           17.7381  5.2384  7.2349  143.0606  57.6579  5.1952  0.6696
PCNN + LP           18.8702  5.4816  7.3187  150.6954  61.1962  5.2172  0.6795
This paper (HSV)    20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886

Table 2: Fusion quality under different color spaces.

Color space  SF       AG      EN      MV        SD       MI      Q^{AB/F}
NTSC         19.1768  5.5980  7.4530  171.1844  66.1808  5.8862  0.6547
YUV          19.1937  5.5983  7.4382  171.1177  66.2103  5.9109  0.6555
YCbCr        19.1805  5.6076  7.4743  171.6032  65.4123  5.7668  0.6496
HSV          20.3181  5.8530  7.4487  175.4014  61.6361  5.6377  0.6886
HIS          20.4130  5.8660  7.3967  172.0959  68.2812  5.8896  0.6989
LAB          20.5100  5.9280  7.4282  172.4920  67.6792  6.0364  0.7018

fusion quality. Tables 1 and 2 show the evaluation of the fused image quality with spatial frequency (SF), average gradient (AG), entropy (EN), mean value (MV), standard deviation (SD), and mutual information (MI); $Q^{AB/F}$ indicates how much edge information is preserved in the fused image [19–23].

The spatial frequency (SF) is defined by (16) to (18), and the average gradient is defined by (19).
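Equations (16)–(19) appear earlier in the paper; for reference, a sketch of their standard forms (first differences along rows and columns; the exact constants may differ from the authors' definitions) is:

```python
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), the usual form of (16)-(18)."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def average_gradient(img):
    """AG: mean local gradient magnitude, one common form of (19)."""
    dx = np.diff(img, axis=1)[:-1, :]  # horizontal differences, cropped
    dy = np.diff(img, axis=0)[:, :-1]  # vertical differences, cropped
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```

Larger SF and AG values indicate richer detail, which is why both are used as focus measures for the subblocks in Step 2.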

4.1.1. Entropy. The entropy (EN) of an image is defined by

\[
H = -\sum_{i=0}^{L-1} P_i \log_2 P_i, \tag{24}
\]

where $P_i$ is the probability of the gray level $i$ in the image and $L$ is the gray level of the image, from 0 to 255.
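A direct NumPy sketch of (24) for an 8-bit grayscale image (assuming $L = 256$ gray levels):

```python
import numpy as np

def entropy(img):
    """EN of an 8-bit grayscale image per (24): H = -sum p_i log2 p_i."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()  # gray-level probabilities P_i
    p = p[p > 0]           # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))
```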

4.1.2. Mean Value. The mean value (MV) of the image is defined by

\[
\mu = \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} F(i, j), \tag{25}
\]

where $F(i, j)$ is the pixel value of the fused image at the position $(i, j)$. MV represents the average brightness of the whole image.

4.1.3. Standard Deviation. The standard deviation (SD) of an image is defined by

\[
\mathrm{SD} = \sqrt{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( F(i, j) - \mu \right)^2}, \tag{26}
\]

where $F(i, j)$ is the pixel value of the fused image at the position $(i, j)$ and $\mu$ is the mean value of the image. The larger the SD is, the better the result is.
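Both (25) and (26) are one-liners in NumPy (note that (26) is the population standard deviation, i.e. `np.std`):

```python
import numpy as np

def mean_value(img):
    """MV per (25): the average brightness of the image."""
    return float(np.mean(img))

def standard_deviation(img):
    """SD per (26): root-mean-square deviation about the mean value."""
    return float(np.sqrt(np.mean((img - np.mean(img)) ** 2)))
```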

4.1.4. Mutual Information. The mutual information (MI) between the source images $A$ and $B$ and the fused image $F$ is defined by

\[
\mathrm{MI} = \sum_{i=0}^{L-1} \sum_{j=0}^{L-1} \sum_{k=0}^{L-1} P_{ABF}(i, j, k) \log_2 \frac{P_{ABF}(i, j, k)}{P_{AB}(i, j)\, P_{F}(k)}, \tag{27}
\]

where $P_{ABF}(i, j, k)$ is the normalized joint gray-level histogram of images $A$, $B$, and $F$; $P_{AB}(i, j)$ is the normalized joint gray-level histogram of images $A$ and $B$; $P_{F}(k)$ is the normalized gray-level histogram of $F$; $L$ is the gray level of the image; and $(i, j, k)$ represent the pixel values of images $A$, $B$, and $F$, respectively.
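A sketch of (27) that builds the three-dimensional joint histogram with `np.histogramdd` (for 8-bit images; the `bins` parameter can coarsen the histogram, since the full 256³ table is large):

```python
import numpy as np

def mutual_information(A, B, F, bins=256):
    """MI per (27) from the normalized joint gray-level histogram of A, B, F."""
    pabf, _ = np.histogramdd(
        np.stack([A.ravel(), B.ravel(), F.ravel()], axis=1),
        bins=bins, range=[(0, 256)] * 3)
    pabf /= pabf.sum()             # P_ABF(i, j, k)
    pab = pabf.sum(axis=2)         # marginal P_AB(i, j)
    pf = pabf.sum(axis=(0, 1))     # marginal P_F(k)
    ijk = np.argwhere(pabf > 0)    # sum only over nonzero histogram cells
    p = pabf[ijk[:, 0], ijk[:, 1], ijk[:, 2]]
    return float(np.sum(p * np.log2(p / (pab[ijk[:, 0], ijk[:, 1]] * pf[ijk[:, 2]]))))
```

When $A = B = F$, (27) reduces to the entropy of the image, which gives a convenient sanity check.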

4.1.5. Edge Based on Similarity Measure. The $Q^{AB/F}$ metric evaluates the sum of edge information preservation values and is defined by

\[
Q^{AB/F} = \frac{\displaystyle\sum_{i=1}^{M} \sum_{j=1}^{N} \left( Q^{AF}(i, j)\, \omega^{A}(i, j) + Q^{BF}(i, j)\, \omega^{B}(i, j) \right)}{\displaystyle\sum_{i=1}^{M} \sum_{j=1}^{N} \left( \omega^{A}(i, j) + \omega^{B}(i, j) \right)}, \tag{28}
\]

where $Q^{AF}(i, j) = Q^{AF}_{\beta}(i, j)\, Q^{AF}_{\alpha}(i, j)$; $Q^{AF}_{\beta}(i, j)$ and $Q^{AF}_{\alpha}(i, j)$ are the edge strength and orientation preservation values, respectively; $Q^{BF}(i, j)$ is defined similarly to $Q^{AF}(i, j)$; and $\omega^{A}(i, j)$ and $\omega^{B}(i, j)$ are weights measuring the importance of $Q^{AF}(i, j)$ and $Q^{BF}(i, j)$, respectively. The dynamic range of $Q^{AB/F}$ is $[0, 1]$, and it should be as close to 1 as possible, with $Q^{AB/F} = 1$ for the best fusion. In addition, $(i, j)$ represents the pixel location, and $M$ and $N$ are the sizes of the images.
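Given the per-pixel preservation maps and weights (computing $Q^{AF}_{\beta}$ and $Q^{AF}_{\alpha}$ themselves requires a Sobel-based edge model not reproduced in this section), the aggregation in (28) is just a weighted average:

```python
import numpy as np

def q_abf(q_af, q_bf, w_a, w_b):
    """Aggregate per-pixel edge-preservation maps into scalar Q^{AB/F}, per (28)."""
    return float(np.sum(q_af * w_a + q_bf * w_b) / np.sum(w_a + w_b))
```

With perfect preservation of both sources ($Q^{AF} = Q^{BF} = 1$ everywhere), the metric attains its ideal value of 1 regardless of the weights.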


Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One. For comparison, this paper presents the fusion results of the proposed scheme alongside several common fusion algorithms: taking the average of the source images pixel by pixel (average), principal component analysis (PCA), pulse coupled neural network (PCNN), pulse coupled neural network with Laplacian pyramid transform (PCNN + LP), and pulse coupled neural network with discrete wavelet transform (PCNN + WT). The fusion images using the different methods are shown in Figure 6. According to the contrast experiments, the fusion image of this paper is better than the others (see Figure 6). The algorithm of this paper does well in extracting the characteristics of the source images, and the fused image is closer to the natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the method in this paper is an effective method.

From Table 1, the fusion image of this paper contains much more information: the SF, AG, MV, and $Q^{AB/F}$ of this paper are larger than those of the other methods, EN and MI are slightly better than those of the other methods, and only SD is less than that of the other methods.

4.3. Experiment Two. Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in the HSV color space, so the proposed algorithm can be used for color image fusion in all of them. In order to determine in which color space the method works best, we carry out the following experiments. The fusion images under different color spaces are shown in Figure 7; all achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.

From Table 2, it is indicated that the fusion image of this paper in the LAB color space contains more information, and its SF, AG, MI, and $Q^{AB/F}$ are better than the others. Overall, the evaluation indexes in HSV, HIS, and LAB are better than those of the others. In practical application, we suggest that color image fusion with the proposed algorithm should focus on the HSV, HIS, and LAB color spaces.

4.4. Experiment Three. More experimental results on coastal images in the HSV color space are shown in Figure 8; the source images have different focus positions, and there are a lot of textures in the source images.

It can be seen in Figure 8 that the edge of the fusion image is clear and it retains most of the textures in the source images; besides, the details are also well preserved.

Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

This method can extract the main features from the source images; it shows that the method in this paper also achieves effective results in these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper is obviously better than the traditional methods. Compared with the other methods, this method shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture characteristic of PCNN's neurons causes surrounding neurons of similar brightness to be captured and fire together; through this characteristic, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, and NSST has excellent performance in terms of shift invariance, multiscale, and multidirectional properties. In the proposed algorithm, the RGB color image is converted into an HSV color image; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands according to different scales and directions by NSST and are fused by different rules. The experimental results show that the proposed color image fusion algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method embodies better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces, and the experiments show that the algorithm achieves better effects in the HSV, HIS, and LAB color spaces; we recommend these three color spaces for practical application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing coastal images. The authors' work is supported by the National Natural Science Foundation of China (no. 61365001 and no. 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119–126, 2015.

[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)²PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).

[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," The Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.

[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.

[5] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, and R. Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.

[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.

[7] E. J. Candes and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.

[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.

[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.

[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.

[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.

[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103–112, 2014.

[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.

[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.

[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.

[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.

[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.

[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.

[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.

[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.

[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.

[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.


Page 8: Research Article Multifocus Color Image Fusion Based on ...Research Article Multifocus Color Image Fusion Based on NSST and PCNN XinJin,RencanNie,DongmingZhou,QuanWang,andKangjianHe

8 Journal of Sensors

Table 1 Fusion quality using different methods

Method SF AG EN MV SD MI 119876119860119861119865

Weighted 127859 42004 74416 1667102 651893 54941 05449PCA 130306 42311 74408 1665136 653166 55144 05525PCNN 168538 49683 74449 1673324 658106 55454 05986PCNN +WT 177381 52384 72349 1430606 576579 51952 06696PCNN + LP 188702 54816 73187 1506954 611962 52172 06795This paper (HSV) 203181 58530 74487 1754014 616361 56377 06886

Table 2 Fusion quality under different color spaces

Color spaces SF AG EN MV SD MI 119876119860119861119865

NTSC 191768 55980 74530 1711844 661808 58862 06547YUV 191937 55983 74382 1711177 662103 59109 06555YCbCr 191805 56076 74743 1716032 654123 57668 06496HSV 203181 58530 74487 1754014 616361 56377 06886HIS 204130 58660 73967 1720959 682812 58896 06989LAB 205100 59280 742820 1724920 676792 60364 07018

fusion quality Tables 1 and 2 show the evaluation of the fusedimage quality with space frequency (SF) average gradient(AG) entropy (EN) mean value (MV) standard deviation(SD) and mutual information (MI) and 119876

119860119861119865 indicates howmuch edge information is reserved in the fused image [19ndash23]

The space frequency (SF) is defined by (16) to (18) andthe average gradient is defined by (19)

411 Entropy The entropy (EN) of an image is defined by

119867 = minus

119871minus1

sum

119894=0

119875119894log2119875119894 (24)

where 119875119894is the probability of the gray level 119894 in the image and

119871 is the gray level of the image from 0 to 255

412 Mean Value The mean value (MV) of the image isdefined by

120583 =1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

119865 (119894 119895) (25)

where 119865(119894 119895) is the pixel value of the fused image at theposition (119894 119895) MV represents the average brightness of thewhole image

413 Standard Deviation The standard deviation (SD) of animage is defined by

SD = radic1

119872 times 119873

119872

sum

119894=1

119873

sum

119895=1

(119865 (119894 119895) minus 120583)2 (26)

where 119865(119894 119895) is the pixel value of the fused image at theposition (119894 119895) and 120583 is the mean value of the imageThe largerthe SD is the better the result is

414 Mutual Information The mutual information MIbetween the source images 119860 and 119861 and the fused image 119865

is defined by

MI =119871minus1

sum

119894=0

119871minus1

sum

119895=0

119871minus1

sum

119896=0

119875119860119861119865

(119894 119895 119896) log2

119875119860119861119865

(119894 119895 119896)

119875AB (119894 119895) 119875119865 (119896) (27)

where 119875119860119861119865

(119894 119895 119896) is the normalization union grey level his-togramof images119860 and119861 and119865119875

119860119861(119894 119895) is the normalization

union grey level histogram of images 119860 and 119861 119875119865(119896) is the

normalization grey level histogram of 119865 119871 is the gray level ofthe image and (119894 119895 119896) represents the pixel value of images 119860and 119861 and 119865 respectively

415 Edge Based on Similarity Measure The 119876119860119861119865 metric

evaluates the sum of edge information preservation valuesand is defined by

119876119860119861119865

= (

119872

sum

119894=1

119873

sum

119895=1

(119876119860119865

(119894 119895) times 120596119860(119894 119895) + 119876

119861119865(119894 119895) times 120596

119861(119894 119895)))

sdot (

119872

sum

119894=1

119873

sum

119895=1

(120596119860(119894 119895) + 120596

119861(119894 119895)))

minus1

(28)

where 119876119860119865(119894 119895) = 119876119860119865

120573(119894 119895)119876

119860119865

120572(119894 119895) 119876119860119865

120573(119894 119895) 119876119860119865

120573(119894 119895) and

119876119860119865

120572(119894 119895) are the edge strength and orientation preservation

values respectively 119876119861119865

(119894 119895) is similar to 119876119860119865

120573(119894 119895) and

120596119860(119894 119895) and 120596

119861(119894 119895) are weights to measure the importance

of 119876119860119865(119894 119895) and 119876119861119865

(119894 119895) respectively The dynamic range119876119860119861119865 is [0 1] and it should be close to 1 as far as possible

for the best fusion as 119876119860119861119865 = 1 In addition (119894 119895) representsthe pixel location and 119872 and 119873 are the sizes of imagesrespectively

Journal of Sensors 9

(a) (b) (c)

(d) (e) (f)

Figure 6 Fusion images from different methods (a) Average method (b) PCA method (c) PCNN method (d) PCNN +WT (e) PCNN +LP (f) This paper

42 Experiment One For comparison this paper proposesthe fusion scheme of the algorithm and presents several com-mon fusion algorithms for instance take the average of thesource images pixel by pixel (average) principal componentanalysis (PCA) pulse coupled neural network (PCNN) pulsecoupled neural network and Laplacian pyramid transform(PCNN+LP) and pulse coupled neural network and discretewavelet transform (PCNN + WT) The fusion images usingdifferent methods are shown in Figure 6 According thecontrast experiments the fusion image of this paper is betterthan the others see Figure 6The algorithmof this paper doeswell in extracting the characteristics of the source images andthe fused image is closer to the natural color which containsmore edges texture and detail so it is the closest to the sourceimages We can conclude that the method in this paper is aneffective method

From Table 1 the fusion image of this paper containsmuch more information SF AG MV and119876

119860119861119865 of this paperare larger than other methods EN and MI are slightly betterthan other methods and only SD is less than other methods

43 Experiment Two In other color spaces such as NTSCYUV YCbCr HIS and LAB all have the similar histogram

distribution as the distribution of H S and V componentsin HSV color space so all can use the proposed algorithmfor color image fusion In order to confirm the method thatworks best under the color space of fusion we carry outthe following experimentsThe fusion images under differentcolor space are shown in Figure 7 all can achieve the goal ofthe image fusion but the fusion effects onHSVHIS and LABcolor spaces are better than others

From Table 2 it is indicated that the fusion image of thispaper in LAB color space contains more information andSF AG MI and 119876

119860119861119865 are better than others Overall theevaluation indexes in HSV HIS and LAB are better thanothers In practical application we suggest that the colorimage fusion of the proposed algorithm should focus onHSVHIS and LAB color spaces

44 Experiment Three More experimental results of coastalimages in HSV color space are shown in Figure 8 the sourceimages with different focus position and there are a lot oftextures in the source images

It can be seen in Figure 8 that the edge of the fusionimage is clear and it retains most of the textures in the sourceimages besides the details are also well preserved This

10 Journal of Sensors

(a) (b) (c)

(d) (e) (f)

Figure 7 Fusion images under different color spaces (a) NTSC (b) YUV (c) YCbCr (d) HSV (e) HIS (f) LAB

method can extract themain features from the source imagesit shows that the method in this paper also achieved effectiveresults in these groups of color images

The same conclusion as in Figure 6 can be concludedOverall the method presented in this paper is better thanthe traditional methods obviously Compared with the othermethods this method reflects better performance and visualeffect in terms of definition and detail

5 Conclusions

We propose an effective multisensor color image fusionalgorithm based on NSST and PCNN PCNNrsquos neuronscapture character will cause the similar brightness of thesurrounding neurons to capture the ignition This charactercan be automatically coupled and transmitted informationNonsubsampled Laplacian pyramid filters are used in NSSTto remove the influence of upsampling and subsamplingNSST has excellent performance in terms of shift-invariancemultiscale and multidirectional properties In the proposedalgorithm RGB color image is converted into HSV colorimage H component is fused by adaptive S-PCNN S andV components are decomposed into different frequencysubbands according to different scales and direction by

NSST and are fused by different rules The experimentalresults show that the proposed color image fusion algorithmcan fuse different focus position of the color images andthe fused image contains more information about colortexture and detail Compared with the traditional algo-rithms this method embodies better fusion performancein many aspects The paper also discusses the effect ofthe proposed algorithm on other color spaces and theexperiments show that the algorithm achieved better effectsunder HSV HIS and LAB color spaces and we recommendthree kinds of color spaces as the practical application colorspace

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors thank the editors and the anonymous review-ers for their careful works and valuable suggestions forthis paper The authors wish to express their gratitude

Journal of Sensors 11

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

(j) (k) (l)

Figure 8 The fusion results of the coastal images (a) Source image 119860 (b) Source image 119861 (c) Fusion image (d) Source image 119860 (e) Sourceimage 119861 (f) Fusion image (g) Source image119860 (h) Source image 119861 (i) Fusion image (j) Source image119860 (k) Source image 119861 (l) Fusion image

to Zijun Wang Shanshan Du Jingxue Xu and ShujuanZhang for providing coastal images The authorsrsquo work issupported by the National Natural Science Foundation ofChina (no 61365001 and no 61463052) the Natural Sci-ence Foundation of Yunnan Province (no 2012FD003) andthe Science and Technology Plan of Yunnan Province (no2014AB016)

References

[1] V Petrovic and V Dimitrijevic ldquoFocused pooling for imagefusion evaluationrdquo Information Fusion vol 22 pp 119ndash126 2015

[2] Y Xia and Sh R Qu ldquoColor image fusion framework based onimproved (2D)2PCArdquoActa Optica Sinica vol 34 no 10 ArticleID 1010001 2014 (Chinese)

12 Journal of Sensors

[3] R H Miao J L Tang and X Q Chen ldquoClassification offarmland images based on color featuresrdquoThe Journal of VisualCommunication and Image Representation vol 29 pp 138ndash1462015

[4] S Daneshvar and H Ghassemian ldquoMRI and PET image fusionby combining IHS and retina-inspired modelsrdquo InformationFusion vol 11 no 2 pp 114ndash123 2010

[5] M Gonzalez-Audıcana J L Saleta R G Catalan and RGarcıa ldquoFusion of multispectral and panchromatic imagesusing improved IHS and PCA mergers based on waveletdecompositionrdquo IEEE Transactions on Geoscience and RemoteSensing vol 42 no 6 pp 1291ndash1299 2004

[6] Y Xin and L Deng ldquoAn improved remote sensing image fusionmethod based on wavelet transformrdquo Laser amp OptoelectronicsProgress vol 50 no 2 2013

[7] E J Candes and D L Donoho ldquoCurvelets a surprisinglyeffective nonadaptive representation of objects with edgesrdquo inCurves and Surfaces Fitting pp 105ndash120 Vanderbilt UniversityPress Nashville Tenn USA 2000

[8] M N Do and M Vetterli Contourlets Beyond Wavelets editedby G V Welland Academic Press 2003

[9] D Labate W-Q Lim G Kutyniok and G Weiss ldquoSparsemultidimensional representation using shearletsrdquo in WaveletsXI vol 5914 of Proceedings of SPIE pp 254ndash262 San DiegoCalif USA September 2005

[10] Z-Y Fan Q-S Sun Z-X Ji and K Hu ldquoAn image filter basedon multiobjective genetic algorithm and shearlet transforma-tionrdquo Mathematical Problems in Engineering vol 2013 ArticleID 463760 7 pages 2013

[11] G Easley D Labate and W-Q Lim ldquoSparse directional imagerepresentations using the discrete shearlet transformrdquo Appliedand Computational Harmonic Analysis vol 25 no 1 pp 25ndash462008

[12] W Kong L Zhang and Y Lei ldquoNovel fusion method for visiblelight and infrared images based on NSST-SF-PCNNrdquo InfraredPhysics and Technology vol 65 pp 103ndash112 2014

Journal of Sensors

Figure 6: Fusion images from different methods. (a) Average method. (b) PCA method. (c) PCNN method. (d) PCNN + WT. (e) PCNN + LP. (f) This paper.

4.2. Experiment One. For comparison, this paper evaluates the proposed fusion scheme against several common fusion algorithms: pixel-by-pixel averaging of the source images (Average), principal component analysis (PCA), pulse coupled neural network (PCNN), PCNN combined with the Laplacian pyramid transform (PCNN + LP), and PCNN combined with the discrete wavelet transform (PCNN + WT). The fusion images produced by the different methods are shown in Figure 6. According to these contrast experiments, the fusion image of this paper is better than the others (see Figure 6). The proposed algorithm extracts the characteristics of the source images well, and the fused image is closer to natural color and contains more edges, texture, and detail, so it is the closest to the source images. We can conclude that the proposed method is effective.
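The two simplest baseline rules above can be sketched directly. The following is a minimal pure-Python illustration of the Average and PCA fusion rules on tiny grayscale arrays; real implementations would use numpy on full images, and the PCA rule here assumes positively correlated sources (typical for multifocus pairs of the same scene) so that both weights are nonnegative.

```python
import math

def average_fusion(a, b):
    """Fuse two equally sized grayscale images by pixel-wise averaging."""
    return [[(pa + pb) / 2.0 for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def pca_fusion(a, b):
    """Weight each source by the dominant eigenvector of the 2x2
    covariance matrix of their flattened pixel values.
    Assumes positively correlated sources so the weights are nonnegative."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    caa = sum((x - ma) ** 2 for x in fa) / n
    cbb = sum((x - mb) ** 2 for x in fb) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(fa, fb)) / n
    # Largest eigenvalue of [[caa, cab], [cab, cbb]] and its eigenvector
    lam = 0.5 * (caa + cbb + math.sqrt((caa - cbb) ** 2 + 4 * cab * cab))
    v1, v2 = (cab, lam - caa) if abs(cab) > 1e-12 else (1.0, 0.0)
    w1, w2 = v1 / (v1 + v2), v2 / (v1 + v2)
    return [[w1 * pa + w2 * pb for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

imgA = [[10, 20], [30, 40]]
imgB = [[20, 30], [40, 50]]
print(average_fusion(imgA, imgB))  # [[15.0, 25.0], [35.0, 45.0]]
```

For perfectly correlated sources, as in this toy example, the PCA weights reduce to 0.5 each and the two rules coincide; on real multifocus pairs, PCA shifts weight toward the higher-variance (sharper) source.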

From Table 1, the fusion image of this paper contains much more information: its SF, AG, MV, and Q^{AB/F} values are larger than those of the other methods, its EN and MI are slightly better, and only its SD is lower than that of the other methods.
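Several of the objective metrics reported in Table 1 have standard closed forms. Below is a hedged sketch of spatial frequency (SF), average gradient (AG), and entropy (EN) for a grayscale image stored as a list of lists; the definitions follow common formulations in the fusion literature, and the paper's exact variants (as well as the MV, MI, SD, and Q^{AB/F} metrics) may differ.

```python
import math
from collections import Counter

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2): row and column first-difference energy."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2 for i in range(h) for j in range(1, w))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2 for i in range(1, h) for j in range(w))
    return math.sqrt(rf / (h * w) + cf / (h * w))

def average_gradient(img):
    """Mean magnitude of the local gradient over interior pixels."""
    h, w = len(img), len(img[0])
    total = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            total += math.sqrt((dx * dx + dy * dy) / 2.0)
    return total / ((h - 1) * (w - 1))

def entropy(img):
    """Shannon entropy (bits) of the gray-level histogram."""
    flat = [p for row in img for p in row]
    n = len(flat)
    return -sum((c / n) * math.log2(c / n) for c in Counter(flat).values())

img = [[0, 0, 255, 255],
       [0, 0, 255, 255]]
print(entropy(img))  # 1.0 -- two equally likely gray levels
```

Larger SF and AG indicate more edge and texture activity in the fused result, which is why they are used to compare sharpness across methods.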

4.3. Experiment Two. Other color spaces, such as NTSC, YUV, YCbCr, HIS, and LAB, all have histogram distributions similar to those of the H, S, and V components in HSV color space, so the proposed algorithm can be applied to all of them for color image fusion. To determine the color space in which the method works best, we carried out the following experiments. The fusion images under the different color spaces are shown in Figure 7; all achieve the goal of image fusion, but the fusion effects in the HSV, HIS, and LAB color spaces are better than the others.
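The channel-wise workflow that this experiment varies across color spaces can be illustrated with the standard library's colorsys module. This is only a structural sketch: a placeholder per-pixel "keep the source with the larger V" rule stands in for the paper's S-PCNN and NSST fusion rules, purely to show the convert-fuse-invert pattern that any of the listed color spaces would follow.

```python
import colorsys

def fuse_rgb_via_hsv(pix_a, pix_b):
    """pix_a, pix_b: (r, g, b) tuples with components in [0, 1].
    Convert both pixels to HSV, pick the pixel from the source with the
    larger V (a crude focus proxy), and convert back to RGB."""
    hsv_a = colorsys.rgb_to_hsv(*pix_a)
    hsv_b = colorsys.rgb_to_hsv(*pix_b)
    fused = hsv_a if hsv_a[2] >= hsv_b[2] else hsv_b
    return colorsys.hsv_to_rgb(*fused)

sharp = (0.8, 0.6, 0.2)    # hypothetical in-focus pixel from source A
blurred = (0.4, 0.3, 0.1)  # hypothetical defocused pixel from source B
print(fuse_rgb_via_hsv(sharp, blurred))
```

Swapping the color space only changes the forward and inverse transforms; the per-channel fusion rules are unchanged, which is why the algorithm transfers to NTSC, YUV, YCbCr, HIS, and LAB.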

From Table 2, the fusion image of this paper in LAB color space contains more information, and its SF, AG, MI, and Q^{AB/F} are better than the others. Overall, the evaluation indexes in HSV, HIS, and LAB are better than those of the other color spaces. For practical applications, we suggest that color image fusion with the proposed algorithm focus on the HSV, HIS, and LAB color spaces.

4.4. Experiment Three. More experimental results for coastal images in HSV color space are shown in Figure 8; the source images have different focus positions and contain many textures.

Figure 7: Fusion images under different color spaces. (a) NTSC. (b) YUV. (c) YCbCr. (d) HSV. (e) HIS. (f) LAB.

It can be seen in Figure 8 that the edges of the fusion images are clear; most of the textures in the source images are retained, and the details are also well preserved. This method can extract the main features from the source images, which shows that the method in this paper also achieves effective results on these groups of color images.

The same conclusion as in Figure 6 can be drawn. Overall, the method presented in this paper clearly outperforms the traditional methods; compared with them, it shows better performance and visual effect in terms of definition and detail.

5. Conclusions

We propose an effective multisensor color image fusion algorithm based on NSST and PCNN. The capture characteristic of PCNN neurons causes neighboring neurons of similar brightness to fire together; through this characteristic, information can be automatically coupled and transmitted. Nonsubsampled Laplacian pyramid filters are used in NSST to remove the influence of upsampling and subsampling, so NSST has excellent performance in terms of shift invariance and multiscale and multidirectional properties. In the proposed algorithm, the RGB color image is converted into the HSV color space; the H component is fused by adaptive S-PCNN, while the S and V components are decomposed into different frequency subbands at different scales and directions by NSST and fused by different rules. The experimental results show that the proposed algorithm can fuse color images with different focus positions, and the fused image contains more information about color, texture, and detail. Compared with the traditional algorithms, this method achieves better fusion performance in many aspects. The paper also discusses the effect of the proposed algorithm in other color spaces; the experiments show that the algorithm achieves better effects in the HSV, HIS, and LAB color spaces, and we recommend these three color spaces for practical application.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this paper. The authors also wish to express their gratitude to Zijun Wang, Shanshan Du, Jingxue Xu, and Shujuan Zhang for providing the coastal images. This work is supported by the National Natural Science Foundation of China (nos. 61365001 and 61463052), the Natural Science Foundation of Yunnan Province (no. 2012FD003), and the Science and Technology Plan of Yunnan Province (no. 2014AB016).

Figure 8: The fusion results of the coastal images. (a) Source image A. (b) Source image B. (c) Fusion image. (d) Source image A. (e) Source image B. (f) Fusion image. (g) Source image A. (h) Source image B. (i) Fusion image. (j) Source image A. (k) Source image B. (l) Fusion image.

References

[1] V. Petrovic and V. Dimitrijevic, "Focused pooling for image fusion evaluation," Information Fusion, vol. 22, pp. 119–126, 2015.

[2] Y. Xia and Sh. R. Qu, "Color image fusion framework based on improved (2D)2PCA," Acta Optica Sinica, vol. 34, no. 10, Article ID 1010001, 2014 (Chinese).

[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.

[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.

[5] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.

[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.

[7] E. J. Candès and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.

[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.

[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.

[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.

[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.

[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.

[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.

[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.

[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.

[18] J.-F. Pekel, C. Vancutsem, L. Bastin, et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.

[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.

[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.

[21] P. Zhang, C. Fei, Z. M. Peng, et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.

[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.

[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 10: Research Article Multifocus Color Image Fusion Based on ...Research Article Multifocus Color Image Fusion Based on NSST and PCNN XinJin,RencanNie,DongmingZhou,QuanWang,andKangjianHe

10 Journal of Sensors

(a) (b) (c)

(d) (e) (f)

Figure 7 Fusion images under different color spaces (a) NTSC (b) YUV (c) YCbCr (d) HSV (e) HIS (f) LAB

method can extract themain features from the source imagesit shows that the method in this paper also achieved effectiveresults in these groups of color images

The same conclusion as in Figure 6 can be concludedOverall the method presented in this paper is better thanthe traditional methods obviously Compared with the othermethods this method reflects better performance and visualeffect in terms of definition and detail

5 Conclusions

We propose an effective multisensor color image fusionalgorithm based on NSST and PCNN PCNNrsquos neuronscapture character will cause the similar brightness of thesurrounding neurons to capture the ignition This charactercan be automatically coupled and transmitted informationNonsubsampled Laplacian pyramid filters are used in NSSTto remove the influence of upsampling and subsamplingNSST has excellent performance in terms of shift-invariancemultiscale and multidirectional properties In the proposedalgorithm RGB color image is converted into HSV colorimage H component is fused by adaptive S-PCNN S andV components are decomposed into different frequencysubbands according to different scales and direction by

NSST and are fused by different rules The experimentalresults show that the proposed color image fusion algorithmcan fuse different focus position of the color images andthe fused image contains more information about colortexture and detail Compared with the traditional algo-rithms this method embodies better fusion performancein many aspects The paper also discusses the effect ofthe proposed algorithm on other color spaces and theexperiments show that the algorithm achieved better effectsunder HSV HIS and LAB color spaces and we recommendthree kinds of color spaces as the practical application colorspace

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgments

The authors thank the editors and the anonymous review-ers for their careful works and valuable suggestions forthis paper The authors wish to express their gratitude

Journal of Sensors 11

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

(j) (k) (l)

Figure 8 The fusion results of the coastal images (a) Source image 119860 (b) Source image 119861 (c) Fusion image (d) Source image 119860 (e) Sourceimage 119861 (f) Fusion image (g) Source image119860 (h) Source image 119861 (i) Fusion image (j) Source image119860 (k) Source image 119861 (l) Fusion image

to Zijun Wang Shanshan Du Jingxue Xu and ShujuanZhang for providing coastal images The authorsrsquo work issupported by the National Natural Science Foundation ofChina (no 61365001 and no 61463052) the Natural Sci-ence Foundation of Yunnan Province (no 2012FD003) andthe Science and Technology Plan of Yunnan Province (no2014AB016)

References

[1] V Petrovic and V Dimitrijevic ldquoFocused pooling for imagefusion evaluationrdquo Information Fusion vol 22 pp 119ndash126 2015

[2] Y Xia and Sh R Qu ldquoColor image fusion framework based onimproved (2D)2PCArdquoActa Optica Sinica vol 34 no 10 ArticleID 1010001 2014 (Chinese)

12 Journal of Sensors

[3] R H Miao J L Tang and X Q Chen ldquoClassification offarmland images based on color featuresrdquoThe Journal of VisualCommunication and Image Representation vol 29 pp 138ndash1462015

[4] S Daneshvar and H Ghassemian ldquoMRI and PET image fusionby combining IHS and retina-inspired modelsrdquo InformationFusion vol 11 no 2 pp 114ndash123 2010

[5] M Gonzalez-Audıcana J L Saleta R G Catalan and RGarcıa ldquoFusion of multispectral and panchromatic imagesusing improved IHS and PCA mergers based on waveletdecompositionrdquo IEEE Transactions on Geoscience and RemoteSensing vol 42 no 6 pp 1291ndash1299 2004

[6] Y Xin and L Deng ldquoAn improved remote sensing image fusionmethod based on wavelet transformrdquo Laser amp OptoelectronicsProgress vol 50 no 2 2013

[7] E J Candes and D L Donoho ldquoCurvelets a surprisinglyeffective nonadaptive representation of objects with edgesrdquo inCurves and Surfaces Fitting pp 105ndash120 Vanderbilt UniversityPress Nashville Tenn USA 2000

[8] M N Do and M Vetterli Contourlets Beyond Wavelets editedby G V Welland Academic Press 2003

[9] D Labate W-Q Lim G Kutyniok and G Weiss ldquoSparsemultidimensional representation using shearletsrdquo in WaveletsXI vol 5914 of Proceedings of SPIE pp 254ndash262 San DiegoCalif USA September 2005

[10] Z-Y Fan Q-S Sun Z-X Ji and K Hu ldquoAn image filter basedon multiobjective genetic algorithm and shearlet transforma-tionrdquo Mathematical Problems in Engineering vol 2013 ArticleID 463760 7 pages 2013

[11] G Easley D Labate and W-Q Lim ldquoSparse directional imagerepresentations using the discrete shearlet transformrdquo Appliedand Computational Harmonic Analysis vol 25 no 1 pp 25ndash462008

[12] W Kong L Zhang and Y Lei ldquoNovel fusion method for visiblelight and infrared images based on NSST-SF-PCNNrdquo InfraredPhysics and Technology vol 65 pp 103ndash112 2014

[13] R C Nie D M Zhou M He X Jin and J Yu ldquoFacial featureextraction using frequency map series in PCNNrdquo Journal ofSensors In press

[14] M M Subashini and S K Sahoo ldquoPulse coupled neuralnetworks and its applicationsrdquoExpert SystemswithApplicationsvol 41 no 8 pp 3965ndash3974 2014

[15] C H Zhao G F Shao L J Ma and X Zhang ldquoImage fusionalgorithm based on redundant-lifting NSWMDA and adaptivePCNNrdquo Optik vol 125 no 20 pp 6247ndash6255 2014

[16] W W Kong L J Zhang and Y Lei ldquoNovel fusion method forvisible light and infrared images based on NSST-SF-PCNNrdquoInfrared Physics amp Technology vol 65 pp 103ndash112 2014

[17] MYuan B X Yang YDMa J Zhang R Zhang andC ZhangldquoCompressed sensing MRI reconstruction from highly under-sampled 119896-space data using nonsubsampled shearlet transformsparsity priorrdquoMathematical Problems in Engineering vol 2015Article ID 615439 18 pages 2015

[18] J-F Pekel C Vancutsem L Bastin et al ldquoA near real-timewater surface detection method based on HSV transformationof MODIS multi-spectral time series datardquo Remote Sensing ofEnvironment vol 140 pp 704ndash716 2014

[19] L M Dong Q X Yang H Y Wu H Xiao and M XuldquoHigh quality multi-spectral and panchromatic image fusiontechnologies based on Curvelet transformrdquo Neurocomputingvol 159 pp 268ndash274 2015

[20] M J Kim D K Han and H Ko ldquoJoint patch clustering-baseddictionary learning for multimodal image fusionrdquo InformationFusion vol 27 pp 198ndash214 2016

[21] P Zhang C Fei Z M Peng et al ldquoMultifocus image fusionusing biogeography-based optimizationrdquo Mathematical Prob-lems in Engineering vol 2015 Article ID 340675 14 pages 2015

[22] S T Li J T Kwok and Y N Wang ldquoCombination of imageswith diverse focuses using the spatial frequencyrdquo InformationFusion vol 2 no 3 pp 169ndash176 2001

[23] J Cheng H J Liu T Liu F Wang and H Li ldquoRemote sensingimage fusion via wavelet transform and sparse representationrdquoISPRS Journal of Photogrammetry and Remote Sensing vol 104pp 158ndash173 2015

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 11: Research Article Multifocus Color Image Fusion Based on ...Research Article Multifocus Color Image Fusion Based on NSST and PCNN XinJin,RencanNie,DongmingZhou,QuanWang,andKangjianHe

Journal of Sensors 11

(a) (b) (c)

(d) (e) (f)

(g) (h) (i)

(j) (k) (l)

Figure 8 The fusion results of the coastal images (a) Source image 119860 (b) Source image 119861 (c) Fusion image (d) Source image 119860 (e) Sourceimage 119861 (f) Fusion image (g) Source image119860 (h) Source image 119861 (i) Fusion image (j) Source image119860 (k) Source image 119861 (l) Fusion image

to Zijun Wang Shanshan Du Jingxue Xu and ShujuanZhang for providing coastal images The authorsrsquo work issupported by the National Natural Science Foundation ofChina (no 61365001 and no 61463052) the Natural Sci-ence Foundation of Yunnan Province (no 2012FD003) andthe Science and Technology Plan of Yunnan Province (no2014AB016)

References

[1] V Petrovic and V Dimitrijevic ldquoFocused pooling for imagefusion evaluationrdquo Information Fusion vol 22 pp 119ndash126 2015

[2] Y Xia and Sh R Qu ldquoColor image fusion framework based onimproved (2D)2PCArdquoActa Optica Sinica vol 34 no 10 ArticleID 1010001 2014 (Chinese)

12 Journal of Sensors

[3] R H Miao J L Tang and X Q Chen ldquoClassification offarmland images based on color featuresrdquoThe Journal of VisualCommunication and Image Representation vol 29 pp 138ndash1462015

[4] S Daneshvar and H Ghassemian ldquoMRI and PET image fusionby combining IHS and retina-inspired modelsrdquo InformationFusion vol 11 no 2 pp 114ndash123 2010

[5] M Gonzalez-Audıcana J L Saleta R G Catalan and RGarcıa ldquoFusion of multispectral and panchromatic imagesusing improved IHS and PCA mergers based on waveletdecompositionrdquo IEEE Transactions on Geoscience and RemoteSensing vol 42 no 6 pp 1291ndash1299 2004

[6] Y Xin and L Deng ldquoAn improved remote sensing image fusionmethod based on wavelet transformrdquo Laser amp OptoelectronicsProgress vol 50 no 2 2013

[7] E J Candes and D L Donoho ldquoCurvelets a surprisinglyeffective nonadaptive representation of objects with edgesrdquo inCurves and Surfaces Fitting pp 105ndash120 Vanderbilt UniversityPress Nashville Tenn USA 2000

[8] M N Do and M Vetterli Contourlets Beyond Wavelets editedby G V Welland Academic Press 2003

[9] D Labate W-Q Lim G Kutyniok and G Weiss ldquoSparsemultidimensional representation using shearletsrdquo in WaveletsXI vol 5914 of Proceedings of SPIE pp 254ndash262 San DiegoCalif USA September 2005

[10] Z-Y Fan Q-S Sun Z-X Ji and K Hu ldquoAn image filter basedon multiobjective genetic algorithm and shearlet transforma-tionrdquo Mathematical Problems in Engineering vol 2013 ArticleID 463760 7 pages 2013

[11] G Easley D Labate and W-Q Lim ldquoSparse directional imagerepresentations using the discrete shearlet transformrdquo Appliedand Computational Harmonic Analysis vol 25 no 1 pp 25ndash462008

[12] W Kong L Zhang and Y Lei ldquoNovel fusion method for visiblelight and infrared images based on NSST-SF-PCNNrdquo InfraredPhysics and Technology vol 65 pp 103ndash112 2014

[13] R C Nie D M Zhou M He X Jin and J Yu ldquoFacial featureextraction using frequency map series in PCNNrdquo Journal ofSensors In press

[14] M M Subashini and S K Sahoo ldquoPulse coupled neuralnetworks and its applicationsrdquoExpert SystemswithApplicationsvol 41 no 8 pp 3965ndash3974 2014

[15] C H Zhao G F Shao L J Ma and X Zhang ldquoImage fusionalgorithm based on redundant-lifting NSWMDA and adaptivePCNNrdquo Optik vol 125 no 20 pp 6247ndash6255 2014

[16] W W Kong L J Zhang and Y Lei ldquoNovel fusion method forvisible light and infrared images based on NSST-SF-PCNNrdquoInfrared Physics amp Technology vol 65 pp 103ndash112 2014

[17] MYuan B X Yang YDMa J Zhang R Zhang andC ZhangldquoCompressed sensing MRI reconstruction from highly under-sampled 119896-space data using nonsubsampled shearlet transformsparsity priorrdquoMathematical Problems in Engineering vol 2015Article ID 615439 18 pages 2015

[18] J-F Pekel C Vancutsem L Bastin et al ldquoA near real-timewater surface detection method based on HSV transformationof MODIS multi-spectral time series datardquo Remote Sensing ofEnvironment vol 140 pp 704ndash716 2014

[19] L M Dong Q X Yang H Y Wu H Xiao and M XuldquoHigh quality multi-spectral and panchromatic image fusiontechnologies based on Curvelet transformrdquo Neurocomputingvol 159 pp 268ndash274 2015

[20] M J Kim D K Han and H Ko ldquoJoint patch clustering-baseddictionary learning for multimodal image fusionrdquo InformationFusion vol 27 pp 198ndash214 2016

[21] P Zhang C Fei Z M Peng et al ldquoMultifocus image fusionusing biogeography-based optimizationrdquo Mathematical Prob-lems in Engineering vol 2015 Article ID 340675 14 pages 2015

[22] S T Li J T Kwok and Y N Wang ldquoCombination of imageswith diverse focuses using the spatial frequencyrdquo InformationFusion vol 2 no 3 pp 169ndash176 2001

[23] J Cheng H J Liu T Liu F Wang and H Li ldquoRemote sensingimage fusion via wavelet transform and sparse representationrdquoISPRS Journal of Photogrammetry and Remote Sensing vol 104pp 158ndash173 2015

International Journal of

AerospaceEngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Active and Passive Electronic Components

Control Scienceand Engineering

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

RotatingMachinery

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation httpwwwhindawicom

Journal ofEngineeringVolume 2014

Submit your manuscripts athttpwwwhindawicom

VLSI Design

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Shock and Vibration

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Civil EngineeringAdvances in

Acoustics and VibrationAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Advances inOptoElectronics

Hindawi Publishing Corporation httpwwwhindawicom

Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

SensorsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Chemical EngineeringInternational Journal of Antennas and

Propagation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Navigation and Observation

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

DistributedSensor Networks

International Journal of

Page 12: Research Article Multifocus Color Image Fusion Based on ...Research Article Multifocus Color Image Fusion Based on NSST and PCNN XinJin,RencanNie,DongmingZhou,QuanWang,andKangjianHe

12 Journal of Sensors

[3] R. H. Miao, J. L. Tang, and X. Q. Chen, "Classification of farmland images based on color features," Journal of Visual Communication and Image Representation, vol. 29, pp. 138–146, 2015.

[4] S. Daneshvar and H. Ghassemian, "MRI and PET image fusion by combining IHS and retina-inspired models," Information Fusion, vol. 11, no. 2, pp. 114–123, 2010.

[5] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291–1299, 2004.

[6] Y. Xin and L. Deng, "An improved remote sensing image fusion method based on wavelet transform," Laser & Optoelectronics Progress, vol. 50, no. 2, 2013.

[7] E. J. Candès and D. L. Donoho, "Curvelets: a surprisingly effective nonadaptive representation of objects with edges," in Curves and Surfaces Fitting, pp. 105–120, Vanderbilt University Press, Nashville, Tenn, USA, 2000.

[8] M. N. Do and M. Vetterli, Contourlets: Beyond Wavelets, edited by G. V. Welland, Academic Press, 2003.

[9] D. Labate, W.-Q. Lim, G. Kutyniok, and G. Weiss, "Sparse multidimensional representation using shearlets," in Wavelets XI, vol. 5914 of Proceedings of SPIE, pp. 254–262, San Diego, Calif, USA, September 2005.

[10] Z.-Y. Fan, Q.-S. Sun, Z.-X. Ji, and K. Hu, "An image filter based on multiobjective genetic algorithm and shearlet transformation," Mathematical Problems in Engineering, vol. 2013, Article ID 463760, 7 pages, 2013.

[11] G. Easley, D. Labate, and W.-Q. Lim, "Sparse directional image representations using the discrete shearlet transform," Applied and Computational Harmonic Analysis, vol. 25, no. 1, pp. 25–46, 2008.

[12] W. Kong, L. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics and Technology, vol. 65, pp. 103–112, 2014.

[13] R. C. Nie, D. M. Zhou, M. He, X. Jin, and J. Yu, "Facial feature extraction using frequency map series in PCNN," Journal of Sensors, in press.

[14] M. M. Subashini and S. K. Sahoo, "Pulse coupled neural networks and its applications," Expert Systems with Applications, vol. 41, no. 8, pp. 3965–3974, 2014.

[15] C. H. Zhao, G. F. Shao, L. J. Ma, and X. Zhang, "Image fusion algorithm based on redundant-lifting NSWMDA and adaptive PCNN," Optik, vol. 125, no. 20, pp. 6247–6255, 2014.

[16] W. W. Kong, L. J. Zhang, and Y. Lei, "Novel fusion method for visible light and infrared images based on NSST-SF-PCNN," Infrared Physics & Technology, vol. 65, pp. 103–112, 2014.

[17] M. Yuan, B. X. Yang, Y. D. Ma, J. Zhang, R. Zhang, and C. Zhang, "Compressed sensing MRI reconstruction from highly undersampled k-space data using nonsubsampled shearlet transform sparsity prior," Mathematical Problems in Engineering, vol. 2015, Article ID 615439, 18 pages, 2015.

[18] J.-F. Pekel, C. Vancutsem, L. Bastin et al., "A near real-time water surface detection method based on HSV transformation of MODIS multi-spectral time series data," Remote Sensing of Environment, vol. 140, pp. 704–716, 2014.

[19] L. M. Dong, Q. X. Yang, H. Y. Wu, H. Xiao, and M. Xu, "High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform," Neurocomputing, vol. 159, pp. 268–274, 2015.

[20] M. J. Kim, D. K. Han, and H. Ko, "Joint patch clustering-based dictionary learning for multimodal image fusion," Information Fusion, vol. 27, pp. 198–214, 2016.

[21] P. Zhang, C. Fei, Z. M. Peng et al., "Multifocus image fusion using biogeography-based optimization," Mathematical Problems in Engineering, vol. 2015, Article ID 340675, 14 pages, 2015.

[22] S. T. Li, J. T. Kwok, and Y. N. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, vol. 2, no. 3, pp. 169–176, 2001.

[23] J. Cheng, H. J. Liu, T. Liu, F. Wang, and H. Li, "Remote sensing image fusion via wavelet transform and sparse representation," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 104, pp. 158–173, 2015.
