
Published in IET Image Processing
Received on 29th December 2008
Revised on 8th April 2009
doi: 10.1049/iet-ipr.2008.0259
In Special Section on VIE 2008
ISSN 1751-9659

Multifocus image fusion based on redundant wavelet transform

X. Li 1,2, M. He 1, M. Roux 2

1 School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, People's Republic of China
2 Institut TELECOM, Telecom ParisTech, Paris 75013, France
E-mail: [email protected]

Abstract: Image fusion is a process of integrating complementary information from multiple images of the same scene such that the resultant image contains a more accurate description of the scene than any of the individual source images. A method for fusion of multifocus images is presented. It combines the traditional pixel-level fusion with some aspects of feature-level fusion. First, multifocus images are decomposed using a redundant wavelet transform (RWT). Then the edge features are extracted to guide coefficient combination. Finally, the fused image is reconstructed by performing the inverse RWT. The experimental results on several pairs of multifocus images show that the proposed method can achieve good results and exhibit clear advantages over the gradient pyramid transform and discrete wavelet transform techniques.

    1 Introduction

Image fusion is a branch of data fusion; it is the process of combining information from two or more source images of a scene into a single composite image that is more informative and more suitable for visual perception or computer processing. Recently, image fusion has been widely used in many fields such as remote sensing, medical imaging, microscopic imaging and robotics. For example, a good fusion mechanism can extract the spatial information from a panchromatic image while preserving the spectral signature of a multispectral image to produce a spatially enhanced multispectral image, called pan-sharpening, as shown in Fig. 1 (see [1]), or it can extract the focused parts from each multifocus image and produce one with equal clarity, as shown in Fig. 2. The technique for the latter application is known as multifocus image fusion.

In practice, the fusion process can take place at the pixel, feature and symbol level, and indeed these levels can be combined with one another [2-5]. Pixel-level fusion means fusion at the lowest processing level, referring to the merging of measured physical parameters [6]. It generates a fused image in which each pixel is determined from a set of pixels in the various sources, and it serves to increase the useful information content of a scene such that the performance of image processing tasks, such as segmentation and feature extraction, can be improved [7]. Feature-level fusion first employs feature extraction, for example by segmentation procedures, separately on each source image and then performs the fusion based on the extracted features [8, 9]. Those features can be identified by characteristics such as contrast, shape, size and texture. The fusion is then based on the features with higher confidence. Symbol-level fusion allows the information from multiple images to be used effectively at the highest level of abstraction [10, 11]. The input images are usually processed individually for information extraction and classification. Examples of symbol-level fusion methods include weighted decision methods (voting techniques), classical inference, Bayesian inference, Dempster-Shafer's method, etc. The selection of the appropriate level depends on many different factors such as data sources, applications and available tools.

Many multifocus image fusion techniques have been reported so far. The simplest fusion method just takes the pixel-by-pixel grey-level average of the source images. However, this often leads to undesirable side effects such as reduced contrast [12, 13]. A proper fusion algorithm must ensure that all the important visual information found in the input images is transferred into the fused image without the introduction of any artefacts or inconsistencies, and it should also be reliable and robust to imperfections such as noise and misregistration [14, 15].


To improve the quality of the fused image, the multiresolution analysis (MRA) technique, which is very useful for analysing the information content of images for fusion purposes, has begun to receive considerable attention. The generic scheme of MRA-based fusion is to first perform an MRA decomposition on each source image, then integrate all these decompositions to form a composite representation and finally reconstruct the fused image by taking an inverse MRA transform. The approach was first introduced as a model for binocular fusion in human stereo vision [16]. That implementation used a Laplacian pyramid and a maximum selection rule that, at each sample position in the pyramid, copied the source pyramid coefficient with the maximum value to the composite pyramid. Similar to the Laplacian pyramid, the ratio-of-low-pass (ROLP) pyramid introduced by Toet [17-19] used the maximum contrast information in the ROLP pyramids to determine which features are salient (important) in the images to be fused. Burt and Kolczynski [20] presented another MRA fusion method based on a gradient pyramid (GP), which can be obtained by applying a gradient operator to each level of the Gaussian pyramid representation. The image can be completely represented by a set of four such GPs with different directions, in which the activity measure of each pixel is calculated by taking the variance of a 3 × 3 or 5 × 5 window centred at that pixel. Compared to Toet's method, it offers potential for better noise reduction, instead of just picking some maximum values. It also allows low-contrast details to be preserved if they are salient features. Owing to the disadvantages of pyramid-based techniques, which include blocking effects and a lack of flexibility [21], the discrete wavelet transform (DWT) has been used by many authors [22-25]. Li et al. [23] argued that the method in [20], which applied both linear (Laplacian) and non-linear (variance) filtering, had no clear physical meaning, and proposed a better fusion method. In their method, the image decomposition is based on the DWT and the absolute maximum value within the window associated with a given pixel is used as the activity measure. In this way, a high activity value indicates the presence of a dominant feature in the local area. In addition, area-based consistency verification is applied on each activity measure to ensure that the centre pixel is selected from the same input image as most of its surrounding pixels, so that block effects can be reduced.

Figure 1 Application of image fusion: (a) panchromatic image; (b) multispectral image; (c) fused result [1]

Figure 2 Application of image fusion: (a) focus on the left; (b) focus on the right; (c) fused image


Santos et al. [24] developed improved methods based on the computation of local and global gradients, which take into account the grey-level differences from point to area in the decomposed subimages.

While considerable work has been done on pixel-level image fusion, less work has been done at the feature level. Feature-based algorithms are usually less sensitive to signal-level noise [9, 26]. Furthermore, one drawback of the DWT, and to a lesser extent of the pyramid transform, is that it generally yields a shift-variant signal representation, meaning that a simple shift of the input signal may lead to completely different transform coefficients [4]. This is particularly undesirable when the source images are noisy or cannot be perfectly registered.

In this paper, we propose an effective multifocus image fusion algorithm based on the redundant wavelet transform (RWT), which combines aspects of both pixel-level and feature-level fusion. The edge features are separately extracted from the wavelet planes of each input image, and a decision map representing salience or activity is then built from this edge information to guide the fusion process in the RWT domain. Since the edges of objects and of parts of objects carry the information of interest, it is reasonable to focus on them in the fusion algorithm. The visual and quantitative analyses of the different fusion results show that the proposed method improves the fusion quality and outperforms several existing pixel-based methods.

    2 Redundant wavelet transform

Generally, the DWT, which is referred to as Mallat's algorithm [27], is based on the orthogonal decomposition of the image onto a wavelet basis in order to avoid redundancy of information in the pyramid at each level of resolution. However, redundancy of information is always helpful for an analysis problem. This fact remains true for image fusion, since any fusion rule essentially reduces to the problem of analysing the images to be fused and then selecting the dominant features that are important in a particular sense [28]. Consequently, an RWT, which avoids image decimation, has been developed for image processing applications such as denoising [29], texture classification [30] and fusion [31-33]. Its advantage lies in the pixelwise analysis, without decimation, for the characterisation of features, and it corresponds to an overcomplete representation.

This fundamental property can help to develop fusion procedures based on the following intuitive idea: when a dominant or significant feature appears at one level, it should appear at successive levels as well. In contrast, non-significant features, such as noise, do not appear at the next levels. Thus, the dominant feature is tied to its presence or duplication at successive levels. This important property constitutes the basic idea for the implementation of the proposed method. The discrete implementation of the RWT can be accomplished by using the à trous (with holes) algorithm, which presents the following interesting properties [28, 34]:

The evolution of the wavelet decomposition can be followed from level to level.

A single wavelet coefficient plane is produced at each level of decomposition.

The wavelet coefficients are computed for each location, allowing better detection of dominant features.

    It is easily implemented.

The à trous wavelet transform is a non-orthogonal multiresolution decomposition [34], which separates the low-frequency information (approximation) from the high-frequency information (wavelet coefficients). Such a separation uses a low-pass filter h(n), associated with the scale function φ(x), to obtain successive approximations of a signal through scales as follows

$a_j(k) = \sum_n h(n)\, a_{j-1}(k + n 2^{j-1}), \qquad j = 1, \ldots, N$    (1)

where a_0(k) corresponds to the original discrete signal s(k); j and N are the scale index and the number of scales, respectively.

The wavelet coefficients are extracted by using a high-pass filter g(n), associated with the wavelet function ψ(x), through the following filtering operation

$w_j(k) = \sum_n g(n)\, a_{j-1}(k + n 2^{j-1})$    (2)

The perfect reconstruction (PR) of the data is performed by introducing two dual filters h_r(n) and g_r(n) that should satisfy the quadrature mirror filter condition [35]

$\sum_n \left[ h_r(n)\, h(l - n) + g_r(n)\, g(l - n) \right] = \delta(l)$    (3)

where δ(l) is the Dirac function.

A simple choice consists in considering the filters h_r(n) and g_r(n) as equal to the Dirac function (h_r(n) = g_r(n) = δ(n)).

Therefore g(n) is deduced from (3) as

$g(n) = \delta(n) - h(n)$    (4)

Hence, the wavelet coefficients are obtained by a simple difference between two successive approximations as follows

$w_j(k) = a_{j-1}(k) - a_j(k)$    (5)

To construct the sequence, this algorithm performs successive convolutions with a filter obtained from h(n) by inserting 2^{j-1} - 1 zeros between its coefficients at each scale j (the "holes" of the à trous algorithm).
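To make this concrete, the following Python/NumPy sketch implements the decomposition of (1), (2) and (5) and its trivial inverse. It is only an illustration of the à trous scheme: the B3-spline scaling filter, the mirror boundary handling and the function names are assumptions, not fixed by the text here.

```python
import numpy as np
from scipy.ndimage import convolve

# B3-spline scaling filter h(n); a common choice in the cited a trous
# literature [28, 34], assumed here because the excerpt does not give h(n).
H = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0


def atrous_decompose(image, levels):
    """Shift-invariant decomposition per (1), (2), (5): wavelet planes w_1..w_N
    plus the final approximation a_N, all at full image resolution."""
    approx = np.asarray(image, dtype=float)
    planes = []
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)
        # Dilated ("holey") filter: insert step-1 zeros between the taps of h,
        # which realises the k + n*2^(j-1) indexing of (1) and (2).
        h_j = np.zeros((len(H) - 1) * step + 1)
        h_j[::step] = H
        smoothed = convolve(approx, h_j[None, :], mode='mirror')    # rows
        smoothed = convolve(smoothed, h_j[:, None], mode='mirror')  # columns
        planes.append(approx - smoothed)  # w_j = a_{j-1} - a_j, eq. (5)
        approx = smoothed
    return planes, approx


def atrous_reconstruct(planes, residual):
    """Inverse RWT: with h_r = g_r = delta, reconstruction is a plain sum."""
    return residual + np.sum(planes, axis=0)
```

Because no decimation is performed, every wavelet plane keeps the size of the input image, which is what makes the representation shift-invariant and overcomplete.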


    3.3 Fusion rule

The quality of the fusion is tied to the choice of an appropriate fusion rule. In this new method, the edge features, EED, are extracted from each source image by using the à trous wavelet transform. Since the EED simply superimposes the corresponding coefficients through the wavelet planes, it mainly emphasises the thicker edges; some important fine details, such as thin lines or weak edges, will be neglected. Because the coefficients of each wavelet plane fluctuate around zero with a mean value of about zero, the same holds for the EED. Therefore the Laplacian operator, a second-order derivative, is introduced to enhance such grey-level variations, particularly around the edges. The Laplacian operator generally has a strong response to fine detail and is more suitable for image enhancement than the gradient operator [38].

3.3.1 Activity measure: The degree to which each sample in the image is salient will be expressed by the so-called activity. Computation of the activity depends on the nature of the source images as well as on the particular fusion algorithm.

    Here, we define the activity from the feature level, that is,EED, for the characterisation of the dominant information.

At each location p in image X (or Y), the activity can be measured by the Laplacian operator, which is computed as follows

$L_{EED_X}(p) = \sum_{q \in R,\; q \neq p} \left[ EED_X(q) - EED_X(p) \right]$    (11)

where R is a local area surrounding p in image X and q is a location within the area R. To take more information into account, a smoother and more robust activity function LA is proposed, which computes the average value over a region as follows

$LA_X(p) = \frac{1}{n_W} \sum_{q \in W} L_{EED_X}(q)$    (12)

where W is a region of size m × n centred at location p, q are the coefficients belonging to W and n_W is the number of coefficients in W. In this paper, the region has a size of 5 × 5 around p, hence n_W = 25.
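As an illustration of (11) and (12), the sketch below assumes the EED is the superposition of the wavelet planes, as suggested by the description above, and takes R as a small square neighbourhood; the neighbourhood size, the box-filter averaging and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def edge_feature(planes):
    """Hypothetical EED: superposition of the wavelet planes of one image."""
    return np.sum(planes, axis=0)


def activity(eed, r=3, window=5):
    """Activity LA of (11)-(12); R is taken here as an r x r neighbourhood."""
    eed = np.asarray(eed, dtype=float)
    # (11): sum_{q in R, q != p} [EED(q) - EED(p)] = sum_R EED(q) - |R| EED(p),
    # i.e. a Laplacian-like second-order response on the EED.
    neighbourhood_sum = uniform_filter(eed, size=r, mode='mirror') * (r * r)
    l_eed = neighbourhood_sum - (r * r) * eed
    # (12): average the response over the window W (5 x 5 in the paper).
    return uniform_filter(l_eed, size=window, mode='mirror')
```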

3.3.2 Decision map: The construction of the decision map (DM) is a key point, since its output governs the combination map. Therefore the decision map actually determines the combination of the various wavelet decompositions, and hence the construction of the composite.

Figure 4 Test image and its EED: (a) test image; (b) level-1 decomposition d1; (c) level-2 decomposition d2; (d) level-3 decomposition d3; (e) the residual image A3; (f) the EED of the test image



In our case, a decision map of the same size as the wavelet plane is created to record the activity comparison results according to a selection rule

$DM(p) = \begin{cases} 1 & \text{if } LA_X(p) > LA_Y(p) \\ -1 & \text{if } LA_X(p) < LA_Y(p) \\ 0 & \text{if } LA_X(p) = LA_Y(p) \end{cases}$    (13)

The decision map built from (13) is preliminary, because the decision is just taken for each coefficient without reference to the neighbouring ones. One may assume that spatially close samples are likely to belong to the same image feature and thus should be treated in the same way. When comparing the corresponding image features in multiple source images, considering the dependencies between the transform coefficients may lead to a more robust fusion strategy. Li et al. [23] applied consistency verification to refine the decision map by using a majority filter. Specifically, if the centre composite coefficient comes from image X whereas the majority of the surrounding coefficients come from image Y, the centre sample is then changed to come from image Y. We refine the preliminary decision map with consistency verification to obtain a new decision map (NDM). Thus, the composite image Z is finally obtained based on the NDM as

$d_{j,Z}(p) = d_{j,X}(p),\; j = 1, \ldots, J; \qquad A_{J,Z}(p) = A_{J,X}(p) \qquad \text{if } NDM(p) = 1$    (14)

$d_{j,Z}(p) = d_{j,Y}(p),\; j = 1, \ldots, J; \qquad A_{J,Z}(p) = A_{J,Y}(p) \qquad \text{if } NDM(p) = -1$    (15)

$d_{j,Z}(p) = \frac{d_{j,X}(p) + d_{j,Y}(p)}{2},\; j = 1, \ldots, J; \qquad A_{J,Z}(p) = \frac{A_{J,X}(p) + A_{J,Y}(p)}{2} \qquad \text{if } NDM(p) = 0$    (16)

Since the decision map is constructed from the edge features, this decision method attempts to exploit the fact that significant image features, that is, edges, tend to be stable with respect to variations in space and scale. Once the decision map is determined, the mapping is fixed for all the wavelet coefficients. In this way, all the corresponding samples are fused according to the same decision.
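The selection rule (13), the consistency verification and the combination rules (14)-(16) can be sketched as follows; the 3 × 3 majority window and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def build_ndm(la_x, la_y, window=3):
    """Preliminary decision map (13) refined by a majority filter as in [23]."""
    dm = np.sign(la_x - la_y)          # +1: take X, -1: take Y, 0: tie, eq. (13)
    votes = uniform_filter(dm, size=window, mode='mirror')
    return np.sign(votes)              # consistency verification (majority vote)


def combine(planes_x, res_x, planes_y, res_y, ndm):
    """Composite wavelet planes and residual according to (14)-(16)."""
    def pick(cx, cy):
        return np.where(ndm > 0, cx, np.where(ndm < 0, cy, (cx + cy) / 2.0))
    fused_planes = [pick(dx, dy) for dx, dy in zip(planes_x, planes_y)]
    return fused_planes, pick(res_x, res_y)
```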

The proposed multifocus image fusion scheme is illustrated in Fig. 5 and the fusion process is accomplished by the following steps (an end-to-end sketch follows the list):

Step 1: Decompose the source images X and Y by the à trous wavelet transform at resolution level 5.

Step 2: Extract features from the wavelet planes to form the edge images EEDX and EEDY.

Step 3: Measure and compare the activities of the two edge images to create a decision map.

Step 4: Refine the decision map with consistency verification and construct the composite image.

Step 5: Perform the inverse RWT (IRWT) to obtain the fused image.
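Chaining the hypothetical helpers sketched above gives a compact picture of the whole pipeline:

```python
def fuse_multifocus(img_x, img_y, levels=5):
    """End-to-end sketch of Steps 1-5 (function names are illustrative)."""
    planes_x, res_x = atrous_decompose(img_x, levels)                  # Step 1
    planes_y, res_y = atrous_decompose(img_y, levels)
    eed_x, eed_y = edge_feature(planes_x), edge_feature(planes_y)      # Step 2
    la_x, la_y = activity(eed_x), activity(eed_y)                      # Step 3
    ndm = build_ndm(la_x, la_y)                                        # Step 4
    fused_planes, fused_res = combine(planes_x, res_x, planes_y, res_y, ndm)
    return atrous_reconstruct(fused_planes, fused_res)                 # Step 5
```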

    4 Experimental results

The proposed method has been tested on several pairs of multifocus images. Three examples are given here to illustrate the performance of the fusion process. In all cases, the grey values of the pixels are scaled between 0 and 255. The source images are assumed to be registered and no pre-processing is performed.

The first example is shown in Fig. 6, which contains eleven images. Figs. 6a and b are two multifocus images taken at different distances from the camera, and only one clock in either image is in focus. The decision map shown in Fig. 6c displays how the wavelet coefficients are generated from the two input sources: the bright pixels indicate that coefficients from the image in Fig. 6a are selected, whereas the black pixels indicate that coefficients from the image in Fig. 6b are selected. Fig. 6d is the fusion result obtained using the proposed method. Figs. 6e-g are the fused images obtained using the gradient pyramid transform (GPT) method [20], the DWT method [24] and the CTDWT method [39], respectively.

To allow better comparisons, the difference images between each fused image and the source image are given in Figs. 6h-k. For the focused regions, the difference between the source image and the fused image should be zero. For example, in Fig. 6a the left clock is clear, and in Fig. 6h the difference between Figs. 6d and 6a in the left clock region is small.

Figure 5 Schematic diagram of the proposed image fusion method


Figure 6 Example 1: (a) focus on the left; (b) focus on the right; (c) decision map; (d) fused image using the proposed method; (e) fused image using the GPT method; (f) fused image using the DWT method; (g) fused image using the CTDWT method; (h) difference between d and a; (i) difference between e and a; (j) difference between f and a; (k) difference between g and a


This demonstrates that the whole focused area is successfully transferred to the fused image. However, the differences in the same regions shown in Figs. 6i-k are greater, which shows that the fused results obtained using GPT, DWT and CTDWT are worse than that of the proposed method. From Figs. 7 and 8 the same conclusion can be drawn, namely that the proposed method outperforms the other three approaches.

For further comparison, two objective criteria are used to compare the fusion results. The first criterion is mutual information (MI) [26, 40]. It is a metric defined as the sum of the MI between each source image and the fused image. Considering the two source images X and Y, and a fused image Z,

$I_{Z,X}(z, x) = \sum_{z,x} P_{Z,X}(z, x) \log \frac{P_{Z,X}(z, x)}{P_Z(z) P_X(x)}$    (17)

$I_{Z,Y}(z, y) = \sum_{z,y} P_{Z,Y}(z, y) \log \frac{P_{Z,Y}(z, y)}{P_Z(z) P_Y(y)}$    (18)

where P_X, P_Y and P_Z are the probability density functions of the images X, Y and Z, respectively, and P_{Z,X} and P_{Z,Y} are the joint probability density functions. Thus the image fusion performance measure can be defined as

$MI = I_{Z,X}(z, x) + I_{Z,Y}(z, y)$    (19)
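A minimal sketch of the MI measure of (17)-(19), estimating the densities with joint histograms; the 256-bin discretisation and the use of the natural logarithm are assumptions.

```python
import numpy as np


def mutual_information(fused, source, bins=256):
    """I(Z; source) as in (17)/(18), with histogram density estimates."""
    joint, _, _ = np.histogram2d(fused.ravel(), source.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_z = p_joint.sum(axis=1, keepdims=True)   # marginal of the fused image
    p_s = p_joint.sum(axis=0, keepdims=True)   # marginal of the source image
    nz = p_joint > 0                           # avoid log(0)
    return float(np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_z @ p_s)[nz])))


def fusion_mi(fused, src_x, src_y):
    """Fusion performance measure of (19): MI = I_{Z,X} + I_{Z,Y}."""
    return mutual_information(fused, src_x) + mutual_information(fused, src_y)
```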

The second criterion is the spatial frequency (SF) [39, 41], which measures the overall activity level of an image and reflects detail differences and texture changes. For an m × n image T, the SF is defined as follows

$SF = \sqrt{(RF)^2 + (CF)^2}$    (20)

where RF and CF are the row frequency and column frequency, respectively,

$RF = \sqrt{\frac{1}{mn} \sum_{i=1}^{m} \sum_{j=2}^{n} \left[ T(i, j) - T(i, j-1) \right]^2}$    (21)

$CF = \sqrt{\frac{1}{mn} \sum_{j=1}^{n} \sum_{i=2}^{m} \left[ T(i, j) - T(i-1, j) \right]^2}$    (22)

For both criteria, the larger the value, the better the fusion result.
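A corresponding sketch of the SF measure of (20)-(22):

```python
import numpy as np


def spatial_frequency(img):
    """Spatial frequency of an m x n image T, per (20)-(22)."""
    t = np.asarray(img, dtype=float)
    m, n = t.shape
    rf = np.sqrt(np.sum((t[:, 1:] - t[:, :-1]) ** 2) / (m * n))   # row frequency (21)
    cf = np.sqrt(np.sum((t[1:, :] - t[:-1, :]) ** 2) / (m * n))   # column frequency (22)
    return np.sqrt(rf ** 2 + cf ** 2)                             # (20)
```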

The values of MI and SF for Figs. 6-8 are listed in Table 1. As can be readily ascertained, the proposed method provides better performance and outperforms the other three approaches in terms of both MI and SF. Combining the visual inspection and the quantitative results, it can be concluded that the proposed fusion method is more effective.

Figure 7 Example 2: (a) focus on the front; (b) focus on the rear; (c) fused image using the proposed method; (d) fused image using the GPT method; (e) fused image using the DWT method; (f) fused image using the CTDWT method; (g) difference between c and a; (h) difference between d and a; (i) difference between e and a; (j) difference between f and a


5 Conclusions

In this paper, a new method for multifocus image fusion based on the RWT, which combines traditional pixel-level fusion with some aspects of feature-level fusion, is presented. Its underlying advantages include: (1) the RWT is shift-invariant and the à trous algorithm has low computational complexity, which makes it easier to implement than other MRA tools; (2) some of the problems of pixel-level fusion methods, such as sensitivity to noise, blurring effects and misregistration, are effectively overcome; and (3) using features to represent the image information not only reduces the complexity of the procedure but also increases the reliability of the fusion results. The basic idea of the proposed method is to decompose the input images by using the à trous wavelet transform, and then to use the edge features extracted from the wavelet planes to guide the combination of the coefficients. The experimental results on several pairs of multifocus images have demonstrated the superior performance of the proposed fusion scheme.

    6 Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under project numbers 60572097 and 60736007, the Chinese Scholarship Council and the NPU fundamental research program. The authors would like to thank the anonymous reviewers for their helpful comments.

Figure 8 Example 3: (a) focus on the Pepsi can; (b) focus on the testing card; (c) fused image using the proposed method; (d) fused image using the GPT method; (e) fused image using the DWT method; (f) fused image using the CTDWT method

Table 1 Performance of different fusion methods

Source images    MI: GPT   DWT    CTDWT   Proposed     SF: GPT   DWT    CTDWT   Proposed
Fig. 6           2.03      2.49   1.87    2.63         4.73      5.34   5.28    5.45
Fig. 7           1.73      2.21   1.57    2.39         7.46      8.23   7.84    8.51
Fig. 8           1.95      2.53   1.87    2.56         9.23      9.39   9.34    9.58


    7 References

[1] TU T.M., CHENG W.C., CHANG C.P., HUANG P.S., CHANG J.C.: 'Best tradeoff for high-resolution image fusion to preserve spatial details and minimize color distortion', IEEE Geosci. Remote Sens. Lett., 2007, 4, (2), pp. 302-306

[2] POHL C., GENDEREN J.L.: 'Multisensor image fusion in remote sensing: concepts, methods and applications', Int. J. Remote Sens., 1998, 19, (5), pp. 823-854

[3] WALD L.: 'Some terms of reference in data fusion', IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1190-1193

[4] PIELLA G.: 'A general framework for multiresolution image fusion: from pixels to regions', Inf. Fusion, 2003, 4, (4), pp. 259-280

[5] GOSHTASBY A.A., NIKOLOV S.: 'Image fusion: advances in the state of the art', Inf. Fusion, 2007, 8, (2), pp. 114-118

[6] DANIEL M.M., WILLSKY A.S.: 'A multiresolution methodology for signal-level fusion and data assimilation with application to remote sensing', Proc. IEEE, 1997, 85, (1), pp. 164-180

[7] ZHANG Z., BLUM R.S.: 'Multisensor image fusion using a region-based wavelet transform approach'. Proc. DARPA IUW, 1997, pp. 1447-1451

[8] HALL D.L., LLINAS J.: 'An introduction to multisensor data fusion', Proc. IEEE, 1997, 85, (1), pp. 6-23

[9] ZHANG Z., BLUM R.S.: 'A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application', Proc. IEEE, 1999, 87, (8), pp. 1315-1326

[10] DASARATHY B.V.: 'Decision fusion' (IEEE Computer Society Press, 1993)

[11] JEON B., LANDGREBE D.A.: 'Decision fusion approach for multitemporal classification', IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1227-1233

[12] AGGARWAL J.K.: 'Multisensor fusion for computer vision' (Springer-Verlag, 1993)

[13] SEALES W.B., DUTTA S.: 'Everywhere-in-focus image fusion using controllable cameras', Proc. SPIE, 1996, 2905, pp. 227-234

[14] ROCKINGER O.: 'Pixel-level fusion of image sequences using wavelet frames'. Proc. 16th Leeds Applied Shape Research Workshop, Leeds, UK, 1996, pp. 149-154

[15] STUBBINGS T.C., NIKOLOV S.G., HUTTER H.: 'Fusion of 2-D SIMS images using the wavelet transform', Mikrochimica Acta, 2000, 133, pp. 273-278

[16] BURT P.J.: 'The pyramid as a structure for efficient computation', in 'Multiresolution image processing and analysis' (Springer, 1984)

[17] TOET A.: 'Image fusion by a ratio of low-pass pyramid', Pattern Recognit. Lett., 1989, 9, (4), pp. 245-253

[18] TOET A.: 'Hierarchical image fusion', Mach. Vis. Appl., 1990, 3, (1), pp. 1-11

[19] TOET A.: 'Multiscale contrast enhancement with application to image fusion', Opt. Eng., 1992, 31, (5), pp. 1026-1039

[20] BURT P.J., KOLCZYNSKI R.J.: 'Enhanced image capture through fusion'. Proc. Fourth Int. Conf. on Computer Vision, Berlin, Germany, May 1993, pp. 173-182

[21] WILSON T.A., ROGERS S.K., MYERS L.R.: 'Perceptual based hyperspectral image fusion using multi-resolution analysis', Opt. Eng., 1995, 34, (11), pp. 3154-3164

[22] YOCKY D.: 'Image merging and data fusion by means of the discrete two-dimensional wavelet transform', J. Opt. Soc. Am. A, 1995, 12, (9), pp. 1834-1845

[23] LI H., MANJUNATH B.S., MITRA S.K.: 'Multisensor image fusion using the wavelet transform', Graph. Models Image Process., 1995, 57, (3), pp. 235-245

[24] SANTOS M., PAJARES G., PORTELA M., CRUZ J.M.: 'A new wavelet image fusion strategy', Lecture Notes Comput. Sci., 2003, 2652, pp. 919-926

[25] PAJARES G., CRUZ J.M.: 'A wavelet-based image fusion tutorial', Pattern Recognit., 2004, 37, (9), pp. 1855-1872

[26] LI S.T., YANG B.: 'Multifocus image fusion using region segmentation and spatial frequency', Image Vis. Comput., 2008, 26, (7), pp. 971-979

[27] MALLAT S.: 'A theory for multiresolution signal decomposition: the wavelet representation', IEEE Trans. Pattern Anal. Mach. Intell., 1989, 11, (7), pp. 674-693

[28] CHIBANI Y., HOUACINE A.: 'Redundant versus orthogonal wavelet decomposition for multisensor image fusion', Pattern Recognit., 2003, 36, (4), pp. 879-887


[29] MALFAIT M., ROOSE D.: 'Wavelet-based image denoising using a Markov random field a priori model', IEEE Trans. Image Process., 1997, 6, (4), pp. 549-565

[30] UNSER M.: 'Texture classification and segmentation using wavelet frames', IEEE Trans. Image Process., 1995, 4, (11), pp. 1549-1560

[31] NUNEZ J., OTAZU X., FORS O., PRADES A., PALA V., ARBIOL R.: 'Multiresolution-based image fusion with additive wavelet decomposition', IEEE Trans. Geosci. Remote Sens., 1999, 37, (3), pp. 1204-1211

[32] AIAZZI B., ALPARONE L., BARONTI S., GARZELLI A.: 'Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis', IEEE Trans. Geosci. Remote Sens., 2002, 40, (10), pp. 2300-2312

[33] CHIBANI Y.: 'Additive integration of SAR features into multispectral SPOT images by means of the à trous wavelet decomposition', ISPRS J. Photogramm. Remote Sens., 2006, 60, pp. 306-314

[34] STARCK J.L., MURTAGH F.: 'Image restoration with noise suppression using the wavelet transform', Astron. Astrophys., 1994, 288, (1), pp. 342-348

[35] SHENSA M.J.: 'The discrete wavelet transform: wedding the à trous and Mallat algorithms', IEEE Trans. Signal Process., 1992, 40, (10), pp. 2464-2482

[36] MARR D., HILDRETH E.: 'Theory of edge detection', Proc. R. Soc. Lond. B, 1980, 207, pp. 187-217

[37] MALLAT S., ZHONG S.: 'Characterization of signals from multiscale edges', IEEE Trans. Pattern Anal. Mach. Intell., 1992, 14, (7), pp. 710-732

[38] GONZALEZ R.C., WOODS R.E.: 'Digital image processing' (Prentice-Hall, 2002)

[39] LI S.T., YANG B.: 'Multifocus image fusion by combining curvelet and wavelet transform', Pattern Recognit. Lett., 2008, 29, (9), pp. 1295-1301

[40] QU G.H., ZHANG D.L., YAN P.F.: 'Information measure for performance of image fusion', Electron. Lett., 2002, 38, (7), pp. 313-315

[41] ZHENG Y., ESSOCK E.A., HANSEN B.C., HAUN A.M.: 'A new metric based on extended spatial frequency and its application to DWT-based fusion algorithms', Inf. Fusion, 2007, 8, pp. 177-192
