
A COMPARATIVE ANALYSIS OF IMAGE FUSION METHODS

Prepared by: Amr Nasr

Introduction

• Developments in the field of sensing technology

• Multi-sensor systems in many applications such as remote sensing, medical imaging, military, etc.

• The result is an increase in the amount of available data

• Can we reduce this increasing volume of information while simultaneously extracting all the useful information?

Basics of image fusion

The aim of image fusion is to:
• Reduce the amount of data
• Retain important information
• Create a new image that is more suitable for the purposes of human/machine perception or for further processing tasks

Single Sensor image fusion system

• A sequence of images is taken by the sensor
• They are then fused into one image
• It has some limitations due to the capability of the sensor

Multi-sensor image fusion

• Images are taken by more than one sensor
• They are then fused into one image
• It overcomes the limitations of a single-sensor system

Fusion Camera used in Avatar

DARPA Unveils Gigapixel Camera

• The gigapixel camera, in a manner similar to a parallel-processor supercomputer, uses between 100 and 150 micro cameras to build a wide-field panoramic image. Correcting each small camera's local aberration and focus provides extremely high resolution, combined with smaller system volume and less distortion than traditional wide-field lens systems.

A 360-degree panoramic camera for the police

Fusion Categories

• Multi-view fusion: images are taken from different viewpoints to make a 3D view
• Multi-modal fusion
• Multi-focus fusion

Multi-modal fusion

Multi-focus fusion

System level consideration

• Three key non-fusion processes: image registration, image pre-processing, and image post-processing

• The post-processing stage depends on the type of display, the fusion system being used, and the personal preference of a human operator

• Pre-processing makes the images best suited for the fusion algorithm

• Image registration is the process of aligning images so that their details overlap accurately.

Methodology

• Feature detection: the algorithm should be able to detect the same features in all source images

• Feature matching: correspondence is established between the features detected in the sensed image and those detected in the reference image

• Image resampling and transformation: the sensed image is transformed to align with the reference image

Methods of image fusion

Classification

Spatial domain fusion:
• Weighted pixel averaging
• Brovey method
• Principal component analysis (PCA)
• Intensity-Hue-Saturation (IHS)

Transform domain fusion:
• Laplacian pyramid
• Curvelet transform
• Discrete wavelet transform (DWT)

Weighted pixel averaging

• Simplest image fusion technique
• F(x,y) = Wa*A(x,y) + Wb*B(x,y)
• Where Wa and Wb are scalar weights

• It has the advantage of suppressing any noise in the source imagery.

• However, it also suppresses salient image features, inevitably producing a low-contrast fused image with a 'washed-out' appearance.
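The weighted-averaging formula above can be sketched in a few lines of numpy. The function name and toy arrays are illustrative, not from the slides:

```python
import numpy as np

def weighted_average_fusion(a, b, wa=0.5, wb=0.5):
    """Fuse two co-registered source images by weighted pixel averaging:
    F(x, y) = wa * A(x, y) + wb * B(x, y)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return wa * a + wb * b

# Two toy 2x2 "images": averaging suppresses noise but also contrast.
A = np.array([[10.0, 20.0], [30.0, 40.0]])
B = np.array([[20.0, 10.0], [50.0, 0.0]])
F = weighted_average_fusion(A, B)
```

With equal weights this is a plain mean, which illustrates the washed-out effect: any feature present in only one source is halved in amplitude.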

Pyramidal method

• Produces sharp, high-contrast images that are clearly more appealing and have greater information content than simpler ratio-based schemes.

• Image pyramid is essentially a data structure consisting of a series of low-pass or band-pass copies of an image, each representing pattern information of a different scale.

Flow of pyramidal method
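The pyramid flow can be sketched with plain numpy. This is a minimal illustration rather than the exact scheme from the slides: a 2x2 block mean stands in for a Gaussian low-pass, and details are fused by keeping the larger-magnitude coefficient (a common selection rule). All function names are illustrative:

```python
import numpy as np

def shrink(img):
    """Low-pass and downsample: 2x2 block mean (a crude pyramid step)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand(img, shape):
    """Upsample by pixel replication back to `shape`."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Series of band-pass 'detail' images plus a low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = shrink(cur)
        pyr.append(cur - expand(down, cur.shape))  # band-pass detail
        cur = down
    pyr.append(cur)                                # low-pass residual
    return pyr

def fuse_pyramids(pa, pb):
    """Keep the larger-magnitude detail coefficient; average the residuals."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    return fused

def reconstruct(pyr):
    """Invert the pyramid: expand the residual and add details back in."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = expand(cur, detail.shape) + detail
    return cur

A = np.random.rand(8, 8)
F = reconstruct(fuse_pyramids(laplacian_pyramid(A, 2), laplacian_pyramid(A, 2)))
```

Fusing an image with itself reconstructs it exactly, which is a handy sanity check that the pyramid round-trips.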

Discrete wavelet transform method

• It represents any arbitrary function x(t) as a superposition of a set of wavelets or basis functions, generated from a mother wavelet by dilations or contractions (scaling) and translations (shifts)
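As an illustration of DWT-based fusion, here is a single-level 2-D Haar transform written directly in numpy (a real system would typically use more levels and a smoother wavelet; the function names are illustrative). Approximation coefficients are averaged and detail coefficients fused by maximum magnitude:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2          # row low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2          # row high-pass
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    lo, hi = np.empty((2 * h, w)), np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def dwt_fuse(a, b):
    """Average the approximations, keep larger-magnitude detail coefficients."""
    sa, sb = haar_dwt2(a), haar_dwt2(b)
    fused = [(sa[0] + sb[0]) / 2] + [np.where(np.abs(x) >= np.abs(y), x, y)
                                     for x, y in zip(sa[1:], sb[1:])]
    return haar_idwt2(*fused)

A = np.random.rand(4, 4)
F = dwt_fuse(A, A)
```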

APPLICATIONS OF IMAGE FUSION

Medical image fusion

• Helps physicians to extract features from multi-modal images.

• Two types: structural (MRI, CT) and functional (PET, SPECT)

Objectives of image fusion in remote sensing

• Improve the spatial resolution
• Improve the geometric precision
• Enhance the capability of feature display
• Improve classification accuracy
• Enhance the capability of change detection
• Replace or repair defective image data
• Enhance visual interpretation

Dual resolution images in satellites

Several commercial earth observation satellites carry dual-resolution sensors of this kind, which provide high-resolution panchromatic images (HRPIs) and low-resolution multispectral images (LRMIs).

For example, the first commercial high-resolution satellite, IKONOS, launched on September 24, 1999, produces 1-m HRPIs and 4-m LRMIs.

PRINCIPLES OF SEVERAL EXISTING IMAGE FUSION METHODS USED IN REMOTE SENSING

• Multiresolution Analysis-Based Intensity Modulation (MRAIM)
• À Trous Algorithm-Based Wavelet Transform (ATW)
• Principal Component Analysis (PCA)
• High-Pass Modulation (HPM)
• High-Pass Filtering (HPF)
• Brovey Transform (BT)
• IHS Transform

Relationship between low-resolution pixel and the corresponding high-resolution pixels

Each low-resolution pixel value (or radiance) can be treated as a weighted average of the corresponding high-resolution pixel values.
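The equation image for this relationship was not preserved in the text; for a resolution ratio M, the statement above corresponds to a model of the form (symbols L, H, B, and w are my notation, not the slide's):

```latex
L(i,j) \;=\; \sum_{(m,n)\,\in\, B_{ij}} w_{mn}\, H(m,n),
\qquad \sum_{(m,n)\,\in\, B_{ij}} w_{mn} = 1
```

where B_ij is the M×M block of high-resolution pixels covered by low-resolution pixel (i,j); the simplest choice is uniform weights w_mn = 1/M².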

Brovey Transform

The BT is based on the chromaticity transform. It is a simple method for combining data from different sensors, with the limitation that only three bands are involved. Its purpose is to normalize the three multispectral bands used for RGB display and to multiply the result by any other desired data to add the intensity or brightness component to the image.
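The normalize-then-modulate step described above can be sketched as follows. The function name and the eps guard are illustrative assumptions, and the exact normalization constant varies between implementations:

```python
import numpy as np

def brovey_fuse(r, g, b, pan, eps=1e-12):
    """Brovey transform sketch: normalize each band by the band sum, then
    modulate by the high-resolution panchromatic image (all inputs
    co-registered, pan resampled to the multispectral grid)."""
    total = r + g + b + eps   # eps guards against division by zero
    return r / total * pan, g / total * pan, b / total * pan

r = g = b = np.ones((2, 2))
pan = np.full((2, 2), 6.0)
fr, fg, fb = brovey_fuse(r, g, b, pan)
```

By construction the three fused bands sum back to the pan image, which is why BT transfers brightness so directly (and why it can distort spectral content).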

IHS Transform

The IHS technique is a standard procedure in image fusion, with the major limitation that only three bands are involved. Originally, it was based on the RGB true color space.
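The slides do not include the transform equations, so as an illustration here is one common "fast" linear variant of IHS fusion, in which the intensity is taken as the band mean and the pan image is substituted for it. This is a sketch of the general idea, not necessarily the exact variant used in the experiments:

```python
import numpy as np

def ihs_like_fuse(r, g, b, pan):
    """Fast IHS-style fusion: take the mean of the three bands as the
    intensity I, then add the difference (pan - I) to every band --
    equivalent to substituting pan for I in a linear IHS transform."""
    i = (r + g + b) / 3.0
    d = pan - i
    return r + d, g + d, b + d

r = np.array([[1.0]])
g = np.array([[2.0]])
b = np.array([[3.0]])
pan = np.array([[2.0]])
fr, fg, fb = ihs_like_fuse(r, g, b, pan)
```

When the pan image already equals the intensity, the bands pass through unchanged; spectral distortion appears exactly where pan and intensity disagree.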

High-Pass Filtering

The principle of HPF is to add the high-frequency information from the HRPI to the LRMIs to get the HRMIs. The high-frequency information is computed by filtering the HRPI with a high-pass filter, or by taking the original HRPI and subtracting the LRPI, which is the low-pass filtered HRPI. This method preserves a high percentage of the spectral characteristics, since the spatial information is associated with the high-frequency information of the HRMIs, which is from the HRPI, and the spectral information is associated with the low-frequency information of the HRMIs, which is from the LRMIs. The mathematical model is
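The equation image from the original slide was lost; reconstructed from the description above (add the HRPI-minus-LRPI difference to each band), the standard HPF model reads:

```latex
\mathrm{HRMI}_k \;=\; \mathrm{LRMI}_k + \left(\mathrm{HRPI} - \mathrm{LRPI}\right),
\qquad k = 1, \ldots, N
```

where the LRPI is the low-pass filtered HRPI and k indexes the multispectral bands.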

High-Pass Modulation

The principle of HPM is to transfer the high-frequency information of the HRPI to the LRMIs, with modulation coefficients that equal the ratio between the LRMIs and the LRPI. The LRPI is obtained by low-pass filtering the HRPI. The equivalent mathematical model is
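The equation image was likewise lost; reconstructed from the description (per-pixel modulation by the LRMI/LRPI ratio), the HPM model reads:

```latex
\mathrm{HRMI}_k \;=\; \frac{\mathrm{LRMI}_k}{\mathrm{LRPI}} \,\cdot\, \mathrm{HRPI},
\qquad k = 1, \ldots, N
```

which transfers the pan detail multiplicatively rather than additively as in HPF.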

Principal Component Analysis

The PCA method is similar to the IHS method, with the main advantage that an arbitrary number of bands can be used. The input LRMIs are first transformed into the same number of uncorrelated principal components.

Then, similar to the IHS method, the first principal component (PC1) is replaced by the HRPI, which is first stretched to have the same mean and variance as PC1. As a last step, the HRMIs are determined by performing the inverse PCA transform.

where the transformation matrix is formed from the eigenvectors of the covariance matrix of the input bands
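The transform / substitute / invert sequence described above can be sketched with numpy's eigendecomposition. This is a minimal illustration (the name pca_fuse and the small-epsilon guard are mine, not from the slides):

```python
import numpy as np

def pca_fuse(bands, pan):
    """PCA pan-sharpening sketch: project the N multispectral bands onto
    their principal components, replace PC1 with the pan image (stretched
    to PC1's mean and std), then invert the transform."""
    n, h, w = bands.shape
    x = bands.reshape(n, -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x)                       # n x n band covariance
    _, vecs = np.linalg.eigh(cov)         # eigenvalues ascending
    vecs = vecs[:, ::-1]                  # reorder so PC1 comes first
    pcs = vecs.T @ (x - mean)             # principal components
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                            # substitute stretched pan for PC1
    fused = vecs @ pcs + mean             # inverse PCA transform
    return fused.reshape(n, h, w)

rng = np.random.default_rng(0)
bands = rng.random((3, 8, 8))
pan = rng.random((8, 8))
fused = pca_fuse(bands, pan)
```

Because the substituted component is stretched to PC1's statistics and the remaining components are untouched, each fused band keeps its original mean.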

À Trous Algorithm-Based Wavelet Transform

It is based on the wavelet transform and is particularly suitable for signal processing, since it is isotropic and shift-invariant and does not create artifacts when used in image processing. Its application to image fusion has been reported in the literature.

The ATW method is given by
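The equation image for the ATW method was not preserved. In the à trous scheme, the HRPI is decomposed into a coarse approximation plus wavelet detail planes, and fusion injects those detail planes into each band; a hedged reconstruction consistent with that description is:

```latex
\mathrm{HRPI} \;=\; c_J + \sum_{j=1}^{J} w_j,
\qquad
\mathrm{HRMI}_k \;=\; \mathrm{LRMI}_k + \sum_{j=1}^{J} w_j
```

where c_J is the approximation at the coarsest scale J and the w_j are the wavelet (detail) planes of the HRPI.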

Multiresolution Analysis-Based Intensity Modulation

MRAIM was proposed by Wang. It follows the GIF method, with the major advantage that it can be used for the fusion case in which the resolution ratio is an arbitrary integer M, with a very simple scheme. The mathematical model is
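The slide's equation image was lost. Given that the comparison below groups MRAIM with BT and HPM as methods whose modulation coefficients are the LRMI/LRPI ratios, the model plausibly takes the intensity-modulation form:

```latex
\mathrm{HRMI}_k \;=\; \frac{\mathrm{HRPI}}{\mathrm{LRPI}} \,\cdot\, \mathrm{LRMI}_k
```

with the distinguishing feature that the LRPI is the approximation of the HRPI computed by multiresolution analysis for an arbitrary integer ratio M.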

Comparisons

1- The IHS, BT, and PCA methods use a linear combination of the LRMIs to compute the LRPIs, with different coefficients.
2- The HPF, HPM, ATW, and MRAIM methods compute the LRPIs by low-pass filtering the original HRPI with different filters.
3- The BT, HPM, and MRAIM methods use the modulation coefficients as the ratios between the LRMIs and the LRPI, whereas the IHS, HPF, ATW, and PCA methods simplify the modulation coefficients to constant values for all pixels of each band.

It is obvious that the IHS and PCA methods belong to class 1, the BT method belongs to class 2, the HPF and ATW methods belong to class 3, and the HPM and MRAIM methods belong to class 4. The performance of each image fusion method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined.

EXPERIMENTS AND RESULTS

1- The IHS and BT methods can only use 3 bands.
2- In order to evaluate the NIR band as well, we selected the red-green-blue combination for true natural color and the NIR-red-green combination for false color.
3- For comparison, the NIR band can be used with the other components of red, green, and blue.

Results

Original HRPI (panchromatic band)

Original LRMIs (RGB), resampled at 1-m pixel size.

Result of the IHS method

Result of the BT method

Result of the PCA method

Result of the HPF method

Result of the HPM method

Result of the ATW method

Result of the MRAIM method

4- MRAIM looks better than the other methods.
5- MRAIM looks better than the HPM method in spatial quality.
6- The correlation coefficient (CC) is the most popular similarity metric in image fusion. However, CC is insensitive to a constant gain and bias between two images and does not allow subtle discrimination of possible fusion artifacts.

Recently, a universal image quality index (UIQI) has been used to measure the similarity between two images. In this experiment, we used the UIQI to measure similarity. The UIQI is designed by modeling any image distortion as a combination of three factors: loss of correlation, radiometric distortion, and contrast distortion. It is defined as follows:
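The UIQI definition image was not preserved; the standard Wang-Bovik index is Q = 4·σxy·x̄·ȳ / ((σx² + σy²)·(x̄² + ȳ²)), the product of a correlation term, a luminance term, and a contrast term. A minimal numpy implementation (assuming non-constant images with nonzero means):

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index (Wang & Bovik): product of correlation,
    luminance, and contrast terms; equals 1.0 for identical images."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()    # covariance
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))

a = np.arange(1.0, 17.0).reshape(4, 4)
q_same = uiqi(a, a)        # identical images
q_bias = uiqi(a, a + 1.0)  # constant bias lowers Q, unlike plain CC
```

Note that a constant bias reduces Q below 1 even though the correlation coefficient would still be exactly 1, which is precisely why the authors prefer UIQI over CC here.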

UIQI MEASUREMENT OF SIMILARITY BETWEEN THE DEGRADED FUSED IMAGE AND THE ORIGINAL IMAGE AT THE 4-m RESOLUTION LEVEL

UIQIS FOR THE RESULTANT IMAGES AND THE ORIGINAL LRMIS AT 4 m (FUSION AT THE INFERIOR LEVEL)

This may be because all the methods provide good results in the NIR band, so the difference is very small, while the spatial degradation process influences the final result differently for different fusion methods.

Subscenes of the original LRMIs and the fused resulting HRMIs by different methods (double zoom). (Left to right sequence, row by row) Original

LRMIs, IHS, BT, PCA, HPF, HPM, ATW, and MRAIM.

Conclusion

The performance of each method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined. If the LRPI is approximated from the LRMIs, it usually has a weak correlation with the HRPI, leading to color distortion in the fused image. If the LRPI is a low-pass filtered HRPI, it usually shows less spectral distortion.

By combining the visual inspection results and the quantitative results, it is possible to see that the experimental results are in conformity with the theoretical analysis, and that the MRAIM method produces the synthesized images closest to those the corresponding multispectral sensors would observe at the high-resolution level.

THANK YOU
