

Copyright © 2013 IJECCE, All rights reserved.

International Journal of Electronics Communication and Computer Engineering, Volume 4, Issue 1, ISSN (Online): 2249–071X, ISSN (Print): 2278–4209

Relative Performance Evaluation of Single Chip CFA Color Reconstruction Algorithms Used in Embedded Vision Devices

B. Mahesh, Asst. Prof., Sagar Institute of Technology, Chevella, India
K. Venkatesh, PG Student, Prakasham Engineering College
S. Ravi Kumar, PG Student, REC, Warangal
K. Koteswara Rao, Asso. Prof., PEC
B. Prabhakar Rao, Professor and HOD, Dept. of ECE, Sagar Institute of Technology, Chevella, RR, Hyd, AP, India
C. Raja Rao, Dy. Director of Eastern Region (Under MHRD, Dept. of Higher Education, Govt. of India), Kolkata

Abstract – Most digital cameras use a color filter array (CFA) to capture the colors of a scene. Single-sensor embedded vision devices acquire sub-sampled (down-sampled) versions of the red, green, and blue components with the help of the CFA [1]. Interpolation of the missing color samples is therefore necessary to reconstruct a full-color image; this interpolation is called demosaicing (demosaicking). The least-squares luma–chroma demultiplexing algorithm for Bayer demosaicking [2] is the most effective and efficient demosaicking technique available in the literature. As almost all commercial camera makers use this cost-effective way of interpolating the missing colors and reconstructing the original image, demosaicking has become a vital research domain for embedded color vision devices [3]. Hence, in this paper, the authors aim to analyze, implement, and evaluate the relative performance of the best-known algorithms. Objective empirical values show that the LSLCD algorithm is superior in performance.

Keywords – Luminance (Green) Channel, Chrominance (Red and Blue), Demosaicking, Bayer Pattern, Least Square, MSE, PSNR.

I. INTRODUCTION

Digital color imaging and processing has become vital because a color image contains more information than a grey-scale image, and because of the widespread use of digital images on the internet and in publishing and visualization. Digital color imaging is used to extract features of interest in an image, and it simplifies object identification. Moreover, when image analysis is manual, a significant factor is that humans can discern thousands of color shades and intensities, compared to only about two dozen shades of gray. Digital imaging devices have thus gained importance over traditional film cameras. Images are formed in a camera in a manner similar to image formation in the eye; however, accommodation to image closer objects is handled differently in the eye and the camera. The Human Visual System is the best model and basis for all vision systems [2].

Embedded color vision devices differ from film cameras, and are preferred to them because they are better, faster, and cheaper.

II. REVIEW OF DEMOSAICKING ALGORITHMS

The performance of a demosaicking algorithm is of utmost importance to how well a digital camera can perform, and many demosaicking algorithms have been developed. What problems do these algorithms try to solve? How different are they in terms of implementation and performance? These are among the first questions any engineer who wishes to design a novel demosaicking algorithm, or to choose one to use, has to answer. Computational cost also matters. To answer part of these questions, and thus to arrive at an efficient, easy-to-implement and cost-effective choice, five demosaicking approaches are considered: 1. Bilinear Interpolation; 2. Edge Sensing Interpolation (I and II); 3. Color Interpolation Using Alternating Projections; 4. High Quality Linear Interpolation; 5. Least Squares. These five algorithms are studied qualitatively and implemented for quantitative results. MATLAB is used for implementation.

“Demosaicking” is the process of translating the Bayer array of primary colors into a final full-color image. The minimum number of cells is 2×2, but this reduces resolution. To correct this, a variety of image processing algorithms perform colour reconstruction by estimating each missing color from neighboring pixels [5].

A. Bilinear Interpolation

R11 G12 R13 G14 R15
G21 B22 G23 B24 G25
R31 G32 R33 G34 R35
G41 B42 G43 B44 G45
R51 G52 R53 G54 R55

Fig.1. Sample Bayer CFA

From Fig.1, at a blue (B) center we need to estimate the green (G) and red (R) components. Consider pixel B44, at which only B is measured; we need to determine G44. One estimate for G44 is

G44 = (G34 + G43 + G45 + G54) / 4 (1)

To determine R44, given R33, R35, R53 and R55, the estimate for R44 is

R44 = (R33 + R35 + R53 + R55) / 4 (2)


At a red center, we estimate blue and green accordingly; for example, estimating the blue and green samples at R33. B33 is determined from B22, B24, B42 and B44 by

B33 = (B22 + B24 + B42 + B44) / 4 (3)

Similarly, G33 is estimated from G23, G32, G34 and G43 by

G33 = (G23 + G32 + G34 + G43) / 4 (4)

Repeating the process at each photo-site (location on the CCD), we obtain three color planes, which gives one possible demosaicked form of the scene.
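The averaging rules of Eqs. (1)–(4) can be sketched outside MATLAB as well. The following NumPy helper is an illustrative assumption, not the authors' implementation: it fills one missing channel everywhere by averaging whichever of the four cross-neighbours carry that channel.

```python
import numpy as np

def cross_average(cfa, mask):
    """Estimate a channel at every pixel by averaging the available
    up/down/left/right samples of that channel, as in Eqs. (1)-(4)."""
    chan = np.where(mask, cfa, 0.0)
    cpad = np.pad(chan, 1)
    mpad = np.pad(mask.astype(float), 1)
    # Sum and count of the four cross-neighbours at every pixel.
    nsum = cpad[:-2, 1:-1] + cpad[2:, 1:-1] + cpad[1:-1, :-2] + cpad[1:-1, 2:]
    ncnt = mpad[:-2, 1:-1] + mpad[2:, 1:-1] + mpad[1:-1, :-2] + mpad[1:-1, 2:]
    return np.where(mask, chan, nsum / np.maximum(ncnt, 1.0))

# Green sites form a checkerboard in the Bayer pattern of Fig.1.
mask_g = (np.indices((4, 4)).sum(axis=0) % 2) == 1
# Flat test scene: every green sample is 10, so every estimate should be 10.
bayer = np.where(mask_g, 10.0, 0.0)
g_full = cross_average(bayer, mask_g)
```

The same helper, called with the red or blue sampling mask, reproduces Eqs. (2)–(3); border pixels simply average over the neighbours that exist.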

This type of interpolation is a low-pass filtering process. The band-limiting nature of this interpolator smoothens edges, which show up in color images as fringes (referred to as the zipper effect) [6].

B. Edge Sensing Interpolation
1. Edge Sensing Interpolation Algorithm I

It is observed that in the earlier algorithms most of the color interpolation is done by averaging neighboring pixels indiscriminately. This causes an artifact, the "zipper effect," in the interpolated image. To combat this artifact, it is natural to derive an algorithm that can detect local spatial features present in the pixel neighborhood and then make an effective choice of which predictor to use in that neighborhood. The result is a reduction or elimination of zipper-type artifacts. Algorithms that involve this kind of "intelligent" detection and decision process are referred to as adaptive color interpolation algorithms [7].

Interpolation of green pixels: first, define two gradients, one in the horizontal direction and the other in the vertical direction, for each blue/red position. For instance, consider the blue pixel at which G23 is missing, and define the two gradients

ΔH = | G22 − G24 | (5)
ΔV = | G13 − G33 | (6)

where | · | denotes absolute value. Define some threshold value T. The algorithm can then be described as follows:

If ΔH < T and ΔV > T, (7)
G23 = (G22 + G24) / 2 (8)
Else if ΔH > T and ΔV < T, (9)
G23 = (G13 + G33) / 2 (10)
Else
G23 = (G13 + G22 + G24 + G33) / 4 (11)
End

The choice of T depends on the images and can have different optimum values in different neighborhoods. A particular choice of T is

T = (ΔH + ΔV) / 2 (12)

In this case, the algorithm becomes:

If ΔH < ΔV,
G23 = (G22 + G24) / 2 (13)
Else if ΔH > ΔV,
G23 = (G13 + G33) / 2 (14)
Else
G23 = (G13 + G22 + G24 + G33) / 4 (15)
End
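The decision rule of Eqs. (12)–(15) can be written out directly. This small Python sketch (an illustrative assumption, using the neighbour naming of the text) interpolates along the direction with the smaller gradient:

```python
def edge_sensing_g23(g13, g22, g24, g33):
    """Adaptive estimate of the missing green G23, Eqs. (5)-(6) and (13)-(15)."""
    dH = abs(g22 - g24)                  # horizontal gradient, Eq. (5)
    dV = abs(g13 - g33)                  # vertical gradient, Eq. (6)
    if dH < dV:                          # little horizontal variation
        return (g22 + g24) / 2           # Eq. (13): average horizontally
    if dH > dV:                          # little vertical variation
        return (g13 + g33) / 2           # Eq. (14): average vertically
    return (g13 + g22 + g24 + g33) / 4   # Eq. (15): no dominant direction
```

Along a horizontal edge (say g13 = 0, g33 = 100, but g22 = g24 = 5) the rule returns 5, averaging along the edge instead of smearing across it as bilinear interpolation would.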

2. Edge Sensing Interpolation Algorithm II:
A slightly different edge sensing interpolation algorithm is described as follows. Interpolation of green pixels: same as in edge sensing interpolation algorithm I, except that the horizontal and vertical gradients are defined differently. From Fig.1, to estimate G44 at B44, we define the gradients as

ΔH = | (B42 + B46) / 2 − B44 | (16)
ΔV = | (B24 + B64) / 2 − B44 | (17)
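Unlike Eqs. (5)–(6), the gradients (16)–(17) measure how far the centre blue sample deviates from the mean of its same-colour neighbours. A minimal sketch (hypothetical helper name; scalar arguments named after the samples in the text):

```python
def gradients_at_b44(b42, b44, b46, b24, b64):
    """Second-order gradients of Eqs. (16)-(17) around the blue sample B44."""
    dH = abs((b42 + b46) / 2 - b44)   # horizontal gradient, Eq. (16)
    dV = abs((b24 + b64) / 2 - b44)   # vertical gradient, Eq. (17)
    return dH, dV
```

On a horizontal edge the vertical neighbours straddle the edge, so dV grows while dH stays small, and the decision rule of algorithm I then steers the interpolation along the edge.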

The actual algorithm then follows as in edge sensing interpolation algorithm I. Interpolation of red/blue pixels: similar to the bilinear interpolation algorithm, except that the colour difference is interpolated instead of the color itself.

C. Colour Interpolation Using Alternating Projections:

In digital cameras that use the Bayer pattern filter array, the green channel is sampled at a higher frequency than the red and blue channels. Therefore, details in the green channel are better preserved than in the red and blue channels, since the green channel is less likely to be aliased. Interpolation of the red and blue channels thus becomes the limiting factor in performance. In particular, colour artifacts caused by aliasing in the red and blue channels are very severe in high-frequency regions such as edges. The objective of this algorithm is to reduce the amount of red and blue channel aliasing by using an alternating-projection scheme that exploits inter-channel correlation effectively; the details of the algorithm are explained in [8].

D. High Quality Linear Interpolation Algorithm:

Classical bilinear interpolation methods use only the color information in the channel to be interpolated; for example, when a green pixel is to be estimated, classical methods usually use only information in the green channel. This high-quality linear interpolation method instead combines bilinear interpolation with a gradient-correction gain, and produces a better estimate of the missing color information [9]. Specifically, to interpolate G values at an R location, use the formula

g(i,j) = gB(i,j) + α ΔR(i,j) (18)

where gB is the bilinear interpolation and ΔR is the gradient of R computed by

ΔR(i,j) = r(i,j) − (1/4) Σ_{(m,n) ∈ {(0,−2), (0,2), (−2,0), (2,0)}} r(i+m, j+n) (19)
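Equations (18)–(19) amount to a bilinear estimate plus a gain-weighted Laplacian of the red channel. A NumPy sketch (a hypothetical helper, not the authors' code; α = 1/2 is the Wiener-derived gain quoted later in this section):

```python
import numpy as np

def g_at_r(cfa, i, j, alpha=0.5):
    """Gradient-corrected green estimate at a red site (i, j), Eqs. (18)-(19)."""
    # Bilinear green estimate from the four immediate cross-neighbours.
    g_bilin = (cfa[i-1, j] + cfa[i+1, j] + cfa[i, j-1] + cfa[i, j+1]) / 4
    # Red gradient: centre red minus the mean of the four reds two pixels away.
    delta_r = cfa[i, j] - (cfa[i-2, j] + cfa[i+2, j] +
                           cfa[i, j-2] + cfa[i, j+2]) / 4
    return g_bilin + alpha * delta_r
```

The correction term exploits the correlation between the R and G channels: where the red channel is locally brighter than its surround, the green estimate is nudged upward by half that difference.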

For interpolating G at blue pixels, the same formula is used, but corrected by ΔB(i,j). For interpolating R at green pixels, use the formula


r(i,j) = rB(i,j) + β ΔG(i,j) (20)

with ΔG(i,j) determined over a 9-point region. For interpolating R at blue pixels, use the formula

r(i,j) = rB(i,j) + γ ΔB(i,j) (21)

with ΔB(i,j) computed over a 5-point region. The formulas for interpolating B are similar, by symmetry. To determine appropriate values for the gain parameters {α, β, γ}, we used a Wiener approach; that is, we computed the values that lead to minimum mean-square-error interpolation, given second-order statistics computed from a good data set. We then approximated the optimal Wiener coefficients by integer multiples of small powers of 1/2, with the final result α = 1/2, β = 5/8, and γ = 3/4. From the values of {α, β, γ} we can compute the equivalent linear FIR filter coefficients for each interpolation case. The resulting coefficient values make the filters quite close (within 5% in terms of mean-square error) to the optimal Wiener filters for a 5×5 region of support. This sub-sampling approach is not really representative of digital cameras, which usually employ careful lens design to perform a small amount of low-pass filtering and so reduce the aliasing due to Bayer-pattern sub-sampling. However, since all papers in the references perform plain sub-sampling with no low-pass filtering, we did the same so we could compare results. We have also tested all interpolation methods with small amounts of Gaussian low-pass filtering before Bayer sub-sampling, and found that the relative performances of the methods are roughly the same with or without filtering. The method gives a clear improvement in peak signal-to-noise ratio (PSNR) over bilinear interpolation.

E. Least-Squares Luma–Chroma Demultiplexing Algorithm for Bayer Demosaicking:

The algorithm for Bayer demosaicking by adaptive luma–chroma demultiplexing used in this paper is precisely the one described in [2]; it is summarized here for completeness. We assume that an underlying color image with RGB components fR, fG and fB is sampled on the rectangular integer lattice Λ = Z², with the upper-left point of the image at coordinate (0,0). The unit of length used in this paper is the vertical spacing between sample elements in the CFA signal, denoted 1 px. The standard spatial multiplexing model of the Bayer CFA signal is

fCFA[n1,n2] = fR[n1,n2] mR[n1,n2] + fG[n1,n2] mG[n1,n2] + fB[n1,n2] mB[n1,n2] (22)

= (1/4) fR[n1,n2] (1 − (−1)^n1)(1 + (−1)^n2) + (1/2) fG[n1,n2] (1 + (−1)^(n1+n2)) + (1/4) fB[n1,n2] (1 + (−1)^n1)(1 − (−1)^n2) (23)

Equation (23) offers a different interpretation of the spatial representation of the Bayer CFA signal. Specifically, the CFA is treated as the multiplexing of one baseband signal and two modulated difference signals. The baseband signal fL identifies an achromatic luma component, and the two modulated signals fC1 and fC2 identify two separate chromatic color-difference components, referred to here as chroma components. Substituting −1 = e^(jπ) in Eq. (23), one obtains

fCFA[n1,n2] = fL[n1,n2] + fC1[n1,n2] e^(jπ(n1+n2)) + fC2[n1,n2] (e^(jπn1) − e^(jπn2)) ≜ fL[n1,n2] + fC1m[n1,n2] + fC2ma[n1,n2] − fC2mb[n1,n2] (24)

with Fourier transform

FCFA(u,v) = FL(u,v) + FC1(u − 0.5, v − 0.5) + FC2(u − 0.5, v) − FC2(u, v − 0.5) (25)

where frequencies are expressed in c/px.

Least-Squares Filter Design:

We have seen that the estimate for component X ∈ {C1m, C2ma, C2mb} is obtained by the spatial filtering operation [10] f̂X = fCFA * hx, where x ∈ {1, 2a, 2b} respectively. Suppose we have a model for the original signal fX such that the difference between fX and f̂X can be expressed as a stationary random field. Then a suitable design criterion is to minimize the expected squared error, resulting in the filter

h* = arg min_h E[ ( fX[n1,n2] − (fCFA * h)[n1,n2] )² ]

which is independent of (n1,n2) due to stationarity. Because good models for fX do not yet exist, we can instead compile a training set from typical color images and compute filters that minimize the squared error over the training set; these filters are the solution to standard least-squares problems [11]. Assume that we have chosen a training set of original RGB color images. Thus, we also have access to the signals fC1m, fC2ma and fC2mb, which are respectively the original baseband signals fC1 and fC2 modulated to the appropriate centering frequencies. Let us first consider C1 and the filter h1. Recall that the estimate f̂(i)C1m for the ith training image is obtained by f̂(i)C1m = f(i)CFA * h1. If h1 has region of support B that is a subset of the ith image sampling raster Λi, then

f̂(i)C1m[n1,n2] = Σ_{[m1,m2] ∈ B} h1[m1,m2] f(i)CFA[n1 − m1, n2 − m2] (26)

Fig.2. Block diagram of the adaptive luma–chroma demultiplexing algorithm for the Bayer CFA structure.

We define the total squared error (TSE) on C1 over every pixel in a training set of K images by

TSE_C1 = Σ_{i=1..K} Σ_{[n1,n2] ∈ Λi} ( f(i)C1m[n1,n2] − f̂(i)C1m[n1,n2] )² (27)

The least-squares filter h1* that minimizes the estimation error on C1 is the solution to the least-squares problem h1* = arg min_{h1} TSE_C1.

We can reformulate the least-squares problem using matrices. Let NB = |B| be the number of h1 filter coefficients and let NW = |Λi| be the number of pixels in the ith training image. Assume for now that NW is the same for every training image. We may reshape f(i)C1m into an NW×1 column vector f(i)C1m by scanning f(i)C1m column-by-column over Λi. Now, reshape h1 into an NB×1 column vector h1 by scanning h1 column-by-column over B. Finally, construct an NW×NB matrix A(i) by scanning f(i)CFA in alignment with h1 such that each entry of the matrix product A(i)h1 realizes Eq. (26). The result of A(i)h1 is the NW×1 column vector f̂(i)C1m aligned pixel-wise with f(i)C1m. These matrices reformulate the problem into

h1* = arg min_{h1} Σ_{i=1..K} ‖ f̂(i)C1m − f(i)C1m ‖² (28)

= arg min_{h1} Σ_{i=1..K} ‖ A(i) h1 − f(i)C1m ‖² (29)

which is a standard least-squares problem with solution

h1* = ( Σ_{i=1..K} A(i)ᵀ A(i) )⁻¹ ( Σ_{i=1..K} A(i)ᵀ f(i)C1m ) (30)
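Equation (30) is the familiar normal-equations solution, accumulated over the K training images. A self-contained NumPy sketch with synthetic data (hypothetical sizes; a known 3-tap filter plays the role of h1 so that the recovery can be checked):

```python
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.25, 0.5, 0.25])   # hypothetical target filter
NB = h_true.size

AtA = np.zeros((NB, NB))               # accumulates sum of A(i)^T A(i)
Atf = np.zeros(NB)                     # accumulates sum of A(i)^T f(i)
for _ in range(4):                     # K = 4 hypothetical training images
    A = rng.random((50, NB))           # rows: shifted CFA samples, Eq. (26)
    f = A @ h_true                     # desired filter output per pixel
    AtA += A.T @ A                     # normal equations of Eq. (30)
    Atf += A.T @ f

h_star = np.linalg.solve(AtA, Atf)     # recovered least-squares filter
```

In the actual algorithm each row of A(i) holds the CFA samples covered by the filter support B at one pixel, so the same accumulation yields h1* directly from training images.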

Finally, we reshape h1* back onto the support B to get the least-squares filter h1*. The same framework is used on C2 to obtain the least-squares filters h2a and h2b, defined over supports D′ and D0 (where D0 is the transpose of D′). Here, we have

TSE_C2 = Σ_{i=1..K} Σ_{[n1,n2] ∈ Λi} ( f(i)C2m[n1,n2] − f̂(i)C2m[n1,n2] )² (31)

The set of weighting coefficients wi is obtained in the

same manner described previously. The sets wi and (1 − wi) are modulated to match the centering frequencies of fC2ma and fC2mb respectively. As before, we cast the least-squares problem into matrix form. Furthermore, we can find the least-squares filters h2a* and h2b* simultaneously by temporarily merging the two filter kernels. Once again, let NW = |Λi| be the number of pixels in the ith training image and assume NW is the same for all images. Let ND = |D| be the number of h2a (or h2b) filter coefficients. First, reshape f(i)C2m into an NW×1 column vector f(i)C2m by scanning f(i)C2m column-by-column over Λi. Next, reshape h2a and h2b into two ND×1 column vectors h2a and h2b respectively by scanning column-by-column, and then stack h2a over h2b to form the 2ND×1 column vector

h2 = [ h2a ; h2b ] (32)

Finally, construct an NW×2ND matrix B(i) by scanning the product values of f(i)CFA and the modulated weighting coefficients in alignment with h2, such that each entry of the matrix product B(i)h2 realizes the estimate for C2. The matrix product B(i)h2 is the NW×1 column vector f̂(i)C2m aligned pixel-wise with f(i)C2m. With these matrices we can express the standard least-squares problem on C2 as

h2* = arg min_{h2} Σ_{i=1..K} ‖ f̂(i)C2m − f(i)C2m ‖² = arg min_{h2} Σ_{i=1..K} ‖ B(i) h2 − f(i)C2m ‖² (33)

with solution

h2* = ( Σ_{i=1..K} B(i)ᵀ B(i) )⁻¹ ( Σ_{i=1..K} B(i)ᵀ f(i)C2m ) (34)

Now extract h2a* from the first ND entries of h2* and h2b* from the remaining entries, and then reshape h2a* and h2b* separately back onto the supports D′ and D0 to get the least-squares filters h2a* and h2b*. We have relied on the assumption that each training image has the same number of pixels. Although this may not be true in general, we can enforce it by dividing each training image into sub-images of the same dimensions. Then the sub-image size NW is constant for each piece, and we train over all sub-images instead. Here the sub-image window has dimensions 96 pixels × 96 pixels, giving NW = 9216 pixels [12]. The choice of NW has a negligible effect on the demosaicking results.

III. RESULTS AND DISCUSSIONS

In this section, the five algorithms are applied to the five test images shown in Fig.3. The results are computed in MATLAB using both subjective inspection and the objective measures MSE and PSNR; the following figures and tables give the results. Table 1 presents the MSE values for all five images, and Table 2 presents the PSNR values. Figs. 4 and 5 present the corresponding MSE and PSNR averages for image 1; similar figures can be produced for images 2, 3, 4 and 5.
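The two objective measures are related by PSNR = 10·log10(255²/MSE) for 8-bit images; a minimal sketch of how they can be computed for a reference/demosaicked pair:

```python
import numpy as np

def mse(ref, est):
    """Mean square error between a reference image and its demosaicked estimate."""
    return np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher PSNR and lower MSE are better."""
    m = mse(ref, est)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For a full-color result, the per-channel values can be averaged to obtain figures comparable to the RGB averages reported in Tables 3 and 4.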

Fig.3 (1–5): Original data set.
Fig.3a (A–E): Bilinear Interpolation.
Fig.3b (F–J): Edge Sensing Algorithm.
Fig.3c (K–O): High Quality Linear Interpolation Algorithm.
Fig.3d (P–T): Colour Interpolation Using Alternating Projections.
Fig.3e (U–Y): Least Square Algorithm.


RESULT TABLES

Table 1: Mean Square Error for the Set of 5 Images

Table 2: Peak Signal to Noise Ratio for the Set of 5 Images

BILIN: Bilinear Interpolation; ES: Edge Sensing Algorithm; HQLIP: High Quality Linear Interpolation; POCS: Projection on Convex Sets; LS: Least Square Algorithm

Table 3: MSE Average of RGB

Algorithm | MSE Average
BILIN | 30.81493
ES | 13.08058
HQLIP | 12.32548
POCS | 7.678256
LS | 7.537509

Table 4: PSNR Average of RGB

Algorithm | PSNR Average
BILIN | 16.88495
ES | 34.75796
HQLIP | 35.55286
POCS | 40.08376
LS | 40.66864

Fig.4. Graph of MSE Average of RGB

Fig.5. Graph of PSNR Average

IV. CONCLUSION

In this study, we described five color interpolation algorithms used in single-sensor embedded color vision devices. We implemented all of them in MATLAB and performed a comparative performance analysis. The objective and subjective interpretations are based on the MSE (Mean Square Error) and PSNR (Peak Signal to Noise Ratio) measures tabulated in Tables 1 and 2; an efficient algorithm should have a higher PSNR and a lower MSE. By these measures, computed for the well-known demosaicking algorithms above, the least-squares algorithm for colour filter array interpolation performs best, with the lowest mean square error and the highest peak signal-to-noise ratio in comparison to the other algorithms. The best advice for embedded color vision devices is therefore to stick with the Least-Squares Luma–Chroma Demultiplexing Algorithm for Bayer Demosaicking.

REFERENCES

[1] B. Prabhakar Rao, C. Raja Rao, and S. S. Kumar, "Interpolation of missing color samples for single chip color filter array digital camera."

[2] D. Alleysson, S. Süsstrunk, and J. Hérault, "Linear demosaicing inspired by the human visual system," IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 439–449.

[3] V. Roop Kumar, C. Raja Rao, K. Veeraswami, and B. Prabhakar Rao, "A color filter array interpolation algorithm based on color gradients," RTCTV 2010, VCE, Hyderabad, A.P.

[4] D. Cok, "Signal processing method and apparatus for sampled image signals," U.S. Patent 4,630,307, 1986.

[5] L. Zhang and X. Wu, "Color demosaicking via directional linear minimum mean square-error estimation," IEEE Transactions on Image Processing, vol. 14, pp. 2167–2178, Dec. 2005.

[6] N.-X. Lian, L. Chang, Y.-P. Tan, and V. Zagorodnov, "Adaptive filtering for color filter array demosaicking," IEEE Transactions on Image Processing, vol. 16, no. 10, Oct. 2007, p. 2515.

[7] W. Lu and Y.-P. Tan, "Color filter array demosaicing: new method and performance measures," IEEE Transactions on Image Processing, vol. 12, no. 10, pp. 1194–1210, Oct. 2003.



[8] D. Menon, S. Andriani, and G. Calvagno, "Demosaicing with directional filtering and a posteriori decision," IEEE Transactions on Image Processing, vol. 16, no. 1, Jan. 2007.

[9] B. K. Gunturk, Y. Altunbasak, and R. M. Mersereau, "Color plane interpolation using alternating projections," IEEE Transactions on Image Processing, vol. 11, no. 9, pp. 997–1013, Sep. 2002.

[10] X. Li, "Demosaicing by successive approximation," IEEE Transactions on Image Processing, vol. 14, no. 3, March 2005.

[11] H. S. Malvar, L.-W. He, and R. Cutler, "High-quality linear interpolation for demosaicing of Bayer-patterned color images," IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, Canada, May 2004.

[12] C. A. Laroche and M. A. Prescott, "Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients," U.S. Patent 5,373,322, Eastman Kodak Company, 1994.

[13] X. Wu and N. Zhang, "Primary-consistent soft-decision color demosaicking for digital cameras (patent pending)," IEEE Transactions on Image Processing, vol. 13, no. 9, pp. 1263–1274, Sept. 2004.

AUTHORS’ PROFILES

Mahesh B. received B.Tech. (ECE) and M.Tech. (CSE) degrees from JNTU Hyderabad, India. At present he is an Assistant Professor (CSE) at Sagar Group of Institutions, Hyderabad, India. His research in image processing covers demosaicking and a simplified image compression algorithm for the Bayer filter array.

Venkatesh K. received the B.Tech. (ECE) from Nimra College of Engineering & Technology, Vijayawada. He is currently pursuing an M.Tech in VLSI & Embedded Systems at Prakasam College of Engineering, Kandukur, JNTUK.

Ravi Kumar S. received the B.Tech. (CSE) and is currently pursuing an M.Tech from JNT University, Hyderabad. His research interests include image and video processing, pattern recognition, computer vision, and color imaging.

Koteswara Rao K. received the B.E. degree from Andhra University College of Engineering in 2002. He worked as an Assistant Professor at Prakasam College of Engineering from 2002 to 2008, and received an M.Tech from Jawaharlal Nehru Technological University Hyderabad in 2008–2011. He has been an Associate Professor at Prakasam College of Engineering, Kandukur, since 2011.

Prabhakar Rao B. obtained a B.Tech. (ECE) from Nagarjuna University, an M.Tech. (I&CS) from University College of Engineering, JNTUK, and an M.Tech (Embedded Systems) from JNTUH. At present he is a Professor and HOD, Dept. of ECE, Sagar Institute of Technology, Chevella, Hyd, A.P., India. He has so far served in various capacities as Asst. Prof., Asso. Prof. and Vice Principal in various engineering institutions. His areas of interest include machine vision, bionic eye, VLSI implementation of IP algorithms, and cost-effective image reconstruction.

Shri C. Raja Rao obtained his B.E. in ECE from Andhra University in 1995 and an M.Tech in DSCE from JNTU Hyderabad in 2001. At present he is working as Deputy Director of Training in the Board of Practical Training (Eastern Region), Kolkata (an autonomous body under the Ministry of HRD, Dept. of Higher Education, Govt. of India). He started his career as an Academic Assistant in ECE, JNTUH, in 1998 and continued teaching in various engineering colleges as Asst. Professor, Assoc. Professor, and Professor of ECE up to December 2009. His current research interests are digital image processing, digital filter design, device modeling, and analog VLSI.