


Residual interpolation for division of focal plane polarization image sensors

ASHFAQ AHMED,1 XIAOJIN ZHAO,2,* VIKTOR GRUEV,3 JUNCHAO ZHANG,4 AND AMINE BERMAK1,5

1Department of Bioengineering, Hong Kong University of Science and Technology, Hong Kong, China
2College of Electronics Science and Technology, Shenzhen University, Shenzhen 518060, China
3Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, USA
4Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
5CSE College, Hamad Bin Khalifa University, Qatar
*[email protected]

Abstract: Division of focal plane (DoFP) polarization image sensors capture the polarization properties of light at every imaging frame. However, these sensors capture only partial polarization information per pixel, resulting in reduced spatial resolution and a varying instantaneous field of view (IFoV). Interpolation methods are used to mitigate these drawbacks and recover the missing polarization information. In this paper, we propose residual interpolation as an alternative to conventional interpolation for division of focal plane polarization image sensors, where the residual is the difference between an observed and a tentatively estimated pixel value. Our results validate that the proposed residual-interpolation algorithm delivers state-of-the-art performance compared with several previously published interpolation methods, namely bilinear, bicubic, spline and gradient-based interpolation. Visual image evaluation as well as mean square error analysis is applied to test images. For an outdoor polarized image of a car, residual interpolation yields a lower mean square error and better visual evaluation results. © 2017 Optical Society of America

OCIS codes: (260.5430) Polarization; (230.5440) Polarization-selective devices; (110.5405) Polarimetric imaging.

References and links

1. N. M. Garcia, I. de Erausquin, C. Edmiston, and V. Gruev, “Surface normal reconstruction using circularly polarized light,” Opt. Express 23(11), 14391–14406 (2015).

2. D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Surface normal estimation of black specular objects from multiview polarization images,” Opt. Eng. 56(4), 041303 (2016).

3. H. Zhan and D. G. Voelz, “Modified polarimetric bidirectional reflectance distribution function with diffuse scattering: surface parameter estimation,” Opt. Eng. 55(12), 123103 (2016).

4. V. Thilak, D. G. Voelz, and C. D. Creusere, “Polarization-based index of refraction and reflection angle estimation for remote sensing applications,” Appl. Opt. 46(30), 7527–7536 (2007).

5. B. Shen, P. Wang, R. Polson, and R. Menon, “Ultra-high-efficiency metamaterial polarizer,” Optica 1(5), 356–360 (2014).

6. P. Terrier, V. Devlaminck, and J. M. Charbois, “Segmentation of rough surfaces using a polarization imaging system,” J. Opt. Soc. Am. A 25(2), 423–430 (2008).

7. O. Morel, R. Seulin, and D. Fofi, “Handy method to calibrate division-of-amplitude polarimeters for the first three Stokes parameters,” Opt. Express 24(12), 13634–13646 (2016).

8. M. W. Hyde 4th, J. D. Schmidt, M. J. Havrilla, and S. C. Cain, “Enhanced material classification using turbulence-degraded polarimetric imagery,” Opt. Lett. 35(21), 3601–3603 (2010).

9. S. Alali and A. Vitkin, “Polarized light imaging in biomedicine: emerging Mueller matrix methodologies for bulk tissue assessment,” J. Biomed. Opt. 20(6), 061104 (2015).

10. T. York, S. B. Powell, S. Gao, L. Kahan, T. Charanya, D. Saha, N. W. Roberts, T. W. Cronin, J. Marshall, S. Achilefu, S. P. Lake, B. Raman, and V. Gruev, “Bioinspired polarization imaging sensors: from circuits and optics to signal processing algorithms and biomedical applications: analysis at the focal plane emulates nature’s method in sensors to image and diagnose with polarized light,” Proc. IEEE 102(10), 1450–1469 (2014).

11. N. W. Roberts, M. J. How, M. L. Porter, S. E. Temple, R. L. Caldwell, S. B. Powell, V. Gruev, N. J. Marshall, and T. W. Cronin, “Animal polarization imaging and implications for optical processing,” Proc. IEEE 102(10), 1427–1434 (2014).

#286440 https://doi.org/10.1364/OE.25.010651 Journal © 2017 Received 14 Feb 2017; revised 11 Apr 2017; accepted 18 Apr 2017; published 28 Apr 2017

Vol. 25, No. 9 | 1 May 2017 | OPTICS EXPRESS 10651

Page 2: Residual interpolation for division of focal plane

12. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. 45(22), 5453–5469 (2006).

13. X. Zhao, A. Bermak, F. Boussaid, and V. G. Chigrinov, “Liquid-crystal micropolarimeter array for full Stokes polarization imaging in visible spectrum,” Opt. Express 18(17), 17776–17787 (2010).

14. V. Gruev, “Fabrication of a dual-layer aluminum nanowires polarization filter array,” Opt. Express 19(24), 24361–24369 (2011).

15. V. Gruev and R. E. Cummings, “Implementation of steerable spatiotemporal image filters on the focal plane,” IEEE Trans. Circuits Syst. 49(4), 233–244 (2002).

16. X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin “guest-host” micropolarizer arrays for visible imaging polarimetry,” Opt. Express 19(6), 5565–5573 (2011).

17. V. Gruev, J. Van der Spiegel, and N. Engheta, “Dual-tier thin film polymer polarization imaging sensor,” Opt. Express 18(18), 19292–19303 (2010).

18. M. Kulkarni and V. Gruev, “Integrated spectral-polarization imaging sensor with aluminum nanowire polarization filters,” Opt. Express 20(21), 22997–23012 (2012).

19. C. K. Harnett and H. G. Craighead, “Liquid-crystal micropolarizer array for polarization-difference imaging,” Appl. Opt. 41(7), 1291–1296 (2002).

20. V. Gruev and R. E. Cummings, “A pipelined temporal difference imager,” IEEE J. Solid-State Circuits 39(3), 538–543 (2004).

21. Y. Liu, R. Njuguna, T. Matthews, W. J. Akers, G. P. Sudlow, S. Mondal, R. Tang, V. Gruev, and S. Achilefu, “Near-infrared fluorescence goggle system with complementary metal-oxide-semiconductor imaging sensor and see-through display,” J. Biomed. Opt. 18(10), 101303 (2013).

22. S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express 19(27), 26161–26173 (2011).

23. J. Zhang, H. Luo, B. Hui, and Z. Chang, “Image interpolation for division of focal plane polarimeters with intensity correlation,” Opt. Express 24(18), 20799–20807 (2016).

24. R. Perkins and V. Gruev, “Signal-to-noise analysis of Stokes parameters in division of focal plane polarimeters,” Opt. Express 18(25), 25815–25824 (2010).

25. E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express 22(12), 15277–15291 (2014).

26. S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express 21(1), 1137–1151 (2013).

27. B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express 17(11), 9112–9125 (2009).

28. P. Thévenaz, T. Blu, and M. Unser, “Image interpolation and Resampling” in Handbook of Medical Imaging (SPIE Press, 2000), pp. 393–420.

29. D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2010).

30. M. W. Kudenov, L. J. Pezzaniti, and G. R. Gerhart, “Microbolometer-infrared imaging Stokes polarimeter,” Opt. Eng. 48(6), 063201 (2009).

31. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Residual interpolation for color image demosaicking,” in 2013 IEEE International Conference on Image Processing, Melbourne (IEEE, 2013), pp. 2304–2308.

32. Y. Monno, D. Kiku, S. Kikuchi, M. Tanaka, and M. Okutomi, “Multispectral demosaicking with novel guide image generation and residual interpolation,” in IEEE International Conference on Image Processing (IEEE, 2014), pp. 645–649.

33. W. Ye and K. K. Ma, “Color image demosaicing using iterative residual interpolation,” IEEE Trans. Image Process. 24(12), 5879–5891 (2015).

34. D. Kiku, Y. Monno, M. Tanaka, and M. Okutomi, “Beyond color difference: Residual interpolation for color image demosaicking,” IEEE Trans. Image Process. 25(3), 1288–1300 (2016).

35. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).

36. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001).

1. Introduction

1.1 Background

The vital physical parameters of light are intensity (I), wavelength (λ), and polarization (the orientation of the electric field vector E). Historically, polarization has been ignored by imaging technology, as the human eye is insensitive to the polarization of light. Polarization provides information orthogonal to intensity and color: it captures the target's 3-D surface normals [1–4], material composition and roughness, and has motivated ultra-high-efficiency metamaterial polarizers [5–8]. In bioengineering research, polarization imaging is used to discriminate healthy from diseased tissue without the use of molecular markers [9–11].


Various techniques and instruments have been developed to record the polarization parameters of light [12]. With developments in nanofabrication technology, compact, inexpensive and high-resolution polarization sensors called division of focal plane (DoFP) polarization image sensors have been realized [13–18]. These developments in nanofabrication and nanomaterials allow pixelated nanowire filters to be fabricated on the top surface of the imaging sensor and help realize robust DoFP polarization imaging sensors. The imaging elements, i.e., photodetectors and micro-polarization filter arrays, are included on the same substrate as the DoFP image sensor. The main advantage of DoFP image sensors over division of time (DoT) sensors is their capability of capturing polarization information at each frame, avoiding incorrect polarization information for moving targets [19]. DoFP sensors integrate pixelated polarization filters with an array of imaging elements, organized in a super-pixel [20,21] configuration containing four distinct pixelated polarization filters with transmission axes oriented at 0°, 90°, 45° and 135°, respectively (see Fig. 1). The super-pixel holds all the information required to obtain a useful polarized image, recording the first three (S0, S1, S2) or all four (including S3) Stokes parameters at every frame [22].

The image obtained from a DoFP sensor has lower accuracy of polarization information, as each individual pixel within the super-pixel has a slightly different field of view. To reconstruct the polarization information, missing pixel information is estimated across the imaging array [23,24]. DoFP polarization sensors normally lose spatial resolution and capture erroneous polarization information [22,23,25,26]. Because of the four spatially distributed pixelated polarization filters, the instantaneous fields of view of neighboring pixels in a super-pixel configuration differ from each other [24,27–30]. Therefore, the first three Stokes parameters (S0, S1, S2), the angle of linear polarization (AoP) and the degree of linear polarization (DoLP) will contain errors and differ from the true polarization components. Such edge artifacts can easily be observed in AoP and DoLP images. These drawbacks need to be resolved to obtain the real-time advantage of DoFP image sensors.

The polarization imaging sensor shares many similarities with color imaging using a Bayer color filter array [31]. The 2 × 2 super-pixel of a color filter array consists of three wavelength channels: red, green and blue. The blue and red channels are each sampled at 25% of the pixel locations, while green is sampled at 50%. As the color filters are placed on the imaging element array, spatial resolution is reduced in the different color channels by this down-sampling. Since the sensor perceives only partial information for each channel, interpolation algorithms are used to recover the lost spatial resolution with minimal artifacts.

In color image demosaicking, the G image is first interpolated, and then a tentatively estimated R image (R̃) is generated. Residuals are formed between the observed and tentatively estimated values (R − R̃) at the R pixel locations. The interpolated residuals are then added to the tentative estimate R̃ to obtain the interpolated image [32–34]. The interpolation techniques used for a color filter array cannot be directly employed in the polarization domain due to the essential differences between the two modalities. We have borrowed the tentative estimation of pixels from the residual interpolation technique used for a color filter array. In DoFP imaging, we apply the residual interpolation method to the four polarized images separately before calculating the DoLP and AoP.

Interpolation methods are applied to recover some of the lost spatial resolution and improve the accuracy of the captured polarization information. The following methods have traditionally been used to interpolate polarization information: bilinear, bicubic, spline and gradient-based methods [22,23,25–28]. For each method, four polarization-filtered images are required to obtain the necessary polarization information, such as the Stokes parameters and the angle and degree of linear polarization. The bilinear, bicubic and spline methods are essentially low-pass filters, which smooth out the intensity information obtained by the four polarization-filtered images and create sawtooth artifacts at the edges. In the case of multiple objects imaged against a background, their continuity in low-resolution images fails


and false polarization signatures are generated. In the gradient-based interpolation technique, interleaved gradients are used, and these introduce nonconformities due to the varying instantaneous field of view (IFoV). These errors can, however, be clearly reduced if a proper interpolation technique is used. Therefore, a novel residual interpolation method, with edge preservation and interpolation of the residuals between tentatively estimated and observed pixel values, is developed here to provide higher accuracy.

Fig. 1. Division of focal plane polarization imaging sensor array with a 4-polarizer filter array (0°, 45°, 90°, 135°) on charge coupled device (CCD) imaging elements.

In this paper, we propose residual interpolation for division of focal plane imaging sensors, where the interpolation is executed in a “residual” domain. We interpolated the low-resolution polarized images, generated tentative estimates of the 0°, 45°, 90° and 135° images (qi0, qi45, qi90, qi135) and calculated their residuals, which are the differences between the observed and tentatively estimated pixel values (i.e., I0 − qi0, I45 − qi45, I90 − qi90, and I135 − qi135). We used a guided filter for edge preservation and to accurately generate the tentative estimates of the pixel values [35]. The advantage of the guided filter is that its computing time is independent of filter size. The performance of the residual interpolation method is compared with several previously published interpolation methods: the bilinear, bicubic, spline and gradient-based methods. Based on the results, it is clear that residual interpolation outperforms the others in terms of both mean square error and visual evaluation.

1.2 Linear polarization imaging calculations

A DoFP imaging sensor captures both the intensity and polarization information of a scene. The sensor samples the scene through 0°, 45°, 90° and 135° polarization filters and registers four sub-sampled images. The intensity and polarization are then computed from the images taken with the 0°, 45°, 90° and 135° linear polarization filters. Two polarization properties are of most interest: the DoLP and the AoP. The intensity, polarization differences, DoLP and AoP are computed via the following equations:

Intensity (S0) = 1/2 (I0 + I90 + I45 + I135).   (1)

S1 = I0 − I90.   (2)

S2 = I45 − I135.   (3)

DoLP = √(S1² + S2²) / S0.   (4)


AoP = 1/2 · tan⁻¹(S2 / S1).   (5)

Linear polarization filters have been used to find the Stokes parameters; however, the fourth Stokes parameter (S3) is not captured by the DoFP sensor shown in Fig. 1. The above equations show that a polarization imaging sensor has to sample the images with four linear polarization filters offset by 45° [29].
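The calculations in Eqs. (1)–(5) can be sketched in NumPy (a Python stand-in for the MATLAB processing used in the paper; the function name and the small ε guard against division by zero are our own additions, and arctan2 is used in place of a plain arctangent to keep the angle well defined when S1 = 0):

```python
import numpy as np

def stokes_from_dofp(i0, i45, i90, i135, eps=1e-12):
    """Compute S0, S1, S2, DoLP and AoP from the four
    polarization-filtered images, following Eqs. (1)-(5)."""
    s0 = 0.5 * (i0 + i90 + i45 + i135)          # Eq. (1): total intensity
    s1 = i0 - i90                               # Eq. (2): 0/90 difference
    s2 = i45 - i135                             # Eq. (3): 45/135 difference
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # Eq. (4): degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)              # Eq. (5): angle of linear polarization
    return s0, s1, s2, dolp, aop

# Example: fully 0-degree-polarized light of unit intensity
s0, s1, s2, dolp, aop = stokes_from_dofp(
    *map(np.asarray, (1.0, 0.5, 0.0, 0.5)))
# -> DoLP of 1 (fully polarized) at AoP of 0 radians
```

In practice these formulas are applied pixel-wise to whole image arrays, which the NumPy broadcasting above already supports.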

2. Residual interpolation

In this section, the bilinear interpolation method is first briefly reviewed, followed by an overview of the proposed residual interpolation method. We used bilinear interpolation due to its low computational complexity. The basic principle of bilinear interpolation is to estimate pixel values in two dimensions: the distance-weighted average of the four nearest pixel values is used to estimate a new pixel value.

Fig. 2. Bilinear interpolation.

Based on the four neighboring pixel points (see Fig. 2), f (i, j), f (i + 1, j), f (i, j + 1) and f (i + 1, j + 1), of the interpolating point f (x, y), the mathematical formula for bilinear interpolation can be written as follows:

f(x, y) = f(i, j)·(i + 1 − x)·(j + 1 − y) + f(i, j + 1)·(i + 1 − x)·(y − j)
        + f(i + 1, j + 1)·(x − i)·(y − j) + f(i + 1, j)·(x − i)·(j + 1 − y).   (6)

For a 4 × 4 block (see Fig. 3), the missing 45°, 135° and 0° values at pixel (2, 2), which carries a 90° sample, can be calculated by bilinear interpolation as follows [22]:

I45(2,2) = 1/2 (I45(1,2) + I45(3,2)).   (7)

I135(2,2) = 1/2 (I135(2,1) + I135(2,3)).   (8)

I0(2,2) = 1/4 (I0(1,1) + I0(1,3) + I0(3,1) + I0(3,3)).   (9)

Fig. 3. A 4 × 4 block in a DoFP image sensor.
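A minimal sketch of this bilinear scheme for one polarization channel, assuming the channel occupies a fixed position inside each 2 × 2 super-pixel (the function name, the reflect padding at the borders, and the 3 × 3 weight table are our own illustrative choices):

```python
import numpy as np

def bilinear_channel(mosaic, offset):
    """Bilinearly interpolate one polarization channel of a DoFP mosaic.

    `mosaic` is the raw sensor image and `offset` = (row, col) in {0,1}^2
    is the channel's position inside each 2x2 super-pixel. Samples at the
    channel's own sites are kept; horizontally/vertically adjacent sites
    average the two nearest samples and diagonal sites average the four
    diagonal samples, as in Eqs. (7)-(9).
    """
    h, w = mosaic.shape
    sparse = np.zeros((h, w))
    sparse[offset[0]::2, offset[1]::2] = mosaic[offset[0]::2, offset[1]::2]
    weights = {(0, 0): 1.0,
               (0, -1): 0.5, (0, 1): 0.5, (-1, 0): 0.5, (1, 0): 0.5,
               (-1, -1): 0.25, (-1, 1): 0.25, (1, -1): 0.25, (1, 1): 0.25}
    padded = np.pad(sparse, 1, mode='reflect')  # border handling (our choice)
    out = np.zeros((h, w))
    for (dy, dx), wgt in weights.items():
        out += wgt * padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

# Example: a constant mosaic must be reproduced exactly
flat = bilinear_channel(np.full((4, 4), 3.0), (0, 0))
```

Because the samples of one channel are two pixels apart, each output site receives weights that sum to one, so no extra normalization is needed.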

We used a guided filter for edge-preserving smoothing of the images taken at 0°, 45°, 90° and 135°. The guided filter assumes a local linear model between the high-resolution guide images (I0, I45, I90, I135) and the filtered images (I0int, I45int, I90int, I135int). The filter output qi is a


linear transform of the guide image in a window wk centered at pixel k, and this model is applied to all four images:

qi0 = ak I0(i) + bk,   ∀ i ∈ wk.   (10)

Similarly, this model can be applied at 45°, 90° and 135° to obtain the filter outputs qi45, qi90 and qi135, where ak and bk are linear coefficients assumed constant in the window wk. These coefficients can be determined by minimizing the following cost function in the window wk for I0 and I0int:

E(ak, bk) = Σ_{i∈wk} ((ak I0(i) + bk − I0int(i))² + ε ak²).   (11)

ak = ( (1/|w|) Σ_{i∈wk} I0(i) I0int(i) − μk Ī0int,k ) / (σk² + ε).   (12)

bk = Ī0int,k − ak μk.   (13)

Similarly, cost functions E45, E90 and E135 can be calculated for I45int, I90int and I135int, where μk and σk² are the mean and variance of I0 in wk, |w| is the number of pixels in wk, and Ī0int,k is the mean of I0int in wk. The corresponding ak and bk values can be calculated for E45, E90 and E135, respectively. The filter outputs for all the tentatively estimated images of the four polarizers can be found as follows:

qi0 = (1/|w|) Σ_{k: i∈wk} (ak I0(i) + bk)
qi45 = (1/|w|) Σ_{k: i∈wk} (ak I45(i) + bk)
qi90 = (1/|w|) Σ_{k: i∈wk} (ak I90(i) + bk)
qi135 = (1/|w|) Σ_{k: i∈wk} (ak I135(i) + bk).   (14)
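Equations (10)–(14) amount to the standard guided filter of He et al. [35]; a compact NumPy sketch is below (the function names, the integral-image box filter, and the default r and ε are our own illustrative choices, not the paper's exact parameters). The integral-image construction is what makes the computing time independent of the window size:

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, so the
    cost is independent of the radius r (edge-replicated borders)."""
    h, w = x.shape
    padded = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zero row/column
    win = 2 * r + 1
    s = c[win:, win:] - c[:h, win:] - c[win:, :w] + c[:h, :w]
    return s / win**2

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter [35]: the output is a locally linear transform of
    the guide I fitted to the input p, averaged over overlapping windows
    as in Eqs. (10)-(14)."""
    mean_I, mean_p = box(I, r), box(p, r)
    a = (box(I * p, r) - mean_I * mean_p) / (box(I * I, r) - mean_I**2 + eps)  # Eq. (12)
    b = mean_p - a * mean_I                                                    # Eq. (13)
    return box(a, r) * I + box(b, r)                                           # Eq. (14)
```

Here I plays the role of the high-resolution guide image (e.g. I0) and p the role of the interpolated low-resolution image (e.g. I0int).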

The guided filter provides the tentatively estimated pixel values for each 4-polarizer filter array (0°, 45°, 90°, 135°). The residuals (Δ) can be calculated from the original pixel values and the guided filter output as follows:

ΔI0(i, j) = I0(i, j) − qi0(i, j)
ΔI45(i, j) = I45(i, j) − qi45(i, j)
ΔI90(i, j) = I90(i, j) − qi90(i, j)
ΔI135(i, j) = I135(i, j) − qi135(i, j),   i = 1:n, j = 1:m.   (15)

The residuals (Δ) can be further interpolated and then added to the tentatively estimated pixel values. The ΔIint for 0°, 45°, 90° and 135° is shown in the residual-interpolated difference block in Fig. 4(a). At a pixel carrying a 90° sample, the missing interpolated residuals can be calculated by bilinear interpolation as follows:

ΔI45int = 1/2 (ΔI45(1,2) + ΔI45(3,2)).   (16)

ΔI135int = 1/2 (ΔI135(2,1) + ΔI135(2,3)).   (17)


ΔI0int = 1/4 (ΔI0(1,1) + ΔI0(1,3) + ΔI0(3,1) + ΔI0(3,3)).   (18)

Fig. 4. A 4 × 4 residual-interpolated difference block and the net residual-interpolated block.

Fig. 5. Flow chart for residual interpolation.

The net residual interpolation adds, pixel by pixel, the interpolated residuals (Δ) to the tentative estimates for each polarized image, as shown in Fig. 4(b). This can be represented as follows:


RI_I0(i, j) = ΔI0int(i, j) + qi0(i, j)
RI_I45(i, j) = ΔI45int(i, j) + qi45(i, j)
RI_I90(i, j) = ΔI90int(i, j) + qi90(i, j)
RI_I135(i, j) = ΔI135int(i, j) + qi135(i, j),   i = 1:n, j = 1:m.   (19)

The difference between residual and bilinear interpolation is that bilinear interpolation estimates a new pixel directly from its four nearest neighbors, whereas residual interpolation applies bilinear interpolation to the residuals between the tentatively estimated and observed pixel values. The interpolated residuals are added to the tentatively estimated pixel values to obtain the net residual interpolation.

In Fig. 5, the flow chart of residual interpolation is presented. First, the low-resolution polarization images are up-sampled using bilinear interpolation to generate high-resolution images. With the guided filter, the proposed algorithm can up-sample the sparse data by using these interpolated images together with the high-resolution guide images (I0, I45, I90, I135), so the image structures of the interpolated images are preserved. We generated the tentative estimates of the 0°, 45°, 90° and 135° images (qi0, qi45, qi90, qi135) and calculated the residuals, as presented in Eq. (15). The residuals were again interpolated using bilinear interpolation and added to the tentative estimates to obtain the residual interpolation, as shown in Eq. (19).
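The flow-chart steps can be illustrated with a 1-D toy version of the residual principle (the function names are our own; a real implementation would use the 2-D bilinear interpolator and guided filter described earlier as the interpolation and tentative-estimation operators):

```python
import numpy as np

def interp_missing(values, mask):
    """Linearly interpolate a 1-D signal known only where mask is True
    (stand-in for the 2-D bilinear interpolation step)."""
    idx = np.arange(values.size, dtype=float)
    return np.interp(idx, idx[mask], values[mask])

def residual_interp(observed, mask, tentative):
    """1-D sketch of Fig. 5: form the residual between observed samples
    and a tentative estimate (Eq. (15)), interpolate the residual, then
    add it back to the tentative estimate (Eq. (19))."""
    residual = observed - tentative   # meaningful only where mask is True
    return tentative + interp_missing(residual, mask)

# Example: samples of a ramp at even pixels, tentative estimate off by 1
x = np.arange(8, dtype=float)
truth = 2 * x + 1
mask = (x % 2 == 0)
tentative = 2 * x                 # hypothetical guide-based estimate
recon = residual_interp(truth, mask, tentative)
# recon equals truth exactly: the residual is constant and interpolates perfectly
```

Because the tentative estimate already carries the image structure from the guide, the residual is smooth and low-energy, which is why interpolating it is more accurate than interpolating the raw samples.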

3. Modulation transfer function

The modulation transfer function (MTF) of an imaging system is a measure of the contrast transferred by the system. The MTF measures the magnitude of the response of an imaging system to sinusoidal patterns at different spatial frequencies; simply put, it measures how well a camera resolves fine detail [36]. Over a range of spatial frequencies, the MTF can be calculated as the ratio of the contrast of the output sinusoidal pattern to that of the input.

The polarization image sensor captures the polarization information at each imaging frame. The input target image can be defined as sinusoidal patterns varying at different frequencies. We generated an artificial sinusoidal image in MATLAB for each frame, i.e., 0°, 45°, 90° and 135°, as follows [22]:

I0(x, y) = cos(2π fx x + 2π fy y) + 1
I45(x, y) = 2 cos(2π fx x + 2π fy y) + 2
I90(x, y) = cos(2π fx x + 2π fy y) + 1
I135(x, y) = 0,   (20)

where fx and fy are the spatial frequency components in the horizontal and vertical directions. All patterns were down-sampled in order to check the accuracy of the interpolation algorithms. We interpolated the down-sampled polarized images to obtain high-resolution images using the bilinear, bicubic, spline, gradient-based and residual interpolation algorithms. The S0 and S2 parameters vary sinusoidally, while S1 is constant. We then modeled how our imager would sample such a signal and applied the interpolation techniques. The ratio of the contrast of the interpolated sinusoidal signal to that of the original gives the MTF at one frequency; changing the frequency gives another MTF point. In this way, over a range of frequencies swept from 0 to 0.5 cycles per pixel, the MTF curve was plotted, as shown in Fig. 6. Figures 6(a) to 6(e) show the 3-D MTF charts of the bilinear, bicubic, spline, gradient-based and residual interpolation methods,


showing the MTF along fx and fy. The horizontal frequency fx and vertical frequency fy were swept from 0 to 0.5 cycles per pixel. Figure 6(f) shows the MTF response along fx = fy, with spline interpolation in cyan, bilinear in green, bicubic in blue, gradient-based in yellow and residual interpolation in red. The ideal MTF, shown by the dotted purple line, has unity gain from 0 to 0.5 cycles per pixel and zero gain at higher frequencies. All interpolation algorithms other than residual interpolation give low gain below 0.25 cycles per pixel and near-zero gain at higher frequencies. Residual interpolation has higher gain than the other methods at low frequencies and maintains it beyond 0.25 cycles per pixel. At higher frequencies, between 0.375 and 0.5 cycles per pixel, residual interpolation again provides increased gain compared with the other interpolation methods.
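The per-frequency MTF measurement can be sketched as follows, with a simple 1-D nearest-neighbour fill standing in for the interpolation algorithms under test (all names and the peak-to-peak contrast measure are our own illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def mtf_point(f, n=64):
    """MTF of a reconstruction scheme at one spatial frequency
    (cycles/pixel): the contrast ratio of the reconstructed sinusoid
    to the input test pattern, in the spirit of Eq. (20)."""
    x = np.arange(n)
    signal = np.cos(2 * np.pi * f * x) + 1   # Eq. (20)-style test pattern
    recon = signal.copy()
    recon[1::2] = recon[0::2]                # 2x down-sample + nearest-neighbour fill (stand-in)
    contrast = lambda s: (s.max() - s.min()) / 2
    return contrast(recon) / contrast(signal)
```

Sweeping f from just above 0 to 0.5 cycles per pixel and plotting `mtf_point(f)` traces one MTF curve; at the Nyquist frequency of the sub-sampled grid the stand-in reconstruction loses all contrast, as expected.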

Fig. 6. The MTF of intensity (S0) for interpolation algorithms: (a) bilinear, (b) bicubic, (c) spline, (d) gradient, (e) residual. (f) The MTF of S0 along fx = fy.

4. Experimental setup

To assess the accuracy of the different interpolation methods, the “true” high-resolution polarization image must be known beforehand, whereas DoFP polarization imaging sensors can only generate low-resolution images. Four images at 0°, 45°, 90° and 135° orientations were therefore taken of a car in an outdoor environment with a CMLN-13S2M-CS CCD camera mounted with a linear polarization filter. These true high-resolution images were down-sampled following the sampling pattern of the DoFP polarization imaging sensor, and four low-resolution images at 0°, 45°, 90° and 135° filter orientations were obtained, like


those acquired from a DoFP sensor. After applying the interpolation algorithms, the final high-resolution interpolated images were compared against the original true high-resolution images. The images obtained at 0°, 45°, 90° and 135° orientations are grayscale images. The intensity, DoLP and AoP images for the car are shown in Fig. 7. Potential error in the original high-resolution images due to optical misalignment is not an experimental concern: the algorithms are tested on low-resolution images derived from the same captures, so any original error is identical in both the low- and high-resolution images. Our concern is only to test the interpolation algorithms in terms of mean square error and visual evaluation, and our setup provides a fair comparison of the reconstruction error among the bilinear, bicubic, spline, gradient-based and residual interpolation methods.

5. Performance estimation

In this section, we adopt mean square error (MSE) and visual evaluation to compare the performance of the different interpolation algorithms. Each interpolation method is used to reconstruct high-resolution images from the low-resolution images; the interpolated images are compared with the true high-resolution images, and the polarization characteristics of the results are examined. Sections 5.1 and 5.2 give the visual image evaluation and the MSE of the test images, respectively.

5.1 Visual image evaluation

In Fig. 7, the intensity, DoLP and AoP images computed from the high-resolution car image are shown. These true high-resolution polarization images are used to visually compare the reconstruction accuracy of the different interpolation methods presented in Fig. 8 using small patches. In the first column of Fig. 8, the original intensity, DoLP and AoP are given, while the second to sixth columns show the bilinear, bicubic, spline, gradient and residual interpolation results, respectively.

Fig. 7. The true high-resolution image of a car: (a) car-intensity, (b) car-DoLP and (c) car-AoP.

In Fig. 7, the DoLP values are lower in the red areas and higher on the car’s glass windows, with the light blue spot on the glass marked with a white oval showing medium DoLP. The AoP value is low, medium, and high in the red, light blue and purple areas, respectively.
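For reference, the intensity, DoLP and AoP shown in Fig. 7 follow from the standard linear Stokes relations applied to the four orientation images. A minimal sketch (our own helper, not code from the paper):

```python
import numpy as np

def stokes_images(i0, i45, i90, i135):
    """Intensity (S0), DoLP and AoP computed from four linear-polarizer
    orientation images via the standard Stokes-parameter relations."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    # Guard against division by zero in dark pixels.
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)          # radians
    return s0, dolp, aop
```

For fully polarized light at 0°, for instance, i0 = 1, i90 = 0 and i45 = i135 = 0.5 give DoLP = 1 and AoP = 0.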

In Fig. 8, small patches of the car image are shown. The purple ovals on the original and residual intensity images highlight how effectively the artifacts have been recovered. Image artifacts and glitches are markedly reduced by residual interpolation, with the interpolated images close to the originals. The DoLP and AoP patches of the car likewise show the accuracy of residual interpolation compared with the bilinear, bicubic, gradient and spline algorithms.


Fig. 8. The true high-resolution image and comparison of interpolation methods on the (a) intensity, (b) DoLP, and (c) AoP, showing the effect of interpolation on the artifacts in the patches.

We used parallel programming to speed the processing up to real-time rates. On our system (Intel Core i5-3470 CPU @ 3.20 GHz, 8 GB RAM), bilinear interpolation computes the AoP image (960 × 1280) in 40 ms, bicubic in 47 ms, gradient in 45 ms, spline in 57 ms and residual interpolation in 61 ms. Most importantly, in terms of polarization information recovery, mean square error, visual evaluation and MTF, the residual interpolation performance is significantly better than that of the other interpolation methods.

Table 1. MSE performance comparison for car image

         Bilinear   Bicubic    Spline     Gradient   Our Method
I0       4.78E-04   4.50E-04   5.64E-04   4.65E-04   1.60E-38
I45      7.21E-04   6.94E-04   8.33E-04   7.03E-04   1.43E-39
I90      9.87E-04   9.56E-04   0.0011     9.74E-04   1.90E-39
I135     0.0023     0.0003     0.0043     0.0013     1.59E-40
S0       0.0057     0.0067     0.0078     0.0047     0.0043
S1       0.0026     0.0036     0.0088     0.0016     2.40E-04
S2       0.0029     0.0039     0.0042     0.0233     4.16E-04
DoLP     0.0333     0.0433     0.0583     0.0233     0.0133
AoP      0.064      0.074      0.0718     0.0557     0.0015

5.2 MSE comparison

The MSE for the different interpolation algorithms is found using the following equation:

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(O_{img}(i,j) - i_{img}(i,j)\right)^{2}, \qquad (21)$$

where $O_{img}(i,j)$ is the actual target pixel value, $i_{img}(i,j)$ is the corresponding pixel of the interpolated image, and $M$ and $N$ are the numbers of rows and columns in the image array, respectively. The mean square error results for the different interpolation methods on the car image are shown in Table 1. The minimum MSE for the I(0°), I(45°), I(90°), I(135°), intensity, DoLP and AoP images is obtained with the residual interpolation method. The spline interpolation method introduces the largest error, while the bicubic and gradient interpolation methods show similar error performance, with the latter being computationally more efficient.
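Eq. (21) corresponds directly to the following computation (a minimal sketch; the array values are illustrative, not data from Table 1):

```python
import numpy as np

def mse(o_img, i_img):
    """Mean square error of Eq. (21): the mean squared difference between
    the true high-resolution image O_img and the interpolated image i_img."""
    o_img = np.asarray(o_img, dtype=float)
    i_img = np.asarray(i_img, dtype=float)
    return float(np.mean((o_img - i_img) ** 2))

# e.g. comparing a 2x2 "true" patch with an interpolated one:
print(mse([[1, 2], [3, 4]], [[1, 2], [3, 6]]))  # 1.0
```

Each entry of Table 1 is this quantity evaluated for one polarization image (I0 ... AoP) and one interpolation method.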

6. Conclusion

In this paper, we proposed a residual interpolation algorithm for division of focal plane polarization image sensors. We compared the gradient, bilinear, bicubic and spline interpolation algorithms with residual interpolation. Performance was evaluated visually, by modulation transfer function (MTF) and by MSE, using images captured with a CCD camera and a linear polarization filter rotated in front of the sensor. The interpolation algorithms were applied to low-resolution images and compared statistically against the true high-resolution polarization images. We applied the algorithms to the intensity (S0), angle of linear polarization (AoP) and degree of linear polarization (DoLP) images to assess the accuracy of edge recovery and polarization information. The improvements in reconstruction accuracy using the proposed residual interpolation method were demonstrated both by MSE and visually, in comparison with the bilinear, bicubic, spline and gradient-based algorithms. This suggests that residual interpolation can bring a large improvement in output quality, particularly in terms of edge artifacts, for a real DoFP polarization image sensor.

Funding

The Qatar National Research Fund (NPRP9-421-2-170).

Acknowledgments

The authors would like to thank Neal Brock at 4D Technology and Shengkui Gao at Apple, United States, for their guidance on polarization image sensors.
