


Signal Processing: Image Communication 28 (2013) 597–607


Anaglyph image generation by matching color appearance attributes

Songnan Li *, Lin Ma, King Ngi Ngan 1

Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong, China

Article info

Article history:
Received 19 July 2012
Received in revised form 18 March 2013
Accepted 18 March 2013
Available online 4 April 2013

Keywords:
Anaglyph
CIELAB color space
Color appearance attributes
Subjective test

0923-5965/$ - see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.image.2013.03.004

* Corresponding author. Tel.: +86 91450996.
E-mail address: [email protected] (S. Li).
1 Fellow, IEEE.

Abstract

Anaglyph is an inexpensive three-dimensional (3D) displaying technique. It has deficiencies like color distortions, retinal rivalry, ghosting effect, etc. In this paper, a new method is proposed for anaglyph image generation. In contrast to most existing studies, which focus on solving part of the problems, the proposed method tries to minimize the color distortions, retinal rivalry, and ghosting effect simultaneously. It works in the CIELAB color space, which is perceptually uniform. Rather than matching the L*a*b* values of the stereo image pair and the perceived anaglyph image as in an existing method (McAllister et al., 2010 [1]), the proposed algorithm aims at matching perceptual color appearance attributes, which can accurately define the color perception of the human visual system (HVS). Rather than evaluating the performance based on typically 3–5 images as in most prior studies, subjective tests have been conducted, involving 25 stereo image pairs and 20 subjects, to compare four anaglyph generation methods. The proposed method demonstrates very good performance for most of the images in the subjective test.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

Among many three-dimensional (3D) displaying techniques, e.g., glasses-dependent or auto-stereoscopic [2], anaglyph is the least expensive way to make the 3D visual experience achievable on ordinary monitors or even prints [3], with no special hardware but only cheap colored glasses. Despite its many inherent deficiencies, its resurgence has been seen recently thanks to the abundance of 3D content, which is more easily accessible nowadays than ever.

Like other 3D displaying techniques, anaglyph provides a slightly different view to each of the two eyes. From the disparity between the two views and other visual cues [4], the human visual system can generate a stereoscopic representation of the spatial relationships in the scene. An anaglyph image is formed by superimposing the two views (i.e., the left and right images of a stereo image pair) in different colors. When perceived through colored glasses, the two images are separated from the composite one to feed each eye. The separation is based on color filtering, as has been elaborated in the literature [5,6]. Fig. 1 illustrates how anaglyph works using the most popular and simplest anaglyph method, i.e., Color anaglyph. As shown in Fig. 1, a red-cyan Color anaglyph image is generated by combining the red channel {R′_l}² of the left image and the green and blue channels {G′_r, B′_r} of the right image, i.e., {R′_l, G′_r, B′_r}. Wearing red-cyan colored glasses, the left eye perceives {R′_l}, since the red lens blocks most of the green and blue light, and the right eye perceives {G′_r, B′_r} due to the opposite color filtering property of the cyan lens.
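As an aside for readers who wish to experiment, this channel recombination is straightforward to implement. The following minimal Python/NumPy sketch (our own illustration, not code from the paper; the function name and array layout are assumptions) composes a red-cyan Color anaglyph from gamma-corrected left and right views:

```python
import numpy as np

def color_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Red-cyan Color anaglyph: red channel from the left view,
    green and blue channels from the right view.

    Both inputs are H x W x 3 arrays of gamma-corrected RGB values."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]    # {R'_l}
    anaglyph[..., 1] = right_rgb[..., 1]   # {G'_r}
    anaglyph[..., 2] = right_rgb[..., 2]   # {B'_r}
    return anaglyph
```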

Anaglyph images mainly suffer from three deficiencies, i.e., color distortions, retinal rivalry, and ghosting effect. Color distortions cause the colors perceived through the glasses to be quite different from those of the original scene. This can be observed in Fig. 1 by comparing the original stereo pair with the simulated perception of the two eyes. Retinal rivalry occurs when the same object appears to have different colors in the two eyes.

2 The ‘prime’ symbol denotes gamma-corrected signals.


Fig. 1. An illustration of anaglyph 3D displaying using Color anaglyph. (Panels: the left image L and right image R of the stereo pair; the red channel of L and the blue and green channels of R; the composite Color anaglyph; and the perceived left and right views after color glass filtering.)

This phenomenon can be observed in Fig. 1 by comparing the simulated perception of the two eyes. Retinal rivalry is distracting, and causes visual fatigue and other side effects [1]. As the color gamuts of our eyes behind the colored glasses do not overlap [1], retinal rivalry is inevitable. However, by manipulating the lightness, retinal rivalry can be significantly reduced. As shown in the example above, Color anaglyph does not attempt to control the retinal rivalry effect in its formation. It is evident from Fig. 1 that the fishes perceived by the left eye are much brighter than those perceived by the right eye. Under this circumstance the retinal rivalry will be extremely severe. Ghosting is a common problem for most 3D displaying techniques [7,8]. In anaglyph, ghosting is caused by the imperfect filtering of the light wavelengths, so that the unwanted image leaks through the colored glasses and mixes with the intended one. It is often more obvious in regions where there is a striking contrast and a large binocular disparity.

In this paper, a new algorithm is proposed for anaglyph image generation, with explicit considerations for lessening color distortions, retinal rivalry, and ghosting effect. The study is based on liquid crystal displays (LCDs) and red-cyan colored glasses, but the principle can also be applied to other types of displays, e.g., cathode ray tubes (CRTs), plasma display panels (PDPs), digital light projection (DLP) displays, etc., and other colored glasses, e.g., green-magenta (Trioscopic 3D) [9], yellow-blue (ColorCode 3D) [10], etc. The main reasons for investigating red-cyan anaglyph are that (1) the technique is patent-free and (2) the performance can be compared with existing work in the literature, most of which also studied red-cyan anaglyph. Recently, alternatives to the typical anaglyph colors have been proposed. For example, in magenta-cyan anaglyph [11] two primary colors are used to present each view. According to [11], in comparison to red-cyan anaglyph, the additional blue color for the left view can help to reduce the retinal rivalry effect and preserve the color accuracy. White-white anaglyph (Infitec Dualcolor3D) [12] was developed using narrow-band interference filters. The new technique can deliver full colors to each eye, similar to polarization or shutter-glasses based methods, but it cannot be applied to standard display monitors. These alternative methods are beyond the scope of our study.

This paper is structured as follows. In Section 2, brief reviews are given on existing studies of anaglyph generation algorithms. The motivation and novelty of the proposed method are also explained. Section 3 introduces the detailed implementation of the proposed method. Section 4 shows our experimental results. Finally, the conclusion is given in Section 5.

2. Related work

2.1. Simple anaglyphs

In this paper, simple anaglyphs refer to those methods which disregard the properties of the display and colored glasses. Representatives include Color anaglyph, Half Color anaglyph, Optimized anaglyph, etc., as formulated in [13]. According to the performance comparison in [13], based on five images of natural scenes, Color anaglyph seems to demonstrate the least color distortions, but on the other hand produces severe retinal rivalry, similar to our observation in Fig. 1. Half Color anaglyph and Optimized anaglyph address the retinal rivalry problem at the expense of color reproduction. For example, it has been claimed in [13] that Optimized anaglyph is almost free of luminance retinal rivalry, but produces no red shades. Simple anaglyphs are easy to implement and may perform well in many cases. However, they are empirical methods, leaving us with no means to fine-tune their behavior in accordance with the properties of the display and colored glasses to achieve superior performance.

2.2. XYZ and LAB anaglyphs

Dubois [5] proposed an anaglyph generation method, named XYZ anaglyph in this paper, which takes into account the spectral distributions of the display primaries and the transmission functions of the colored glasses. In general, the principle of XYZ anaglyph is to minimize the color distortions, i.e., the differences between the perception of the original stereo pair and that of the anaglyph image through the colored glasses, in the CIE 1931 XYZ color space. It can be formulated as

$$\min_{V_A} \lVert C \cdot V - C_A \cdot V_A \rVert \quad (1)$$

where $\cdot$ denotes matrix multiplication. In (1), V = [R_l, G_l, B_l, R_r, G_r, B_r]^T represents a pixel of the stereo pair,³ V_A = [R_A, G_A, B_A]^T is the corresponding pixel in the anaglyph image, C is a 6×6 matrix converting V from the device RGB color space into the CIE XYZ color space, and C_A is a 6×3 matrix whose multiplication with V_A serves two purposes, i.e., simulating the filtering of the colored glasses and converting the passing light into the CIE XYZ color space. Element values of the matrices C and C_A are determined by the spectral distributions of the display primaries, and C_A additionally depends on the transmission functions of the glasses. Since the color space conversion from RGB to CIE XYZ is linear and the distance between two XYZ colors is measured by the $l_2$ norm (Euclidean distance), Eq. (1) has a closed-form solution, i.e., $V_A = (C_A^T \cdot C_A)^{-1} \cdot C_A^T \cdot C \cdot V$, which means that XYZ anaglyph is actually as efficient as the simple anaglyphs introduced above, if disregarding the display gamma transfer. It should be noted that V_A calculated in this way may fall outside the gamut of the device RGB color space. Hence, a scaling of V_A is required, followed by a clipping to ensure that the final results lie inside the unit RGB cube. A similar approach was proposed in [14], which still calculates the color distortions in the CIE XYZ space but replaces the $l_2$ norm in Eq. (1) with the $l_\infty$ norm (Chebyshev distance). The resultant method requires solving a linear program for each pixel of the anaglyph image, and thus is quite time-consuming. The authors exploited color coherence and parallel processing to accelerate the execution.
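To make the closed-form projection concrete, a per-pixel sketch is given below (Python/NumPy, our own illustration; the matrices C and C_A are placeholders that would have to be filled in from the display primaries and the glasses' transmission functions as in [5]):

```python
import numpy as np

def xyz_anaglyph_pixel(v: np.ndarray, C: np.ndarray, C_A: np.ndarray) -> np.ndarray:
    """Least-squares projection of one stereo pixel onto an anaglyph pixel.

    v   : 6-vector [R_l, G_l, B_l, R_r, G_r, B_r] (linear RGB).
    C   : 6x6 matrix mapping v to the CIE XYZ of the left and right views.
    C_A : 6x3 matrix mapping the anaglyph pixel [R_A, G_A, B_A] to the
          XYZ actually perceived through the two lenses."""
    target = C @ v                                       # desired XYZ for both eyes
    v_a, *_ = np.linalg.lstsq(C_A, target, rcond=None)   # (C_A^T C_A)^-1 C_A^T C v
    # The paper additionally scales V_A before clipping; here we only clip
    # into the unit RGB cube for brevity.
    return np.clip(v_a, 0.0, 1.0)
```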

Recently, McAllister et al. [1] proposed an anaglyph method, namely LAB anaglyph, which minimizes the color distortions in the CIELAB color space. Compared to CIE XYZ, the CIELAB color space exhibits several major advantages: (a) it is regarded as a uniform color space, i.e., the Euclidean distance in the CIELAB color space correlates well with the perceptual color distance; (b) it incorporates a chromatic adaptation transform and non-linear response compression [15] to more accurately simulate the perception of the human visual system (HVS); (c) it provides means for transforming tri-stimulus values into several color appearance attributes, i.e., lightness, saturation, hue, etc. In the LAB anaglyph [1], V and V_A in (1) are converted into the CIELAB color space. This color space conversion is non-linear, hence, as in [14], the resultant formula needs to be optimized iteratively.

2.3. Ghosting reduction methods

Woods et al. [6] discussed in detail the sources of anaglyph ghosting. In [16], three methods were proposed to reduce ghosting, i.e., stereo pair registration, color component blurring, and depth manipulation. In [17], the authors proposed to inhibit ghosting by controlling the amount of saturation. In [18], ghosting reduction was implemented as a sequential process, i.e., analyzing differences between the left and right images, detecting the ghosting area, and eventually adjusting the intensities of the ghosting area. Tran [19] proposed ghosting reduction methods which relied on explicit knowledge of the properties of the display device and the colored glasses. They can serve as post-processing components for existing anaglyph algorithms. Recently, Sanftmann et al. [20] defined a model to quantify the perceived luminance through the colored glasses. Five parameters of the model can be captured by simple subjective tests. An illustration was given on how to use the model together with several simple anaglyph methods to reduce the ghosting artifacts.

3 The RGB values have gone through the display gamma, and have been normalized to 1.

2.4. Motivation and novelty

The proposed method is mainly inspired by the XYZ and LAB anaglyph methods. As aforementioned, the XYZ and LAB anaglyph methods try to minimize the Euclidean distance between the original and perceived colors in the CIE XYZ and CIELAB color spaces, respectively. Similarly, the proposed method also aims at minimizing the perceptual differences between the original and perceived colors. But instead of using the XYZ or L*a*b* values, our method is based on matching the semantically meaningful color appearance attributes, i.e., lightness, saturation, and hue. According to [15], color appearance attributes are deemed to be the color representations in the late stage of the HVS. They can be used to accurately define the HVS color perception. Therefore, anaglyph methods based on color appearance attributes have the potential to achieve superior performance.

Like the LAB anaglyph, the proposed method also works in the CIELAB color space. Their differences include:

1. LAB anaglyph targets minimizing the L*a*b* differences, while our method tries to match the perceptual color appearance attributes.

2. LAB anaglyph formulates the problem as an optimization process. Our method can manipulate the lightness, saturation, and hue in a more controllable manner. For example, since the lightness significantly influences the retinal rivalry, we intentionally match the lightness of the left and right views to alleviate the retinal rivalry effect.

3. LAB anaglyph needs iterative optimization. On the other hand, our method has a closed-form solution and hence is less computationally intensive.

In addition, the proposed method explicitly takes ghosting reduction into account. As elaborated in Section 3, we employ a subtractive ghosting reduction method. It is time-efficient and, according to experimental results, it does reduce the ghosting artifacts to a certain extent.

3. The proposed method

The systematic framework of the proposed algorithm is shown in Fig. 2. In general, we convert the stereo pair from the device RGB color space to the CIELAB color space, and then calculate three perceptual color appearance attributes for each pixel position, i.e., its lightness, saturation, and hue. We try to match these color appearance attributes to those of the perceived anaglyph, and finally convert them back into RGB. Detailed information on each processing module is given below.


Fig. 2. Systematic framework of the proposed algorithm.


3.1. Gamma transfer

When visual signals are captured, gamma correction, i.e., a non-linear transformation of the captured RGB values, is typically performed in order to compensate the non-linearity⁴ of traditional CRT displays and achieve correct color reproduction. Although modern displays show less or even no non-linearity, the same gamma correction for CRTs is still performed during signal capture, because the other major and even more important purpose of gamma correction is to minimize the perceptual distortions [21] introduced by signal compression or during signal communication. Consequently, modern displays, such as LCDs, PDPs, and DLPs, must simulate the gamma transfer function of CRT displays for correct color reproduction, and in the proposed method we therefore implement a gamma transfer function for CRTs. The 8-bit device RGB values {R′, G′, B′} of the stereo pair are normalized by multiplying by 1/255, and then transformed to {R, G, B}, as shown in Fig. 2, using the sRGB gamma transfer function. The inverse gamma transfer (i.e., gamma correction) is performed after the derivation of the RGB values of the anaglyph image, and ultimately the gamma-corrected values are scaled by multiplying by 255. For the formulation of the forward and inverse sRGB gamma transfer functions, please refer to [21,22].
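For reference, a minimal Python sketch of the forward and inverse sRGB transfer functions is given below (our own illustration based on the standard sRGB formulation referenced in [21,22]; the paper's implementation is in MATLAB):

```python
import numpy as np

def srgb_to_linear(v: np.ndarray) -> np.ndarray:
    """Forward gamma transfer: 8-bit values already normalized to [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v: np.ndarray) -> np.ndarray:
    """Inverse gamma transfer (gamma correction), applied before scaling by 255."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)
```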

3.2. Converting RGB to CIELAB

After simulating the gamma transfer, the RGB values of the stereo pair are converted into L*a*b* values of the CIELAB color space. The conversion can be separated into two steps. The first step is to project the RGB values {R_l, G_l, B_l, R_r, G_r, B_r} of the stereo pair⁵ into the CIE XYZ color space:

$$C_s = \begin{bmatrix} 0.4243 & 0.3105 & 0.1657 \\ 0.2492 & 0.6419 & 0.1089 \\ 0.0265 & 0.1225 & 0.8614 \end{bmatrix} \quad (2)$$

$$\begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = C_s \cdot \begin{bmatrix} R_r \\ G_r \\ B_r \end{bmatrix}, \qquad \begin{bmatrix} X_l \\ Y_l \\ Z_l \end{bmatrix} = C_s \cdot \begin{bmatrix} R_l \\ G_l \\ B_l \end{bmatrix} \quad (3)$$

The conversion matrix C_s is for typical LCD displays. In the second step, {X_l, Y_l, Z_l, X_r, Y_r, Z_r} are converted into L*a*b* values, i.e., {L_l, a_l, b_l, L_r, a_r, b_r}, using the equations below:

$$L_i = 116 \cdot f\!\left(\frac{Y_i}{Y_W}\right) - 16, \qquad a_i = 500 \cdot \left[ f\!\left(\frac{X_i}{X_W}\right) - f\!\left(\frac{Y_i}{Y_W}\right) \right], \qquad b_i = 200 \cdot \left[ f\!\left(\frac{Y_i}{Y_W}\right) - f\!\left(\frac{Z_i}{Z_W}\right) \right] \quad (4)$$

$$f(t) = \begin{cases} t^{1/3} & \text{if } t > 0.008856 \\ 7.787 \cdot t + 0.1379 & \text{otherwise} \end{cases} \quad (5)$$

where i ∈ {l, r}, and {X_W, Y_W, Z_W} are the XYZ values of the white point, i.e., [X_W, Y_W, Z_W]^T = C_s · [1, 1, 1]^T.

4 The non-linearity refers to the relationship between the device RGB values and the physical luminance of the display.

5 Subscripts 'l' and 'r' indicate variables related to the left and right images of the stereo pair, respectively.
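The two-step conversion can be sketched as follows (Python/NumPy, our own illustration; it uses the C_s matrix of Eq. (2), the f(t) of Eq. (5), and the white point C_s · [1, 1, 1]^T):

```python
import numpy as np

# Eq. (2): RGB -> CIE XYZ conversion matrix for a typical LCD display.
Cs = np.array([[0.4243, 0.3105, 0.1657],
               [0.2492, 0.6419, 0.1089],
               [0.0265, 0.1225, 0.8614]])
XW, YW, ZW = Cs @ np.ones(3)   # white point used in Eq. (4)

def f(t):
    """Eq. (5): CIELAB non-linear response compression."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 0.1379)

def rgb_to_lab(rgb: np.ndarray) -> np.ndarray:
    """Eqs. (3)-(4): convert linear RGB (..., 3) to CIELAB L*, a*, b*."""
    X, Y, Z = np.moveaxis(rgb @ Cs.T, -1, 0)   # per-pixel Cs * [R, G, B]^T
    L = 116.0 * f(Y / YW) - 16.0
    a = 500.0 * (f(X / XW) - f(Y / YW))
    b = 200.0 * (f(Y / YW) - f(Z / ZW))
    return np.stack([L, a, b], axis=-1)
```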

3.3. Matching color appearance attributes in the CIELAB color space

Given the L*a*b* values of the stereo pair, our objective is to get {L_Al, a_Al, b_Al, L_Ar, a_Ar, b_Ar} of the perceived anaglyph⁶ which generate lightness, saturation, and hue similar to those calculated from {L_l, a_l, b_l, L_r, a_r, b_r}. In our method, one (red) channel is used to present the left image and the other two channels (green and blue) are used to present the right one. Therefore, the L*a*b* values of the perceived anaglyph must be intentionally chosen so that, when converted back into RGB, the three-dimensional vector {L_Al, a_Al, b_Al} can be represented by a single red value {R_Al}, and {L_Ar, a_Ar, b_Ar} can be represented by {G_Ar, B_Ar}.

6 Subscripts 'Al' and 'Ar' indicate variables related to the left and right views of the anaglyph image, respectively.

3.3.1. Right image processing

Fig. 3. (a) The 256×256 colors {R_Ar = 0; G_Ar = 0, 1, …, 255; B_Ar = 0, 1, …, 255} in the CIELAB color space. (b) The approximation surface modeled by Eq. (9), indicated by the red dashed lines. (c) Top views of the original and the approximation surfaces.

Fig. 4. The red line resembles the top view of the approximation surface shown in Fig. 3(c). Examples 1–4 illustrate how Eqs. (10) and (11) project {a_r, b_r} to {a_Ar, b_Ar} in the CIELAB color space. The constant T is from Eq. (11).

Firstly, we calculate the lightness, hue, and saturation of the right image of the stereo pair. According to the definition of the CIELAB color space, the L* value, i.e., L_r, can be used to represent the lightness. The hue (H_r) and saturation (S_r) of the right image are represented by

$$H_r = \frac{180}{\pi} \cdot \left( \tan^{-1}\!\left(\frac{b_r}{a_r}\right) + \pi \cdot u(-a_r) \right) \quad (6)$$

$$S_r = \sqrt{a_r^2 + b_r^2} \quad (7)$$

where u(·) is the unit step function. {L_Ar, a_Ar, b_Ar} represents the right view of the perceived anaglyph image in the CIELAB color space. A good choice of {L_Ar, a_Ar, b_Ar} should satisfy two conditions: (1) they generate lightness, hue, and saturation values as close to L_r, H_r, and S_r as possible; (2) in the RGB color space, they can be represented by {R_Ar, G_Ar, B_Ar} with R_Ar = 0. For illustration, all {L_Ar, a_Ar, b_Ar} that conform to the second condition are plotted in Fig. 3(a). They are obtained by converting the 256×256 RGB values {R_Ar = 0; G_Ar = 0, 1, …, 255; B_Ar = 0, 1, …, 255} into the CIELAB color space. The conversion follows the two steps described in Section 3.2. In this case, the conversion (RGB to CIE XYZ) matrix C_Ar is

$$C_{Ar} = \begin{bmatrix} 0.0153 & 0.1092 & 0.1171 \\ 0.0176 & 0.3088 & 0.0777 \\ 0.0201 & 0.1016 & 0.6546 \end{bmatrix} \quad (8)$$

which is determined by the spectral distributions of the display primaries and also the transmission function of the right (cyan) lens. [X_Wr, Y_Wr, Z_Wr]^T = C_Ar · [1, 1, 1]^T is taken as the XYZ values of the white point.

To satisfy the second condition, we need to ensure that {L_Ar, a_Ar, b_Ar} lie on the surface shown in Fig. 3(a). However, it would be very complex to meet this requirement precisely. Therefore, we approximate the original surface shown in Fig. 3(a) by a simple surface empirically modeled as

$$\begin{cases} b_{Ar} = k \cdot a_{Ar} & \text{if } a_{Ar} > 0 \\ (a_{Ar}-a_c)^2-(b_{Ar}-b_c)^2 = a_c^2+b_c^2 & \text{otherwise} \end{cases} \quad (9)$$

and L_Ar ∈ [0, 100]. The parameters are set as k = −0.7273, a_c = 125, b_c = 172. The dashed lines in Fig. 3(b) represent the approximation surface. An animated version of Fig. 3(b) can be found on the project website [23], from which it can be observed that the original surface shown in Fig. 3(a) is nearly perpendicular to the a*–b* plane. Fig. 3(c) shows the top view. All {L_Ar, a_Ar, b_Ar} derived below will lie on this approximation surface. When they are converted back into RGB, the small red values (R_Ar) introduced by the approximation will be clipped to zero.

a_Ar and b_Ar specify the hue and saturation. Their values are determined by

$$\begin{cases} (a_{Ar}-a_c)^2-(b_{Ar}-b_c)^2 = a_c^2+b_c^2,\; a_{Ar}\le 0 & \text{if } H_{RED} \le H_r \le H_{CYAN} \\ b_{Ar} = k \cdot a_{Ar},\; a_{Ar} > 0 & \text{otherwise} \end{cases} \quad (10)$$

$$\sqrt{a_{Ar}^2+b_{Ar}^2} = \begin{cases} S_r \cdot \dfrac{|H_r-H_{RED}|}{T} & \text{if } |H_r-H_{RED}| \le T \\ S_r \cdot \dfrac{|H_r-H_{CYAN}|}{T} & \text{if } |H_r-H_{CYAN}| \le T \\ S_r & \text{otherwise} \end{cases} \quad (11)$$

where H_RED = 41.61, H_CYAN = 221.61, and T = 15. Fig. 4 illustrates how the projection from {a_r, b_r} to {a_Ar, b_Ar} is performed according to Eqs. (10) and (11):

- Colors in the light green area are projected to the left part of the approximation surface (examples 1 and 3).
- Colors in the light blue area are projected to the right part of the approximation surface (examples 2 and 4).
- For colors in the flat area (without slashes), the projection generates the same saturation value as that of the right image of the stereo pair, i.e., √(a_Ar² + b_Ar²) = S_r (examples 1 and 2).
- For colors in the textured area (with slashes), √(a_Ar² + b_Ar²) will be smaller than S_r (examples 3 and 4).

In examples 3 and 4, the two colors appear similar in the right image of the stereo pair. If their saturations were kept during the projection, the resultant colors would be widely apart, introducing visual artifacts into the anaglyph image. By de-saturation as in Eq. (11), the projected colors are located more closely.

L_Ar represents the lightness, and its value is determined by

$$L_{Ar} = \begin{cases} L_r \cdot \left(1 - P_{S_r} \cdot \dfrac{T - |H_r - H_{RED}|}{T}\right) & \text{if } |H_r - H_{RED}| < T \\ L_r & \text{otherwise} \end{cases} \quad (12)$$

$$P_{S_r} = \begin{cases} P_{max} & \text{if } S_r > S_{high} \\ P_{max} \cdot \dfrac{S_r - S_{low}}{S_{high} - S_{low}} & \text{if } S_{low} \le S_r \le S_{high} \\ 0 & \text{if } S_r < S_{low} \end{cases} \quad (13)$$

where P_max = 0.4, S_low = 40, S_high = 50, and H_RED and T are the same as in Eqs. (10) and (11). It can be observed from Eq. (12) that for most colors the lightness remains unchanged after the projection, i.e., L_Ar = L_r. Special treatment is given to red-like colors, i.e., a maximum of 40% lightness reduction (L_Ar = 0.6 L_r). This empirical operation intends to improve the chrominance accuracy of red-like colors (at the expense of increased retinal rivalry and less lightness accuracy), which helps to improve the overall visual quality according to our viewing experience. An example is given in Fig. 5, which illustrates the improvement in chrominance accuracy. As shown in (12) and (13), the amount of lightness reduction depends on the hue and saturation values, i.e., H_r and S_r, respectively. The maximum reduction is reached when H_r equals H_RED and the saturation value is high (S_r > S_high). As given by Eq. (13), for low-saturated colors (S_r < S_low), lightness reduction is not performed (i.e., P_{S_r} = 0). The purpose of this is to avoid abrupt lightness changes for low-saturated colors. For the same reason, P_{S_r} evolves slowly from 0 to P_max when the saturation value S_r lies between S_low and S_high.

Fig. 5. A red strawberry: (a) before the lightness reduction of the right view, (b) after the lightness reduction.
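To make the per-pixel logic of Eqs. (6), (7) and (11)–(13) concrete, a scalar (per-pixel) Python sketch is given below (our own illustration; the subsequent mapping of the adjusted chroma onto the approximation surface of Eqs. (9) and (10) is omitted here):

```python
import numpy as np

H_RED, H_CYAN, T = 41.61, 221.61, 15.0
P_MAX, S_LOW, S_HIGH = 0.4, 40.0, 50.0

def hue_saturation(a, b):
    """Eqs. (6)-(7): hue in degrees and saturation from a*, b*."""
    # arctan2 variant of Eq. (6); it differs only by a 360-degree wrap for
    # a > 0, b < 0, which does not affect the hue tests used below.
    H = np.degrees(np.arctan2(b, a)) % 360.0
    S = np.hypot(a, b)
    return H, S

def adjusted_chroma(H_r, S_r):
    """Eq. (11): de-saturate colors whose hue is within T degrees of red or cyan."""
    if abs(H_r - H_RED) <= T:
        return S_r * abs(H_r - H_RED) / T
    if abs(H_r - H_CYAN) <= T:
        return S_r * abs(H_r - H_CYAN) / T
    return S_r

def adjusted_lightness(L_r, H_r, S_r):
    """Eqs. (12)-(13): reduce the lightness of saturated red-like colors."""
    if S_r > S_HIGH:
        P = P_MAX
    elif S_r >= S_LOW:
        P = P_MAX * (S_r - S_LOW) / (S_HIGH - S_LOW)
    else:
        P = 0.0
    if abs(H_r - H_RED) < T:
        return L_r * (1.0 - P * (T - abs(H_r - H_RED)) / T)
    return L_r
```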

3.3.2. Left image processing

To minimize retinal rivalry, the left and right eyes should perceive identical lightness. Assuming that the stereo image pair is taken under well-controlled conditions (e.g., same shutter speed, aperture, etc.), matching points in the left and right views will have the same color. Therefore, to match the perceived lightness we simply set

$$L_{Al} = L_l \quad (14)$$

where L_l represents the lightness of the left image of the stereo pair. In the final anaglyph, only the red channel is used to present the left view. We use it to reproduce the lightness L_Al. Hence, there is no need to calculate a_Al and b_Al.

It should be noted that, because lightness reduction is applied to red-like colors in the right image, as given by Eqs. (12) and (13), lightness matching for these colors cannot be achieved. Although this lightness reduction causes more retinal rivalry and less lightness accuracy, it helps to improve the chrominance accuracy of red-like colors, which we find empirically to be useful for enhancing the overall visual quality of the perceived images.

3.4. Converting CIELAB to RGB

For the right image, {L_Ar, a_Ar, b_Ar} are first converted to {X_Ar, Y_Ar, Z_Ar} using

$$X_{Ar} = X_{Wr} \cdot f^{-1}\!\left(\frac{L_{Ar}+16}{116} + \frac{a_{Ar}}{500}\right), \qquad Y_{Ar} = Y_{Wr} \cdot f^{-1}\!\left(\frac{L_{Ar}+16}{116}\right), \qquad Z_{Ar} = Z_{Wr} \cdot f^{-1}\!\left(\frac{L_{Ar}+16}{116} - \frac{b_{Ar}}{200}\right) \quad (15)$$

$$f^{-1}(t) = \begin{cases} t^3 & \text{if } t > 0.2069 \\ 0.1284 \cdot (t - 0.1379) & \text{otherwise} \end{cases} \quad (16)$$

Then {X_Ar, Y_Ar, Z_Ar} are converted to RGB values by

$$\begin{bmatrix} R_{Ar} \\ G_{Ar} \\ B_{Ar} \end{bmatrix} = C_{Ar}^{-1} \cdot \begin{bmatrix} X_{Ar} \\ Y_{Ar} \\ Z_{Ar} \end{bmatrix} \quad (17)$$

As discussed in Section 3.3, the small R_Ar will be clipped to zero, leaving the right view represented by {G_Ar, B_Ar} only.
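This inverse mapping for the right view can be sketched as follows (Python/NumPy, our own illustration; C_Ar is taken from Eq. (8) and the inverse compression from Eq. (16)):

```python
import numpy as np

# Eq. (8): device RGB -> XYZ as perceived through the right (cyan) lens.
C_Ar = np.array([[0.0153, 0.1092, 0.1171],
                 [0.0176, 0.3088, 0.0777],
                 [0.0201, 0.1016, 0.6546]])
XWr, YWr, ZWr = C_Ar @ np.ones(3)   # white point behind the cyan lens

def f_inv(t):
    """Eq. (16): inverse of the CIELAB response compression."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.2069, t ** 3, 0.1284 * (t - 0.1379))

def lab_to_right_rgb(L_Ar, a_Ar, b_Ar):
    """Eqs. (15)-(17): CIELAB of the perceived right view back to device RGB.
    The small residual red value is clipped to zero, as in Section 3.3."""
    fy = (L_Ar + 16.0) / 116.0
    XYZ = np.stack([XWr * f_inv(fy + a_Ar / 500.0),
                    YWr * f_inv(fy),
                    ZWr * f_inv(fy - b_Ar / 200.0)], axis=-1)
    rgb = XYZ @ np.linalg.inv(C_Ar).T   # Eq. (17), applied per pixel
    rgb[..., 0] = 0.0                   # drop R_Ar: the right view uses G and B only
    return np.clip(rgb, 0.0, 1.0)
```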

For the left image, L_Al is first converted to Y_Al:

$$Y_{Al} = Y_{Wl} \cdot f^{-1}\!\left(\frac{L_{Al}+16}{116}\right) \quad (18)$$

where Y_Wl is from the XYZ values of the white point, i.e., [X_Wl, Y_Wl, Z_Wl]^T = C_Al · [1, 1, 1]^T, and the conversion matrix for the left lens is

$$C_{Al} = \begin{bmatrix} 0.1840 & 0.0179 & 0.0048 \\ 0.0876 & 0.0118 & 0.0018 \\ 0.0005 & 0.0012 & 0.0159 \end{bmatrix} \quad (19)$$

Given Y_Al, R_Al can be calculated by

$$R_{Al} = \frac{\max(Y_{Al} - 0.0118\,G_{Ar} - 0.0018\,B_{Ar},\; 0)}{0.0876} \quad (20)$$

where 0.0118·G_Ar and 0.0018·B_Ar quantify the luminance leakage for the left eye, which is subtracted from the intended luminance Y_Al. This is a subtractive ghosting reduction method. If the matrix C_Al is accurate, it can guarantee perfect ghosting inhibition in regions where Y_Al is larger than the luminance leakage. As discussed in [19,20], complete ghosting elimination requires suppressing the image dynamic range, which intuitively degrades visual quality.

It can be observed from Eq. (8) that, compared to that of the left eye, the luminance leakage of the right eye is far less severe. In [20], it is also claimed that for red-cyan glasses the right eye suffers less from luminance leakage (about 1/7 of that of the left eye). Therefore, in the proposed algorithm ghosting reduction is implemented for the left image only.

Eventually, the right and left RGB values are combined together as {R_Al, G_Ar, B_Ar}. Each element is clipped to the range 0–1. They are gamma corrected, and finally scaled by multiplying by 255.
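The left-view computation, including the subtractive ghosting reduction, can be sketched as follows (Python/NumPy, our own illustration; C_Al follows Eq. (19) and the inverse compression follows Eq. (16)):

```python
import numpy as np

# Eq. (19): device RGB -> XYZ as perceived through the left (red) lens.
C_Al = np.array([[0.1840, 0.0179, 0.0048],
                 [0.0876, 0.0118, 0.0018],
                 [0.0005, 0.0012, 0.0159]])
YWl = (C_Al @ np.ones(3))[1]   # Y of the white point behind the red lens

def f_inv(t):
    """Eq. (16): inverse of the CIELAB response compression."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.2069, t ** 3, 0.1284 * (t - 0.1379))

def left_red_value(L_Al, G_Ar, B_Ar):
    """Eqs. (18)-(20): red value reproducing the left-view lightness,
    with the luminance leaking from the right-view channels subtracted."""
    Y_Al = YWl * f_inv((L_Al + 16.0) / 116.0)     # Eq. (18)
    leak = 0.0118 * G_Ar + 0.0018 * B_Ar          # crosstalk into the left eye
    R_Al = np.maximum(Y_Al - leak, 0.0) / 0.0876  # Eq. (20)
    return np.clip(R_Al, 0.0, 1.0)
```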

3.5. Parameterization

For color space conversion and ghosting reduction, the values of the matrices C_s, C_Al, and C_Ar are cited from [1]. These parameters are fixed and should be suitable for typical LCD displays and red-cyan glasses. If they are refined using the true properties of the display and glasses, or by simple subjective tests as in [20], the performance can be further optimized. The other parameters are set manually. The values of k, a_c, and b_c are chosen so that the top views of the original and approximated surfaces, as introduced in Section 3.3, overlap closely. S_low, S_high, P_max, and T are determined empirically, based on a few stereo pairs which are excluded from the following subjective performance evaluation.

4. Performance evaluation

We conduct subjective tests to evaluate the performance of four anaglyph algorithms, i.e., Color anaglyph, XYZ anaglyph [5], LAB anaglyph [1], and the proposed anaglyph. To the best of our knowledge, this is the first subjective test described in the literature to evaluate the visual quality of anaglyph images. The experimental data can be downloaded from the project website [23].

4.1. Overall performance

We downloaded 25 stereo image pairs from the Internet [24,25], which cover a wide range of colors and scene types. They are down-scaled to standard definition (around 800×600) while keeping their aspect ratios. The right images of these stereo pairs are shown in Fig. 6. For each stereo pair, four anaglyph images, i.e., Color anaglyph, XYZ anaglyph, LAB anaglyph, and the proposed one, are presented simultaneously, as illustrated in Fig. 7. This side-by-side arrangement is to speed up the comparison process in order to lessen the fatigue of subjects. We implemented the Color and XYZ anaglyph generation algorithms. The implementation of the XYZ anaglyph generation algorithm strictly follows the instructions in [22]. The code of LAB anaglyph was provided by its author.

The evaluation is performed in a room with fluorescent lights. The display monitor is a 24″ 1920×1200 Dell LCD display (2407WFPB), and the viewing distance is about 3 times the screen height. The red-cyan glasses are the NVIDIA 3D Vision Discover. In total, 20 subjects (non-experts) attended the tests. They have normal vision (with or without corrective glasses) and have passed the color blindness test. For each stereo pair, each subject is asked to assess the visual quality of the four anaglyph images, which are displayed simultaneously. The assessment is reported on a five-point scale: 5 – Excellent, 4 – Good, 3 – Fair, 2 – Poor, and 1 – Bad. To lessen fatigue, subjects tell their opinions to another person who is responsible for recording the scores. There is no limitation on the viewing duration, and subjects are encouraged to express their feelings about what factors they attribute the quality to. The order of the 25 images and the positions of the four anaglyph methods are randomized for each subject. Each test lasts around 30 min. At the beginning, three anaglyph images are presented as a training process to familiarize the subjects with anaglyph 3D viewing and to test for possible stereo blindness of the subjects.

Fig. 6. Right images of the 25 stereo pairs used in the subjective tests.

Fig. 7. Four anaglyph images are displayed simultaneously in the original resolution.

We use the proposed method as a reference. Subjective ratings of the proposed method are subtracted from those of the corresponding Color, XYZ, and LAB anaglyphs. The difference values are processed using the method described in the BT.500 standard [26] to derive the difference mean opinion score (DMOS) and the 95% confidence interval for each anaglyph image. The β2 test suggested in [26] is used to identify subjects whose quality judgments deviate significantly from the distribution of the normal scores. Two subjects are rejected.
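As a rough illustration only (a simplified sketch of our own; the actual processing follows the full BT.500 procedure [26], including subject screening with the β2 test), the per-image DMOS and 95% confidence interval can be obtained from the difference scores roughly as:

```python
import numpy as np

def dmos_with_ci(scores_method: np.ndarray, scores_proposed: np.ndarray):
    """Difference mean opinion score and 95% confidence interval for one image.

    Both inputs hold the raw five-point ratings of the retained subjects;
    the proposed method serves as the reference."""
    diff = np.asarray(scores_method, float) - np.asarray(scores_proposed, float)
    dmos = diff.mean()
    ci95 = 1.96 * diff.std(ddof=1) / np.sqrt(diff.size)   # normal approximation
    return dmos, ci95
```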

Fig. 8 shows the final results for the 75 anaglyph images. Red, green, and blue circles indicate the DMOS of the Color, XYZ, and LAB anaglyphs, respectively. The vertical line passing through each circle represents the 95% confidence interval. A larger mean score denotes better visual quality. If a circle is below the dashed zero line, the proposed method demonstrates better visual quality. In general, it can be observed from Fig. 8 that the proposed method achieves very good performance. Most circles are below the dashed zero line.

Table 1 gives a brief summary of the performance comparison. For each method, the 1st row shows the mean DMOS over the 25 images (a higher value indicates better performance), and the 2nd/3rd/4th rows list the number of times the method ranks in the top 1/2/3 (i.e., a count of how many times the method is ranked first/second/third). Notice that there are ties for the best performer, so the sum of the values in the 2nd row does not add up to 25. Similarly, for the 3rd or 4th row, the sum of the values does not add up to 50 or 75 either. In general, it can be seen from Table 1 that the proposed method performs relatively well on this data set.

However, it is also evident that the performance is content-dependent. No method consistently outperforms another on all images, and the 95% confidence intervals are large in many cases, as shown in Fig. 8. According to comments from subjects collected during the tests, there is a variation between individuals about which factor is considered more important for good visual quality, e.g., less retinal rivalry, better chrominance accuracy, etc. The large 95% confidence intervals shown in Fig. 8 can be attributed to this variation.

4.2. Chrominance accuracy, retinal rivalry and ghosting effect

Five images are selected in another experiment to evaluate the image quality regarding specific visual factors, i.e., chrominance accuracy,⁷ retinal rivalry, and ghosting effect. To analyze the reason why one method has particularly good performance on a certain image, we select images based on the subjective quality evaluation results shown in Fig. 8. Four images, i.e., women, garden, strawberry and gate, are chosen because they are associated with the best performances of the four anaglyph methods, i.e., Color, XYZ, LAB, and the proposed one, respectively. One additional image, i.e., penguin, is selected because it is commonly used for comparing anaglyph methods [25].

7 Color accuracy includes both lightness and chrominance accuracy. In this experiment, only chrominance accuracy is tested, since we found in the previous experiment that our method may not be good at chrominance reproduction.

Fig. 8. Performance comparison on 25 images among the four anaglyph methods, using the proposed one as a reference. The dashed zero line: proposed method; red: Color anaglyph; green: XYZ anaglyph; blue: LAB anaglyph. Each circle corresponds to an anaglyph image, and the vertical line passing through each circle represents the 95% confidence interval. A larger mean score indicates better visual quality; the proposed method produces better quality if the circle is below the dashed zero line. (Test images: penguin, makeup, waxwork, gate, fishes, duck, garden, goose, superman, terrace, transformer, vase, woman, strawberry, rabbit, piano, expo, shop, flower, pavilion, map, tiger, tower, tree, spray.)

Table 1
A brief summary of the performance comparison among four anaglyph generation methods, i.e., Color, XYZ, LAB, and the proposed method, on 25 images.

Methods                  | Color  | XYZ    | LAB    | Proposed
Mean score on 25 images  | −0.260 | −0.176 | −0.173 | 0
No. of rank in top 1     | 4      | 9      | 8      | 8
No. of rank in top 2     | 7      | 14     | 13     | 19
No. of rank in top 3     | 15     | 18     | 22     | 22

The main differences between this experiment and the previous one described in Section 4.1 are as follows:

(1) Subjects need to give four quality scores for each anaglyph image, evaluating its chrominance accuracy, retinal rivalry, ghosting, and overall quality. Therefore, in the training stage, we explain to each subject the concepts of these visual phenomena using image examples. These image examples do not include the five images used in the testing stage.

(2) The original image is shown to each subject, so that the subject can memorize its color and evaluate the chrominance accuracy of the anaglyph images. Specifically, for each of the five images in the test, the left view of the stereo image pair is shown in 2D to the subject. He/she can take as much time as needed to memorize the colors. Then, he/she is asked to put on the 3D glasses and evaluate the four anaglyph images, which are arranged in the way shown in Fig. 7. To prevent memory fading, the chrominance accuracy is evaluated first, followed by retinal rivalry, ghosting, and overall quality.

The other experimental procedures are similar to those described in Section 4.1. For each subject, the whole session takes around 20–30 min to complete. In total, 18 subjects participate in the experiment. The assessment is also reported on a five-point scale. As in Section 4.1, subjective ratings of our anaglyph images are subtracted from those of the other anaglyph methods. The difference values are processed using the method described in the BT.500 standard [26] to derive the difference mean opinion score (DMOS). The β2 test [26] is used to identify outlier subjects. In this experiment, no subject is rejected.

Fig. 9 shows the experimental results. From top to bottom, the subfigures show the evaluation results for chrominance accuracy, retinal rivalry, ghosting effect, and overall visual quality. The horizontal axis indicates the different images, and the vertical axis indicates the DMOS: the higher the DMOS, the better the chrominance accuracy (or the less the retinal rivalry, the less the ghosting effect, or the better the overall quality). Red, green, and blue bars are associated with the Color, XYZ, and LAB anaglyph methods, respectively. A positive DMOS means the corresponding method performs better than the proposed one, while a negative DMOS means that it is outperformed by the proposed one.

It can be observed from Fig. 9 that, regarding chrominance accuracy, Color anaglyph outperforms the proposed method on all five images. It is the best performer for penguin, gate, women, and garden, and the second best for strawberry. LAB anaglyph is also relatively good at chrominance reproduction, as shown by penguin (a very colorful image) and especially strawberry. The XYZ anaglyph performs relatively worse than the others in terms of chrominance accuracy, but is better at presenting greenish colors, as shown by the test image garden.

Regarding the retinal rivalry effect, we can see that in general XYZ and our method clearly outperform the Color and LAB anaglyph methods, except for the test image women, for which all methods generate little retinal rivalry. When comparing Color with LAB anaglyph, it can be seen that in many cases (4 out of 5) LAB anaglyph is better at reducing retinal rivalry.

Regarding the ghosting effect, there is less variation between the different methods compared to that of chrominance accuracy and retinal rivalry. Generally, the proposed method seems to perform the best, closely followed by XYZ, LAB, and lastly Color anaglyph.

Comparing Figs. 8 and 9, we can see that the overall quality scores in these two figures exhibit similar trends. In particular, the top-performing method remains the same for each of the five tested images. On the other hand, the differences between these two sets of results are due to the variation of opinions between different subjects, as analyzed in Section 4.1, and also to the fact that this experiment uses the original image as a reference, while the previous one does not. The use of reference images does have an influence on the overall quality judgment.

Fig. 9. From top to bottom, the subfigures show the evaluation results for chrominance accuracy, retinal rivalry, ghosting effect, and overall visual quality. The horizontal axis indicates the different images; the vertical axis indicates the DMOS (the higher the DMOS, the better the chrominance accuracy, the less the retinal rivalry, the less the ghosting effect, or the better the overall quality). The red, green, and blue bars are associated with the Color, XYZ, and LAB anaglyph methods, respectively.

As exemplified by the very colorful image penguin, since subjects can refer to the original colors, the large overall quality differences shown in Fig. 8 now diminish in Fig. 9. The advantage of less retinal rivalry of XYZ and the proposed method is weakened by their disadvantage in chrominance reproduction. However, retinal rivalry indeed plays a very important role in the overall quality, despite the presence of reference colors. We simply calculate the linear correlation coefficient (LCC) between the different groups of quality scores. The LCC between the overall quality scores and the chrominance accuracy scores is 0.319, while the LCC between the overall quality scores and the retinal rivalry scores is 0.511, which means that retinal rivalry may be more closely related to the overall quality of perceived anaglyph images. However, this does not mean chrominance accuracy is not important. It can be observed from Fig. 9 that when retinal rivalry is subtle, chrominance accuracy dominates the overall quality judgment, as exemplified by women and strawberry.

It should be noted that, limited by the experiment duration, the above analysis is only based on five images, which may be inadequate. Future work is needed for a more detailed statistical analysis.

4.3. Complexity

Among the four anaglyph generation algorithms, Color anaglyph is the simplest one. If disregarding the gamma transfer, XYZ anaglyph is as time-efficient as Color anaglyph. LAB anaglyph has the greatest complexity, since it involves a non-linear transform in the minimization process, as discussed in Section 2, which needs to be solved iteratively. Compared to LAB anaglyph, the proposed method is much simpler. Without specific optimization, the Matlab code of the proposed method needs about 2 s to process an 800×600 image.

5. Conclusion

In this paper, a new algorithm is proposed for anaglyph image generation. Distinguished from the existing techniques, which aim at matching the perceived anaglyph image to the stereo pair in terms of RGB, XYZ, or L*a*b* values, the proposed method tries to match the three perceptual color appearance attributes, i.e., lightness, hue, and saturation, since they can more precisely define the HVS color perception. Subjective tests are conducted to evaluate the performance of various anaglyph generation methods. It turns out that the proposed algorithm performs quite well. The subjective tests also reveal that the performance is content-dependent. Besides, they show that there is a large variation between individuals about which factor is considered more important in the evaluation of the visual quality of anaglyph images. A further point to note is that incorporating color enhancement into the proposed anaglyph generation method is straightforward, by tuning the lightness and saturation values.

References

[1] D.F. McAllister, Y. Zhou, S. Sullivan, Methods for computing color anaglyphs, Stereoscopic Displays and Applications XXI 7524 (2010).

[2] L.M.J. Meesters, W.A. Ijsselsteijn, P.J.H. Seuntiens, A survey of perceptual evaluations and requirements of three-dimensional TV, IEEE Transactions on Circuits and Systems for Video Technology 14 (3) (2004) 381–391.

[3] R.Z. Zeng, H.Z. Zeng, Printing anaglyph maps optimized for display, Proceedings of the SPIE 7866 (2011) 78661S.

[4] M.J. Tovee, Brain and Space, An Introduction to the Visual System, 2nd ed., Cambridge University Press, 2008.

[5] E. Dubois, A projection method to generate anaglyph stereo images, in: IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, 2001, pp. 1661–1664.

[6] A.J. Woods, T. Rourke, Ghosting in anaglyphic stereoscopic images, Stereoscopic Displays and Virtual Reality Systems XI 5291 (2004) 354–365.

[7] A.J. Woods, Understanding Crosstalk in Stereoscopic Displays, in: Keynote Presentation at 3DSA (Three-Dimensional Systems and Applications) Conference, Tokyo, Japan, 2010.

[8] J. Konrad, B. Lacotte, E. Dubois, Cancellation of image crosstalk in time-sequential displays of stereoscopic video, IEEE Transactions on Image Processing 9 (5) (2000) 897–908.

[9] Trioscopic 3D, ⟨http://www.trioscopics.com/⟩.

[10] ColorCode 3D, ⟨http://www.colorcode3d.com/⟩.

[11] Robin Lobel, Magenta-cyan anaglyphs, ⟨http://www.divideconcept.net/papers/MCA-RL09.pdf⟩.

[12] Infitec Dualcolor3D, ⟨http://www.infitec-global-sales.com/english/infitec-definiton.htm⟩.

[13] 3DTV, ⟨http://3dtv.at/Knowhow/AnaglyphComparison_en.aspx⟩.

[14] Z. Zhang, D.F. McAllister, A uniform metric for anaglyph calculation, Stereoscopic Displays and Virtual Reality Systems XIII 6055 (2006).

[15] M.D. Fairchild, Color Appearance Models, 2nd ed., Wiley-IS&T, Chichester, UK, 2005.

[16] I. Ideses, L. Yaroslavsky, Three methods that improve the visual quality of colour anaglyphs, Journal of Optics A: Pure and Applied Optics 7 (2005) 755–762.

[17] W. Alkhadour, et al., Creating a Color Anaglyph from a Pseudo-Stereo Pair of Images, in: 4th International Conference on Information Technology, Amman, Jordan, June 2009.

[18] A.J. Chang, et al., Ghosting reduction method for color anaglyphs, Proceedings of the SPIE 6803 (2008) 68031G.

[19] V. Tran, New Methods for Rendering Anaglyphic Stereoscopic Images on CRT Displays and Photo-Quality Inkjet Printers, M.S. Thesis, University of Ottawa, 2005.

[20] H. Sanftmann, D. Weiskopf, Anaglyph stereo without ghosting, Computer Graphics Forum 30 (4) (2011) 1251–1259.

[21] C. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996.

[22] E. Dubois, Conversion of a Stereo Pair to Anaglyph with the Least-Squares Projection Method, ⟨http://www.site.uottawa.ca/~edubois/anaglyph/LeastSquaresHowToPhotoshop.pdf⟩, 2009.

[23] IVP Anaglyph Image Database, ⟨http://www.ee.cuhk.edu.hk/~snli/anaglyph_database.htm⟩.

[24] 3D China, ⟨http://www.china3-d.com/3dclub/⟩.

[25] SWell3D, ⟨http://www.swell3d.com/color-anaglyph-methods-compare.html⟩.

[26] ITU-R Recommendation BT.500-11, Methodology for the subjective assessment of the quality of television pictures, ITU, Geneva, Switzerland, 2002.