

Pattern Recognition 45 (2012) 1540–1558

Contents lists available at SciVerse ScienceDirect

Pattern Recognition

journal homepage: www.elsevier.com/locate/pr

Image representation using separable two-dimensional continuous and discrete orthogonal moments

Hongqing Zhu*

Department of Electronics and Communications Engineering, East China University of Science and Technology, No. 130 Mei Long Road, Shanghai 200237, China

Article info

Article history:

Received 26 January 2011

Received in revised form 29 September 2011

Accepted 3 October 2011
Available online 8 October 2011

Keywords:

Bivariate

Separable

Classical orthogonal polynomials

Discrete orthogonal moments

Continuous orthogonal moments

Local extraction

Tensor product

0031-3203/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.patcog.2011.10.002
* Tel./fax: +86 21 64253145.
E-mail address: [email protected]

Abstract

This paper addresses bivariate orthogonal polynomials, which are a tensor product of two different orthogonal polynomials in one variable. These bivariate orthogonal polynomials are used to define several new types of continuous and discrete orthogonal moments. Some elementary properties of the proposed continuous Chebyshev–Gegenbauer moments (CGM), Gegenbauer–Legendre moments (GLM), and Chebyshev–Legendre moments (CLM), as well as the discrete Tchebichef–Krawtchouk moments (TKM), Tchebichef–Hahn moments (THM), and Krawtchouk–Hahn moments (KHM), are presented. We also detail the application of the corresponding moments to describing noise-free and noisy images. Specifically, the local information of an image can be flexibly emphasized by adjusting the parameters in the bivariate orthogonal polynomials. The global extraction capability is also demonstrated by reconstructing an image using these bivariate polynomials as the kernels for a reversible image transform. Comparisons with the known moments are performed, and the results show that the proposed moments are useful in the field of image analysis. Furthermore, the study investigates invariant pattern recognition using the proposed three moment invariants that are independent of rotation, scale, and translation, and an example is given of using the proposed moment invariants as pattern features for a texture classification application.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Image moments have been shown to be useful in image analysis [1,2], image watermarking [3], and invariant pattern recognition [4,5]. These moments include continuous orthogonal Legendre moments (LLM) [6] and discrete orthogonal moments (e.g., Tchebichef moments (TTM) [7], Krawtchouk moments (KKM) [8], Hahn moments (HHM) [9–11]). All of these known two-dimensional orthogonal moments have separable basis functions that can be expressed as the product of two identical classical orthogonal polynomials, each of one variable. Such classical orthogonal polynomials of a single variable have long been studied [12–14].

Multivariate orthogonal polynomials are important research topics for pure mathematics and mathematical physics. Especially in recent years, bivariate orthogonal polynomials have attracted considerable research interest as solutions of second-order partial differential equations [15,16]. The general method of generating bivariate continuous orthogonal polynomials from univariate continuous orthogonal polynomials can trace its origins back to [17]. In [17], Koornwinder introduces seven different classes of orthogonal polynomials in two variables and gives some examples of two-variable analogs of the Jacobi polynomials. In addition to Jacobi polynomials, bivariate continuous orthogonal polynomials as extensions of other univariate continuous orthogonal polynomials have also been reported in [18,19]. Dunkl and Xu [20] give an excellent review of bivariate discrete orthogonal polynomials as the tensor product of two families of classical discrete orthogonal polynomials of one variable in their highly regarded work.

The applications of bivariate or multivariate orthogonal polynomials are evident in various areas of applied mathematics, but only a few papers address their application to image analysis and pattern recognition. The current work has been inspired by the tensor product of two orthogonal polynomials in one variable as proposed by [16,17,21–24]. By taking products of orthogonal polynomials, this paper presents several continuous and discrete orthogonal moments with bivariate orthogonal polynomials as basis functions. Unlike the traditional basis functions that are tensor products of two identical continuous (or discrete) orthogonal polynomials in one variable, the present study shows that the tensor products of two different or identical orthogonal polynomials in one variable, such as product Gegenbauer polynomials, product Chebyshev polynomials, product Chebyshev and Gegenbauer polynomials, product Gegenbauer and Legendre polynomials, product Chebyshev and Legendre polynomials, product Tchebichef and Krawtchouk polynomials, product Tchebichef and Hahn polynomials, and product Krawtchouk and Hahn polynomials, can also be used to generate basis functions for two-dimensional orthogonal moments; the corresponding moments are denoted GGM, CCM, CGM, GLM, CLM, TKM, THM, and KHM, respectively. To the best of our knowledge, no such basis functions have been used to define two-dimensional moments. The objective of this paper is to introduce such bivariate orthogonal polynomials into the field of image analysis and to demonstrate their usefulness in this field. In this study, we emphasize that this nomenclature derives from the leading literature [6], where Teague proposed orthogonal polynomials as the basis or kernel functions for moment calculations; different polynomials lead to various types of moments.

The accuracy of the proposed moments as global descriptors is assessed by reconstructing whole images and subsequently comparing the results to reconstructions using the known Legendre, Tchebichef, Krawtchouk, and Hahn moments. In addition, the study investigates the local image representation capabilities of the proposed moments by adjusting the parameters of the bivariate polynomials. When the parameters of the orthogonal polynomials are set to particular values, the emphasized region of these polynomials changes accordingly. Thus, by taking products of different polynomials, several of the proposed two-dimensional moment functions (e.g., TKM, THM, and KHM) gain the ability to extract features from any selected region of the image through an appropriate choice of parameters, giving additional flexibility in describing the image. In contrast to non-negative matrix factorization (NMF) [25,26], which attempts to find two non-negative matrices whose product approximates the original data and thereby achieves a part-based representation, our descriptor captures the local information of an image by adjusting the polynomial parameters.

As an important task in pattern recognition, texture classification has been the subject of extensive research during the last several years [27,28]. The key step in any texture classification process is the choice of the set of features used to characterize the texture, and moment invariants are highly popular sets that are widely used in various classification tasks. In this work, we use the proposed moments that are invariant under 2-D transforms, such as rotation, scaling, and translation, as features for texture classification. Because the proposed moments are expressed as linear combinations of the geometric moments, we can derive three moment invariants using the method applied to Krawtchouk moments by Yap et al. [8]. The texture classification accuracy of the proposed moment invariants is also compared to that of existing moment invariants. The experimental results indicate that the proposed descriptors have better discriminative power than methods based on geometric or complex moments.

The main contributions of this work include the following four aspects. (1) The tensor product of two different univariate polynomials can be used to construct basis functions for defining moments. (2) The paper derives three moment invariants that are independent of transformations involving rotation, scale, and translation. (3) By taking products of two different polynomials, some of the proposed moments gain a local feature extraction capability. (4) The proposed method can easily be extended to obtain other bivariate orthogonal polynomials and their corresponding moments.

The rest of this paper is structured as follows. In Section 2, we review the known results on the classical continuous and discrete orthogonal polynomials of one variable; these previous studies serve as basic background for the rest of this work. Sections 3 and 4 are devoted to introducing continuous and discrete orthogonal polynomials in two variables. Section 5 presents the two-dimensional orthogonal moments. Section 6 focuses on deriving moment invariants from the geometric moments. Section 7 demonstrates a few examples. Finally, concluding remarks can be found in Section 8.

2. Classical orthogonal polynomials of one variable

For completeness, we include the classical orthogonal polynomials of one variable to serve as a foundation for the rest of the work. These polynomials were previously obtained in [12–14].

2.1. Classical continuous orthogonal polynomials

2.1.1. Jacobi polynomials

The family of Jacobi polynomials $J_n^{(\alpha,\beta)}(x)$ satisfies the following orthogonality relation:

$$\int_{-1}^{1} (1-x)^{\alpha}(1+x)^{\beta}\, J_m^{(\alpha,\beta)}(x)\, J_n^{(\alpha,\beta)}(x)\,dx = \frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\,\frac{\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{n!\,\Gamma(n+\alpha+\beta+1)}\,\delta_{mn}, \quad m,n \ge 0, \qquad (1)$$

with the known conditions $\alpha > -1$ and $\beta > -1$, and the recurrence relation:

$$x\,J_n^{(\alpha,\beta)}(x) = \frac{2(n+1)(n+\alpha+\beta+1)}{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}\,J_{n+1}^{(\alpha,\beta)}(x) + \frac{\beta^2-\alpha^2}{(2n+\alpha+\beta+2)(2n+\alpha+\beta)}\,J_n^{(\alpha,\beta)}(x) + \frac{2(n+\alpha)(n+\beta)}{(2n+\alpha+\beta)(2n+\alpha+\beta+1)}\,J_{n-1}^{(\alpha,\beta)}(x). \qquad (2)$$

2.1.2. Gegenbauer polynomials

The Gegenbauer polynomials [12] $G_n^{(\lambda)}(x)$ are Jacobi polynomials with $\alpha=\beta=\lambda-1/2$, and satisfy the following orthogonality relation:

$$\int_{-1}^{1} (1-x^2)^{\lambda-1/2}\, G_m^{(\lambda)}(x)\, G_n^{(\lambda)}(x)\,dx = \frac{\pi\,\Gamma(n+2\lambda)\,2^{1-2\lambda}}{(\Gamma(\lambda))^2\,(n+\lambda)\,n!}\,\delta_{mn}, \qquad (3)$$

with the known conditions $\lambda > -1/2$, $\lambda \ne 0$. Gegenbauer orthogonal polynomials satisfy a recurrence relation of the form:

$$2(n+\lambda)\,x\,G_n^{(\lambda)}(x) = (n+1)\,G_{n+1}^{(\lambda)}(x) + (n+2\lambda-1)\,G_{n-1}^{(\lambda)}(x), \quad G_0(x)=1 \text{ and } G_1(x)=2\lambda x. \qquad (4)$$

The normalized Gegenbauer polynomials $\tilde{G}_n^{(\lambda)}(x)$ are defined as

$$\tilde{G}_n^{(\lambda)}(x) = \sqrt{\frac{(1-x^2)^{\lambda-1/2}\,(\Gamma(\lambda))^2\,(n+\lambda)\,n!}{\pi\,\Gamma(n+2\lambda)\,2^{1-2\lambda}}}\; G_n^{(\lambda)}(x). \qquad (5)$$

2.1.3. Chebyshev polynomials

The Chebyshev polynomials of the first kind [12] $C_n(x)$ can be obtained from the Jacobi polynomials by taking $\alpha=\beta=-1/2$, and satisfy the following orthogonality relation:

$$\int_{-1}^{1} (1-x^2)^{-1/2}\, C_m(x)\, C_n(x)\,dx = \begin{cases} \dfrac{\pi}{2}\,\delta_{mn}, & n \ne 0,\\[4pt] \pi\,\delta_{mn}, & n = 0. \end{cases} \qquad (6)$$

The Chebyshev polynomials of the first kind satisfy a recurrence relation of the form:

$$2x\,C_n(x) = C_{n+1}(x) + C_{n-1}(x), \quad C_0(x)=1 \text{ and } C_1(x)=x. \qquad (7)$$

The normalized Chebyshev polynomials $\tilde{C}_n(x)$ are defined as

$$\tilde{C}_n(x) = \sqrt{\frac{2}{\pi}}\,(1-x^2)^{-1/4}\, C_n(x), \quad n \ne 0. \qquad (8)$$


2.1.4. Legendre polynomials

The Legendre polynomials [12] $L_n(x)$ are Jacobi polynomials with $\alpha=\beta=0$, and satisfy the following orthogonality relation:

$$\int_{-1}^{1} L_m(x)\, L_n(x)\,dx = \frac{2}{2n+1}\,\delta_{mn}. \qquad (9)$$

Legendre orthogonal polynomials satisfy a recurrence relation of the form:

$$(2n+1)\,x\,L_n(x) = (n+1)\,L_{n+1}(x) + n\,L_{n-1}(x), \quad L_0(x)=1 \text{ and } L_1(x)=x. \qquad (10)$$

The normalized Legendre polynomials $\tilde{L}_n(x)$ are defined as

$$\tilde{L}_n(x) = \sqrt{\frac{2n+1}{2}}\, L_n(x). \qquad (11)$$
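As a concrete illustration of how such polynomials are evaluated in practice, the following sketch (Python/NumPy; the function name is ours, not the paper's) computes the normalized Legendre polynomials of Eqs. (10) and (11) by the three-term recurrence, which is numerically preferable to expanding powers of x:

```python
import numpy as np

def legendre_normalized(n_max, x):
    """Normalized Legendre polynomials L~_n(x), Eqs. (10)-(11).
    Returns an array of shape (n_max+1, len(x)); row n holds L~_n
    evaluated on the grid x, built by the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    L = np.zeros((n_max + 1, x.size))
    L[0] = 1.0                       # L_0(x) = 1
    if n_max >= 1:
        L[1] = x                     # L_1(x) = x
    for n in range(1, n_max):
        # (2n+1) x L_n = (n+1) L_{n+1} + n L_{n-1}, rearranged for L_{n+1}
        L[n + 1] = ((2 * n + 1) * x * L[n] - n * L[n - 1]) / (n + 1)
    norm = np.sqrt((2 * np.arange(n_max + 1) + 1) / 2.0)   # Eq. (11)
    return norm[:, None] * L
```

The Gegenbauer and Chebyshev cases follow the same pattern with the recurrences (4) and (7) and their respective normalizations.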

2.2. Classical discrete orthogonal polynomials

2.2.1. Tchebichef polynomials

The $n$th-order discrete Tchebichef polynomials $t_n(x;N)$ satisfy the following orthogonality relation

$$\sum_{x=0}^{N-1} t_n(x;N)\, t_m(x;N) = \rho_t(n;N)\,\delta_{nm}, \quad 0 \le m,n \le N-1, \qquad (12)$$

where $\rho_t(n;N) = (2n)!\binom{N+n}{2n+1}$, and a three-term recurrence relation of the type

$$(n+1)\,t_{n+1}(x;N) - (2n+1)(2x-N+1)\,t_n(x;N) + n(N^2-n^2)\,t_{n-1}(x;N) = 0, \quad n = 1,\ldots,N-1, \qquad (13)$$

with the initial conditions

$$t_0(x;N) = 1, \quad t_1(x;N) = 2x-N+1.$$

In this study, $\binom{p}{q} = \frac{p!}{q!\,(p-q)!}$ denotes the combination number. The normalized Tchebichef polynomials $\tilde{t}_n(x;N)$ were defined by Mukundan et al. [7] as

$$\tilde{t}_n(x;N) = t_n(x;N)/\sqrt{\rho_t(n;N)}. \qquad (14)$$
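The recurrence (13) and normalization (14) translate directly into code. The sketch below (helper name ours) builds the full N × N matrix of normalized Tchebichef values, i.e., the 1-D kernel used later for TTM, TKM, and THM; the norm ρ_t(n;N) is evaluated in log-space, and for large N (e.g., 255) a recurrence applied directly to the normalized polynomials would be needed to avoid overflow of the unnormalized values:

```python
import numpy as np
from math import lgamma

def tchebichef_matrix(N):
    """Rows: normalized Tchebichef polynomials t~_n(x;N), n, x = 0..N-1,
    via the recurrence (13) and the normalization (14); moderate N only."""
    x = np.arange(N, dtype=float)
    T = np.zeros((N, N))
    T[0] = 1.0                        # t_0(x;N) = 1
    if N > 1:
        T[1] = 2 * x - N + 1          # t_1(x;N) = 2x - N + 1
    for n in range(1, N - 1):
        T[n + 1] = ((2 * n + 1) * (2 * x - N + 1) * T[n]
                    - n * (N ** 2 - n ** 2) * T[n - 1]) / (n + 1)
    # rho_t(n;N) = (2n)! C(N+n, 2n+1), evaluated in log-space for stability
    log_rho = [lgamma(2 * n + 1) + lgamma(N + n + 1)
               - lgamma(2 * n + 2) - lgamma(N - n) for n in range(N)]
    return T * np.exp(-0.5 * np.array(log_rho))[:, None]
```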

2.2.2. Krawtchouk polynomials

The $n$th-order discrete Krawtchouk polynomials $k_n(x;p,N)$ are orthogonal on $V=\{0,1,\ldots,N\}$ with respect to the weight function:

$$w_k(x;N) = p^x (1-p)^{N-x}\binom{N}{x}. \qquad (15)$$

The Krawtchouk polynomials satisfy the following orthogonality condition:

$$\sum_{x=0}^{N} w_k(x;N)\, k_n(x;p,N)\, k_m(x;p,N) = \rho_k(n;N)\,\delta_{nm}, \quad 1 \le n,m \le N, \qquad (16)$$

where $\rho_k(n;N) = \frac{(-1)^n\, n!}{(-N)_n}\left(\frac{1-p}{p}\right)^n$, $(a)_k = a(a+1)(a+2)\cdots(a+k-1)$ is the Pochhammer symbol, and a three-term recurrence relation of the type:

$$-x\,k_n(x;p,N) = p(N-n)\,k_{n+1}(x;p,N) - [p(N-n)+n(1-p)]\,k_n(x;p,N) + n(1-p)\,k_{n-1}(x;p,N). \qquad (17)$$

The normalized Krawtchouk polynomials $\tilde{k}_n(x;p,N)$ were defined by Yap et al. [8] as

$$\tilde{k}_n(x;p,N) = k_n(x;p,N)\,\sqrt{\frac{w_k(x;N)}{\rho_k(n;N)}}. \qquad (18)$$
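An analogous sketch for the Krawtchouk kernel of Eq. (18) (helper name ours; SciPy assumed). The starting value $k_1(x) = 1 - x/(pN)$ follows from the recurrence (17) with $n=0$, since the $k_{n-1}$ term then vanishes, and the norm $(-1)^n n!/(-N)_n\,((1-p)/p)^n$ simplifies to $\frac{n!(N-n)!}{N!}\left(\frac{1-p}{p}\right)^n$, which we use in log-space:

```python
import numpy as np
from scipy.special import gammaln

def krawtchouk_matrix(N, p):
    """Rows: weighted Krawtchouk polynomials k~_n(x;p,N), Eq. (18),
    for n, x = 0..N; the rows form an orthonormal (N+1) x (N+1) matrix."""
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((N + 1, N + 1))
    K[0] = 1.0
    K[1] = 1.0 - x / (p * N)          # from Eq. (17) with n = 0
    for n in range(1, N):
        # Eq. (17) rearranged for k_{n+1}
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    # log w_k(x;N), Eq. (15)
    log_w = (gammaln(N + 1) - gammaln(x + 1) - gammaln(N - x + 1)
             + x * np.log(p) + (N - x) * np.log(1 - p))
    # log rho_k(n;N) = log[ n!(N-n)!/N! ((1-p)/p)^n ]
    n = np.arange(N + 1, dtype=float)
    log_rho = (gammaln(n + 1) + gammaln(N - n + 1) - gammaln(N + 1)
               + n * np.log((1 - p) / p))
    return K * np.exp(0.5 * (log_w[None, :] - log_rho[:, None]))
```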

2.2.3. Hahn polynomials

The $n$th-order discrete Hahn polynomials $h_n^{(a,b)}(x;N)$ are orthogonal on $V=\{0,1,\ldots,N-1\}$ with respect to the weight function

$$w_h(x;N) = \frac{\Gamma(N+a-x)\,\Gamma(b+1+x)}{\Gamma(x+1)\,\Gamma(N-x)}. \qquad (19)$$

The Hahn polynomials satisfy the following orthogonality condition

$$\sum_{x=0}^{N-1} w_h(x;N)\, h_n^{(a,b)}(x;N)\, h_m^{(a,b)}(x;N) = \rho_h(n;N)\,\delta_{nm}, \quad 0 \le n,m \le N-1, \qquad (20)$$

and a three-term recurrence relation of the type

$$x\,h_n^{(a,b)}(x;N) = a_n\,h_{n+1}^{(a,b)}(x;N) + b_n\,h_n^{(a,b)}(x;N) + \gamma_n\,h_{n-1}^{(a,b)}(x;N), \qquad (21)$$

where

$$\rho_h(n;N) = \frac{\Gamma(a+n+1)\,\Gamma(b+n+1)\,(a+b+n+1)_N}{(a+b+2n+1)\,n!\,(N-n-1)!}, \qquad (22a)$$

$$a_n = \frac{(n+1)(a+b+n+1)}{(a+b+2n+1)(a+b+2n+2)}, \qquad (22b)$$

$$b_n = \frac{a-b+2N-2}{4} + \frac{(b^2-a^2)(a+b+2N)}{4(a+b+2n)(a+b+2n+2)}, \qquad (22c)$$

$$\gamma_n = \frac{(a+n)(b+n)(a+b+N+n)(N-n)}{(a+b+2n)(a+b+2n+1)}. \qquad (22d)$$

The normalized Hahn polynomial $\tilde{h}_n^{(a,b)}(x;N)$ was defined by Zhu et al. [11] as

$$\tilde{h}_n^{(a,b)}(x;N) = h_n^{(a,b)}(x;N)\,\sqrt{\frac{w_h(x;N)}{\rho_h(n;N)}}. \qquad (23)$$

Fig. 1 shows the plots of the first few orders of the continuous orthogonal Gegenbauer, Chebyshev, and Legendre polynomials and of the normalized discrete orthogonal Tchebichef, Krawtchouk, and Hahn polynomials with different parameter values.

3. The bivariate continuous orthogonal polynomials

In [17], Koornwinder introduces seven different methods to define bivariate orthogonal polynomials. One of these methods is a direct tensor product of two orthogonal polynomials of a single variable. In this section, we start with the tensor product of Jacobi polynomials in one variable proposed by [16,17,23,24]:

$$J_{h,k}^{(\alpha,\beta,\gamma,\delta)}(x,y) = J_h^{(\alpha,\beta)}(x)\, J_k^{(\gamma,\delta)}(y), \qquad (24)$$

which are orthogonal on $[-1,1]\times[-1,1]$ with respect to the weight function

$$\rho(x,y) = (1-x)^{\alpha}(1+x)^{\beta}(1-y)^{\gamma}(1+y)^{\delta}, \quad \alpha,\beta,\gamma,\delta > -1.$$

Because the Gegenbauer, Chebyshev, and Legendre polynomials can be defined as particular cases of Jacobi polynomials, the following bivariate orthogonal polynomials can be interpreted as tensor products of two orthogonal polynomials in one variable with the help of Eq. (24).
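Numerically, a tensor-product basis function is simply an outer product of two 1-D polynomial samples. A minimal sketch using NumPy's built-in Vandermonde helpers (the grid size and orders are arbitrary illustration choices), here for the Chebyshev–Legendre pair of Eq. (29) below:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander
from numpy.polynomial.legendre import legvander

# Sample the bivariate Chebyshev-Legendre basis CL_{h,k}(x,y) = C_h(x) L_k(y)
# on a grid; each basis function is an outer product of two 1-D samples.
x = np.linspace(-1, 1, 64)
y = np.linspace(-1, 1, 64)
C = chebvander(x, 10)                 # columns: C_0(x), ..., C_10(x)
L = legvander(y, 10)                  # columns: L_0(y), ..., L_10(y)

h, k = 3, 2
CL_hk = np.outer(C[:, h], L[:, k])    # CL_{3,2}(x_i, y_j)
```

The same pattern yields any of the product bases in this section by swapping the 1-D factors.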


3.1. Product Gegenbauer polynomials

According to Eq. (24), we have the product Gegenbauer polynomials of two variables, $G_h^{(\lambda)}(x)\,G_k^{(\lambda)}(y)$, $0 \le h \le N_1-1$, $0 \le k \le N_2-1$, which are orthogonal on the set $V=[-1,1]\times[-1,1]$ with respect to the weight function:

$$\rho_g(x,y) = (1-x)^{\lambda-1/2}(1+x)^{\lambda-1/2}(1-y)^{\lambda-1/2}(1+y)^{\lambda-1/2}.$$

The bivariate Gegenbauer polynomials of order $n$ ($k=n-h$) are defined as

$$v(x,y) = GG_{h,k}(x,y;\lambda) = G_h^{(\lambda)}(x)\, G_k^{(\lambda)}(y). \qquad (25)$$

Fig. 1. The continuous orthogonal Chebyshev, Gegenbauer, and Legendre polynomials and the normalized discrete orthogonal Tchebichef, Krawtchouk, and Hahn polynomials (N=255). Panels (all of order 10): Chebyshev polynomials of the first kind; Gegenbauer polynomials (λ=0.75); Legendre polynomials; Tchebichef polynomials; Krawtchouk polynomials (p=0.1, 0.5, 0.9); Hahn polynomials ((a,b) = (20,0), (10,10), (10,100)). [Plots omitted; axes: x vs. polynomial values.]

3.2. Product Chebyshev polynomials

According to Eq. (24), we have the product Chebyshev polynomials of two variables, $C_h(x)\,C_k(y)$, $0 \le h \le N_1-1$, $0 \le k \le N_2-1$, which are orthogonal on the set $V=[-1,1]\times[-1,1]$ with respect to the weight function:

$$\rho_c(x,y) = (1-x)^{-1/2}(1+x)^{-1/2}(1-y)^{-1/2}(1+y)^{-1/2}.$$

The bivariate Chebyshev polynomials of order $n$ ($k=n-h$) are defined as

$$v(x,y) = CC_{h,k}(x,y) = C_h(x)\, C_k(y). \qquad (26)$$



3.3. Product Chebyshev–Gegenbauer polynomials

According to Eq. (24), we have the product Chebyshev and Gegenbauer polynomials, $C_h(x)\,G_k^{(\lambda)}(y)$, $0 \le h \le N_1-1$, $0 \le k \le N_2-1$, which are orthogonal on the set $V=[-1,1]\times[-1,1]$ with respect to the weight function

$$\rho_{cg}(x,y) = (1-x)^{-1/2}(1+x)^{-1/2}(1-y)^{\lambda-1/2}(1+y)^{\lambda-1/2}.$$

The bivariate Chebyshev–Gegenbauer polynomials of order $n$ ($k=n-h$) are defined as

$$v(x,y) = CG_{h,k}(x,y;\lambda) = C_h(x)\, G_k^{(\lambda)}(y). \qquad (27)$$

3.4. Product Gegenbauer–Legendre polynomials

According to Eq. (24), we have the product Gegenbauer and Legendre polynomials, $G_h^{(\lambda)}(x)\,L_k(y)$, $0 \le h \le N_1-1$, $0 \le k \le N_2-1$, which are orthogonal on the set $V=[-1,1]\times[-1,1]$ with respect to the weight function:

$$\rho_{gl}(x,y) = (1-x)^{\lambda-1/2}(1+x)^{\lambda-1/2}.$$

The bivariate Gegenbauer–Legendre polynomials of order $n$ ($k=n-h$) are defined as

$$v(x,y) = GL_{h,k}(x,y;\lambda) = G_h^{(\lambda)}(x)\, L_k(y). \qquad (28)$$

3.5. Product Chebyshev–Legendre polynomials

According to Eq. (24), we have the product Chebyshev and Legendre polynomials, $C_h(x)\,L_k(y)$, $0 \le h \le N_1-1$, $0 \le k \le N_2-1$, which are orthogonal on the set $V=[-1,1]\times[-1,1]$ with respect to the weight function:

$$\rho_{cl}(x,y) = (1-x)^{-1/2}(1+x)^{-1/2}.$$

The bivariate Chebyshev–Legendre polynomials of order $n$ ($k=n-h$) are defined as

$$v(x,y) = CL_{h,k}(x,y) = C_h(x)\, L_k(y). \qquad (29)$$

Many more bivariate polynomials can be obtained using a similar method. For example, the tensor product of two Legendre polynomials in one variable is known as the basis function of Legendre moments [6] in the image analysis community.

4. The bivariate discrete orthogonal polynomials

In [21,22], Xu also constructed and characterized bivariate polynomials as products of the classical discrete orthogonal polynomials of a single variable. Following the method proposed by Xu, this study constructs several bivariate discrete orthogonal polynomials, which are listed below.

4.1. Product Tchebichef–Krawtchouk polynomials

The tensor products of the Tchebichef and Krawtchouk polynomials in one variable, $t_l(x;N_1)$ and $k_m(y;p,N_2)$, are defined as

$$tk_{l,m}(x,y;p,N) = t_l(x;N_1)\, k_m(y;p,N_2), \quad 0 \le l \le N_1-1,\ 0 \le m \le N_2, \qquad (30)$$

and are orthogonal on the set $V=\{0,1,\ldots,N_1-1\}\times\{0,1,\ldots,N_2\}$ with respect to the weight function:

$$w_{tk}(x,y;N_1,N_2) = w_k(y;N_2) = \binom{N_2}{y} p^y (1-p)^{N_2-y}. \qquad (31)$$

Such tensor products may be regarded as bivariate discrete Tchebichef–Krawtchouk orthogonal polynomials, which satisfy the following linear partial difference equation:

$$\frac{x(N_1-x)}{n+1}\,\Delta_1\nabla_1 u(x,y) + (1-p)\,y\,\Delta_2\nabla_2 u(x,y) + \frac{N_1-1-2x}{n+1}\,\Delta_1 u(x,y) + [p(N_2-y)-(1-p)y]\,\Delta_2 u(x,y) + n\,u(x,y) = 0, \qquad (32)$$

where

$$\nabla_1 u(x,y) = u(x,y)-u(x-1,y), \quad \nabla_2 u(x,y) = u(x,y)-u(x,y-1),$$
$$\Delta_1 u(x,y) = u(x+1,y)-u(x,y), \quad \Delta_2 u(x,y) = u(x,y+1)-u(x,y).$$

For each $n$, the above equation has the solution $u(x,y) = tk_{l,m}(x,y;p,N)$.

4.2. Product Tchebichef–Hahn polynomials

The tensor products of the Tchebichef and Hahn polynomials in one variable, $t_l(x;N_1)$ and $h_m^{(a,b)}(y;N_2)$, are defined as

$$th_{l,m}(x,y;a,b,N) = t_l(x;N_1)\, h_m^{(a,b)}(y;N_2), \quad 0 \le l \le N_1-1,\ 0 \le m \le N_2-1, \qquad (33)$$

and are orthogonal on the set $V=\{0,1,\ldots,N_1-1\}\times\{0,1,\ldots,N_2-1\}$ with respect to the weight function:

$$w_{th}(x,y;N_1,N_2) = w_h(y;N_2) = \frac{\Gamma(N_2+a-y)\,\Gamma(y+b+1)}{\Gamma(N_2-y)\,\Gamma(y+1)}. \qquad (34)$$

Such tensor products may be regarded as bivariate discrete Tchebichef–Hahn orthogonal polynomials, which satisfy the following linear partial difference equation:

$$\frac{x(N_1-x)}{n+1}\,\Delta_1\nabla_1 u(x,y) + \frac{y(N_2+a-y)}{a+b+n+1}\,\Delta_2\nabla_2 u(x,y) + \frac{N_1-1-2x}{n+1}\,\Delta_1 u(x,y) + \frac{(b+1)(N_2-1)-(a+b+2)y}{a+b+n+1}\,\Delta_2 u(x,y) + n\,u(x,y) = 0. \qquad (35)$$

For each $n$, the above equation has the solution $u(x,y) = th_{l,m}(x,y;a,b,N)$.

4.3. Product Krawtchouk–Hahn polynomials

The tensor products of the Krawtchouk and Hahn polynomials in one variable, $k_l(x;p,N_1)$ and $h_m^{(a,b)}(y;N_2)$, are defined as

$$kh_{l,m}(x,y;p,a,b,N) = k_l(x;p,N_1)\, h_m^{(a,b)}(y;N_2), \qquad (36)$$

which are orthogonal on the set $V=\{0,1,\ldots,N_1\}\times\{0,1,\ldots,N_2-1\}$ with respect to the weight function:

$$w_{kh}(x,y;N_1,N_2) = \binom{N_1}{x} p^x (1-p)^{N_1-x}\,\frac{\Gamma(N_2+a-y)\,\Gamma(y+b+1)}{\Gamma(N_2-y)\,\Gamma(y+1)}. \qquad (37)$$

Such tensor products may be regarded as bivariate discrete Krawtchouk–Hahn orthogonal polynomials, which satisfy the following linear partial difference equation:

$$(1-p)\,x\,\Delta_1\nabla_1 u(x,y) + \frac{y(N_2+a-y)}{a+b+n+1}\,\Delta_2\nabla_2 u(x,y) + [p(N_1-x)-(1-p)x]\,\Delta_1 u(x,y) + \frac{(b+1)(N_2-1)-(a+b+2)y}{a+b+n+1}\,\Delta_2 u(x,y) + n\,u(x,y) = 0. \qquad (38)$$

For each $n$, the above equation has the solution $u(x,y) = kh_{l,m}(x,y;p,a,b,N)$.

This study only presents the above three polynomials. Similar methods can be used to obtain other bivariate discrete orthogonal polynomials (e.g., the product Krawtchouk–Tchebichef polynomials $kt_{l,m}(x,y;p,N) = k_l(x;p,N_1)\, t_m(y;N_2)$ and the product Hahn–Krawtchouk polynomials $hk_{l,m}(x,y;p,a,b,N) = h_l^{(a,b)}(x;N_1)\, k_m(y;p,N_2)$). It is worth mentioning that the basis functions of the known Tchebichef [7], Krawtchouk [8], and Hahn moments [9–11] can be taken as the tensor product of two identical univariate discrete orthogonal Tchebichef, Krawtchouk, and Hahn polynomials, respectively.

5. Two-dimensional orthogonal moments

5.1. Two-dimensional continuous orthogonal moments

Without loss of generality, we assume a digital image $f(x,y)$ of size $N \times N$ defined over the region $S=[-1,1]\times[-1,1]$. Here, $f(x,y)$ represents the gray level of individual pixels and $(x,y)$ are the pixel coordinates of the grayscale image. The 2-D continuous orthogonal moments of order $(h+k)$ are of the form:

$$Z_{hk} = \int_{-1}^{1}\int_{-1}^{1} f(x,y)\, v_{h,k}(x,y)\,dx\,dy, \quad 0 \le h,k \le \infty, \qquad (39)$$

where $v_{h,k}(x,y)$ is a continuous function of $(x,y)$ defined over $S$. The GGM, CCM, CGM, GLM, or CLM are obtained if $v_{h,k}(x,y)$ is set as $GG_{h,k}(x,y;\lambda)$, $CC_{h,k}(x,y)$, $CG_{h,k}(x,y;\lambda)$, $GL_{h,k}(x,y;\lambda)$, or $CL_{h,k}(x,y)$, respectively. By the orthogonality principle, the image function $f(x,y)$ can be written as an infinite series expansion in terms of the bivariate orthogonal polynomials over the region $S$:

$$f(x,y) = \sum_{h=0}^{\infty}\sum_{k=0}^{\infty} Z_{hk}\, v_{h,k}(x,y), \qquad (40)$$

where the continuous orthogonal moments $\{Z_{hk}\}$ are computed over the same region. It should also be noted that the computation of continuous orthogonal moments of digital images faces one problem: because the image intensity function is always defined in Cartesian coordinates $(x,y)$, $x,y = 0,\ldots,N-1$, the image coordinate space must be mapped to the range $[-1,1]$ where the orthogonal polynomial definition is valid. The mapping transformations are defined as follows:

$$x_i = -1 + \left(i+\tfrac{1}{2}\right)\Delta x, \quad y_j = -1 + \left(j+\tfrac{1}{2}\right)\Delta y, \quad i,j = 0,1,2,\ldots,N-1, \qquad (41)$$

where $\Delta x_i = \Delta x = 2/N$ and $\Delta y_j = \Delta y = 2/N$ are the sampling intervals in the $x$- and $y$-directions, respectively. Thus, the continuous orthogonal moments can be computed by

$$Z_{hk} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} f(x_i,y_j)\, v_{h,k}(x_i,y_j). \qquad (42)$$

If only the moments of order $(h+k) \le N$ are given, then the function $f(x,y)$ can be approximated by a continuous function as a truncated series:

$$\hat{f}(x_i,y_j) = \sum_{h=0}^{N}\sum_{k=0}^{N-h} Z_{hk}\, v_{h,k}(x_i,y_j). \qquad (43)$$
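As an illustration of Eqs. (39)–(43) for the CLM case, the sketch below (helper names ours) samples the normalized Chebyshev and Legendre polynomials of Eqs. (8) and (11) on the mapped grid of Eq. (41). We include the sampling-interval factor ΔxΔy in the moment sum — an implementation choice — so that the discrete sum approximates the integral of Eq. (39) with the normalized polynomials:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander
from numpy.polynomial.legendre import legvander

def _cl_bases(N, order):
    """Normalized Chebyshev (Eq. 8) and Legendre (Eq. 11) samples on grid (41)."""
    dx = 2.0 / N
    x = -1.0 + (np.arange(N) + 0.5) * dx                 # Eq. (41)
    C = chebvander(x, order) * (1 - x**2)[:, None]**(-0.25) * np.sqrt(2/np.pi)
    C[:, 0] /= np.sqrt(2.0)                              # C~_0 has norm pi, not pi/2
    L = legvander(x, order) * np.sqrt((2*np.arange(order+1) + 1) / 2.0)
    return C, L, dx

def clm_transform(f, order):
    """Chebyshev-Legendre moments Z_hk of an N x N image f, Eq. (42)."""
    N = f.shape[0]
    C, L, dx = _cl_bases(N, order)
    return C.T @ f @ L * dx * dx                         # area element dx*dy

def clm_reconstruct(Z, N):
    """Truncated inverse transform, Eq. (43): keep orders h + k <= maximum."""
    order = Z.shape[0] - 1
    C, L, _ = _cl_bases(N, order)
    H, K = np.meshgrid(np.arange(order+1), np.arange(order+1), indexing="ij")
    return C @ np.where(H + K <= order, Z, 0.0) @ L.T
```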

5.2. Two-dimensional discrete orthogonal moments

Consider a digital image $f(x,y)$ of size $N \times N$; the image's 2-D discrete orthogonal moments of order $(l,m)$ can be defined by the following relation:

$$D_{lm} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} f(x,y)\, u_{l,m}(x,y), \quad l,m = 0,1,2,\ldots, \qquad (44)$$

where $u_{l,m}(x,y)$ are the bivariate orthogonal polynomials used as the moment basis set. By choosing $u_{l,m}(x,y)$ as $tk_{l,m}(x,y;p,N)$, $th_{l,m}(x,y;a,b,N)$, $kh_{l,m}(x,y;p,a,b,N)$, $kt_{l,m}(x,y;p,N)$, or $hk_{l,m}(x,y;p,a,b,N)$, one obtains TKM, THM, KHM, KTM, or HKM, respectively. Fig. 2 shows the basis functions of several discrete orthogonal moments for a block of size $8 \times 8$. It is noted that the basis functions exhibit a progressive increase in frequency in both the vertical and horizontal directions. The top-left basis function is designated the DC coefficient and holds a constant value; the AC coefficients are the remaining 63 coefficients with non-zero frequencies.

Fig. 2. The basis functions of several discrete orthogonal moments (N=8). Neutral white represents one, white represents positive amplitudes, and black represents negative amplitudes. (d) TKM (p=0.5), (e) THM (a=b=4), and (f) KHM (p=0.5, a=b=4). [Images omitted.]

Using the orthogonality property of the normalized orthogonal polynomials, Eq. (44) also leads to the following inverse moment transform:

$$f(x,y) = \sum_{l=0}^{s}\sum_{m=0}^{s} D_{lm}\, u_{l,m}(x,y), \qquad (45)$$

where $s$ is $N-1$ for the Tchebichef and Hahn polynomials and $N$ for the Krawtchouk polynomials. Eq. (45) indicates that an image can be completely reconstructed by calculating its discrete orthogonal moments up to order $2N-2$ for THM (or $2N-1$ for KHM and TKM). Theoretically speaking, if one computes all of the image moments and uses them in Eq. (45), the reconstructed image will be identical to the original one, with minimal reconstruction error. Image reconstruction from discrete orthogonal moments produces better results than reconstruction from continuous ones and is easier to perform using matrix algebra. These advantages are primarily due to the discretization error inherent in implementations of continuous moments, which does not exist for discrete moments.

If the moments are limited to an order $K$, we can approximate $f$ with $\hat{f}$:

$$\hat{f}(x,y) = \sum_{l=0}^{K}\sum_{m=0}^{K-l} D_{lm}\, u_{l,m}(x,y). \qquad (46)$$
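Because the basis functions are separable, the forward transform of Eq. (44) and the inverse of Eq. (45) reduce to two matrix products along the rows and columns. A minimal sketch under one assumption: the 1-D kernels are generated here by QR-orthonormalization of weighted monomials, which reproduces the normalized Tchebichef basis (uniform weight) and the weighted Krawtchouk basis of Eq. (18) up to sign; the recurrence-based helpers sketched in Section 2 could be substituted:

```python
import numpy as np
from math import comb

def orthonormal_basis(N, weight):
    """QR-orthonormalize 1, x, x^2, ... under a discrete weight; the rows
    are orthonormal 'polynomial times sqrt(weight)' vectors on {0,...,N-1}."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, N, increasing=True) * np.sqrt(weight)[:, None]
    Q, _ = np.linalg.qr(V)
    return Q.T

N, p = 8, 0.5
T = orthonormal_basis(N, np.ones(N))                # ~ normalized Tchebichef
w_k = np.array([comb(N - 1, i) * p**i * (1 - p)**(N - 1 - i) for i in range(N)])
K = orthonormal_basis(N, w_k)                       # ~ weighted Krawtchouk, Eq. (18)

f = np.random.rand(N, N)                            # toy image
D = T @ f @ K.T                                     # TKM-style moments, Eq. (44)
f_rec = T.T @ D @ K                                 # inverse transform, Eq. (45)
assert np.allclose(f, f_rec)                        # exact up to round-off
```

The two-step structure (1-D transform of the rows, then of the columns) is the computational advantage of separability noted in Section 7.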

6. Moment invariants

The $(l+m)$th order geometric moment $M_{lm}$ of a grayscale image intensity function $f(x,y)$ of size $N \times N$ is defined using a discrete sum approximation as

$$M_{lm} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} x^l y^m f(x,y). \qquad (47)$$

Then the geometric moment invariants, which are independent of rotation, scaling, and translation, can be written as [8,29]

$$v_{lm} = M_{00}^{-r} \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} \left[(x-\bar{x})\cos\theta + (y-\bar{y})\sin\theta\right]^l \left[(y-\bar{y})\cos\theta - (x-\bar{x})\sin\theta\right]^m f(x,y), \qquad (48)$$

where

$$r = \frac{l+m}{2} + 1, \quad \theta = \frac{1}{2}\tan^{-1}\frac{2\mu_{11}}{\mu_{20}-\mu_{02}}, \qquad (49)$$

the coordinates $(\bar{x},\bar{y})$ denote the centroid of the image $f(x,y)$, defined as

$$\bar{x} = \frac{M_{10}}{M_{00}}, \quad \bar{y} = \frac{M_{01}}{M_{00}}, \qquad (50)$$

and $\mu_{lm}$ are the $(l+m)$th order central moments defined as

$$\mu_{lm} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x-\bar{x})^l (y-\bar{y})^m f(x,y)\,dx\,dy. \qquad (51)$$
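The following sketch implements Eqs. (47)–(50) directly (function name is ours; we use atan2 for the orientation angle to keep the correct quadrant, a common implementation choice):

```python
import numpy as np

def geometric_invariants(f, max_order=3):
    """RST-invariant geometric moments v_lm, Eq. (48)."""
    N1, N2 = f.shape
    y, x = np.meshgrid(np.arange(N2), np.arange(N1))    # x: rows, y: columns
    M00 = f.sum()
    xb, yb = (x * f).sum() / M00, (y * f).sum() / M00   # centroid, Eq. (50)
    xc, yc = x - xb, y - yb
    mu11 = (xc * yc * f).sum()                          # central moments
    mu20 = (xc**2 * f).sum()
    mu02 = (yc**2 * f).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)     # Eq. (49)
    xr = xc * np.cos(theta) + yc * np.sin(theta)        # rotated coordinates
    yr = yc * np.cos(theta) - xc * np.sin(theta)
    v = np.zeros((max_order + 1, max_order + 1))
    for l in range(max_order + 1):
        for m in range(max_order + 1):
            r = (l + m) / 2.0 + 1.0                     # Eq. (49)
            v[l, m] = (xr**l * yr**m * f).sum() / M00**r
    return v
```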

6.1. Tchebichef–Krawtchouk moment invariants

The Tchebichef–Krawtchouk moments of $\tilde{f}(x,y) = [w_{tk}(x,y;N,N-1)]^{-1/2} f(x,y)$ can be written in terms of geometric moments as

$$Q^{tk}_{lm} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} \tilde{t}_l(x;N)\,\tilde{k}_m(y;p,N-1)\,\tilde{f}(x,y) = [\rho_t(l;N)\,\rho_k(m;N-1)]^{-1/2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} t_l(x;N)\,k_m(y;p,N-1)\,f(x,y) = [\rho_t(l;N)\,\rho_k(m;N-1)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} c^N_{l,i}\,e^{N-1}_{m,j,p}\,M_{ij}, \qquad (52)$$

where the orthogonal Tchebichef and Krawtchouk polynomials are given by

$$t_l(x;N) = \sum_{i=0}^{l} c^N_{l,i}\,x^i \quad \text{and} \quad k_m(y;p,N-1) = \sum_{j=0}^{m} e^{N-1}_{m,j,p}\,y^j, \qquad (53)$$

and $c^N_{l,i}$ and $e^{N-1}_{m,j,p}$ are coefficients. Table 1 lists the first few coefficients in explicit form. $Q^{tk}_{lm}$ is a linear combination of the geometric moments $\{M_{ij}\}$ up to order $i=l$ and $j=m$, weighted by the coefficients $c^N_{l,i}$ and $e^{N-1}_{m,j,p}$; Eq. (53) thus transforms the nonorthogonal geometric moments into the orthogonal Tchebichef–Krawtchouk moments. According to [8,29], Eq. (48) can be rewritten as

$$\tilde{v}_{lm} = \sum_{i=0}^{l}\sum_{j=0}^{m}\binom{l}{i}\binom{m}{j}\left(\frac{N^2}{2}\right)^{\frac{i+j}{2}+1}\left(\frac{N}{2}\right)^{l+m-i-j} v_{ij}, \qquad (54)$$

in order to make the normalized image fall inside the domain $[0,N-1]\times[0,N-1]$, as required by the Tchebichef–Krawtchouk moments. The new set of moments is formed by replacing the regular geometric moments $\{M_{lm}\}$ by their invariant counterparts $\{\tilde{v}_{lm}\}$.

Table 1
The first few coefficients $c^N_{l,i}$, $e^N_{l,i,p}$, and $Z^{(a,b,N)}_{l,i}$ in explicit form ($(x)_k$ denotes the Pochhammer symbol).

(l, i) | $c^N_{l,i}$ | $e^N_{l,i,p}$ | $Z^{(a,b,N)}_{l,i}$
(0, 0) | $1$ | $1$ | $1$
(1, 0) | $1-N$ | $1$ | $(b+1)(1-N)$
(1, 1) | $2$ | $\frac{1}{(1-N)p}$ | $a+b+2$
(2, 0) | $(N-1)(N-2)$ | $1$ | $\frac{1}{2}(N-1)(N-2)(b+1)(b+2)$
(2, 1) | $-6(N-1)$ | $\frac{2}{(1-N)p}+\frac{1}{(1-N)(N-2)p^2}$ | $(a+b+3)\big[(b+2)(2-N)-\frac{1}{2}(a+b+4)\big]$
(2, 2) | $6$ | $\frac{1}{(N-1)(N-2)p^2}$ | $\frac{1}{2}(a+b+3)(a+b+4)$
(3, 0) | $-(N-1)(N-2)(N-3)$ | $1$ | $-\frac{1}{3!}(b+1)_3(N-3)_3$
(3, 1) | $12N^2-30N+22$ | $\frac{3}{(1-N)p}-\frac{3}{(N^2-3N+2)p^2}-\frac{2}{(N^3-6N^2+11N-6)p^3}$ | $\frac{1}{2}\big[(b+2)_2(N-3)_2(4+a+b)+(b+3)(N-3)(4+a+b)_2\big]+\frac{1}{3}(4+a+b)_3$
(3, 2) | $-30(N-1)$ | $\frac{3}{(N^3-6N^2+11N-6)p^3}+\frac{3}{(N^2-3N+2)p^2}$ | $-\big[\frac{1}{2}(4+a+b)_3+(b+3)(N-3)(4+a+b)_2\big]$
(3, 3) | $20$ | $-\frac{1}{(N^3-6N^2+11N-6)p^3}$ | $\frac{1}{6}(4+a+b)_3$

Thus, one obtains

$$\tilde{Q}^{tk}_{lm} = [\rho_t(l;N)\,\rho_k(m;N-1)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} c^N_{l,i}\,e^{N-1}_{m,j,p}\,\tilde{v}_{ij}. \qquad (55)$$

6.2. Tchebichef–Hahn moment invariants

Similarly, the Tchebichef–Hahn moments of $\tilde{f}(x,y) = [w_{th}(x,y;N,N)]^{-1/2} f(x,y)$ can be written in terms of geometric moments as

$$Q^{th}_{lm} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} \tilde{t}_l(x;N)\,\tilde{h}^{(a,b)}_m(y;N)\,\tilde{f}(x,y) = [\rho_t(l;N)\,\rho_h(m;N)]^{-1/2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} t_l(x;N)\,h^{(a,b)}_m(y;N)\,f(x,y) = [\rho_t(l;N)\,\rho_h(m;N)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} c^N_{l,i}\,Z^{(a,b,N)}_{m,j}\,M_{ij}, \qquad (56)$$

where the $Z^{(a,b,N)}_{m,j}$ are the coefficients listed in Table 1, and $\rho_h(m;N)$ is given by Eq. (22a). The new set of moments is formed by replacing the regular geometric moments $\{M_{ij}\}$ by their invariant counterparts $\{\tilde{v}_{ij}\}$. Similarly, we obtain

$$\tilde{Q}^{th}_{lm} = [\rho_t(l;N)\,\rho_h(m;N)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} c^N_{l,i}\,Z^{(a,b,N)}_{m,j}\,\tilde{v}_{ij}. \qquad (57)$$

6.3. Krawtchouk–Hahn moment invariants

The Krawtchouk–Hahn moments of $\tilde{f}(x,y) = [w_{kh}(x,y;N-1,N)]^{-1/2} f(x,y)$ can be written in terms of geometric moments as

$$Q^{kh}_{lm} = \sum_{x=0}^{N-1}\sum_{y=0}^{N-1} \tilde{k}_l(x;p,N-1)\,\tilde{h}^{(a,b)}_m(y;N)\,\tilde{f}(x,y) = [\rho_k(l;N-1)\,\rho_h(m;N)]^{-1/2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1} k_l(x;p,N-1)\,h^{(a,b)}_m(y;N)\,f(x,y) = [\rho_k(l;N-1)\,\rho_h(m;N)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} e^{N-1}_{l,i,p}\,Z^{(a,b,N)}_{m,j}\,M_{ij}. \qquad (58)$$

Forming the new set of moments by replacing the regular geometric moments $\{M_{ij}\}$ by their invariant counterparts $\{\tilde{v}_{ij}\}$, we obtain

$$\tilde{Q}^{kh}_{lm} = [\rho_k(l;N-1)\,\rho_h(m;N)]^{-1/2}\sum_{i=0}^{l}\sum_{j=0}^{m} e^{N-1}_{l,i,p}\,Z^{(a,b,N)}_{m,j}\,\tilde{v}_{ij}. \qquad (59)$$

7. Experimental results

In the previous sections, several continuous and discrete orthogonal moments were introduced. To verify their global and local feature extraction properties, this section first discusses the problem of image reconstruction and subsequently offers a texture classification example to test the discriminative capability of the proposed moment invariants in recognizing rotated, scaled, and translated texture features.

First, the proposed bivariate orthogonal polynomials (continuous or discrete) are used to transform the image pixel intensities into image moments. The separability of the tensor products allows the proposed moments to be computed easily in two steps by successive 1-D operations on the rows and columns of the image. Next, the bivariate orthogonal polynomials are used to inverse-transform the image moments back to image pixel intensities. In this study, both Eqs. (43) and (46) are inverse moment transforms and provide image reconstruction from a finite set of moments. In order to evaluate the performance of the different methods, the mean square error (mse) between the original image $f(x,y)$ and the image $\hat{f}(x,y)$ reconstructed from a finite set of its moments is calculated by

$$\mathrm{mse} = \frac{1}{N^2}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\left[f(x,y)-\hat{f}(x,y)\right]^2. \qquad (60)$$

This error is considered to be a good measure of the quality of image reconstruction by specific moments.
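For reference, Eq. (60) in code (a trivial helper, but it is the figure of merit used in all reconstruction experiments below):

```python
import numpy as np

def mse(f, f_rec):
    """Mean square error between an image and its reconstruction, Eq. (60)."""
    f = np.asarray(f, dtype=float)
    return np.mean((f - np.asarray(f_rec, dtype=float)) ** 2)
```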

A set of eight images, shown in Fig. 3, is used as the test images in this study. Apart from the images Girl and Duck, the images are standard test images from the Waterloo Image Repository [30] that are used to evaluate the performance of the proposed moments on various kinds of graphical material (e.g., landscapes, animals, portraits, and textures). To make this set more challenging, the large, highly textured image Barbara (512×512) is included; the other images are 256×256 pixels. The image Baboon has strong local texture, the standard test image Goldhill presents fine detail, and the Cameraman and Bird images present large uniform regions. The image Duck is chosen to evaluate the local feature extraction capability of the proposed moments.


Fig. 3. Test images: (a) Lena, (b) Goldhill, (c) Girl, (d) Cameraman, (e) Baboon, (f) Bird, (g) Duck, and (h) Barbara. [Images omitted.]

Table 2. Image reconstruction of the grayscale images Lena and Girl using GGM, CCM, LLM, CGM, GLM, and CLM, with moments of orders 50, 100, and 255. [Reconstructed images omitted.]


7.1. Global feature extraction

First, we use the grayscale images Lena, Girl, and Barbara to compare the global image description capability of the continuous GGM, CCM, CGM, GLM, CLM, and LLM. The LLM are the classical Legendre moments defined by Teague in [6]. The reconstructions of the images Lena and Girl with an increasing number of moments are tabulated in Table 2. The next step consists of reconstructing the large, highly textured image Barbara from its moments. Fig. 4 shows the quality of the reconstruction and indicates that the reconstructed image is closer to the original one when the maximum moment order reaches a certain value. To compare the performance of each moment, we measured the reconstruction error. The images' reconstruction errors and mses when comparing the different methods are shown in Fig. 5. The results show that GGM performed better in terms of reconstruction error when compared with CCM, CGM, GLM, CLM, and LLM. Therefore, it is highly possible that GGM has desirable properties for the field of image processing and pattern recognition.

Fig. 4. Reconstruction of the image Barbara using GGM, CCM, LLM, CGM, GLM, and CLM. The orders from left to right are 50, 150, 300, 450, and 512. [Images omitted.]

Fig. 5. Comparative study of reconstruction errors by using GGM (λ=2), CCM, LLM, CGM (λ=2), GLM (λ=2), and CLM: (a) errors from image Girl, (b) errors from image Lena, and (c) errors from image Barbara. [Plots omitted.]

The second experiment compares the global feature extraction capability of the proposed discrete orthogonal moments on the same images by reconstructing the complete images from the moments. In this case, the parameter p is 0.5 for the Krawtchouk polynomials, and a and b are 10 for the Hahn polynomials. Because of the influence of the parameter p on the Krawtchouk polynomials and of the parameters a and b on the Hahn polynomials, the region of emphasis of the Krawtchouk and Hahn polynomials is then focused on the center of the range 0–255. These facts were already shown by Yap [8,10] and by our previous study (Ref. [11], pp. 346–347). Here, with parameter p=0.5 for the Krawtchouk polynomials and a=b=10 for the Hahn polynomials, the KKM, HHM, TKM, THM, and KHM are set to global feature extraction mode. Image reconstruction processes for the test images Girl and Lena using TTM, KKM, HHM, TKM, THM, and KHM with order up to 255 are shown in Table 3. Reconstructions of the large, highly textured image Barbara based on the proposed discrete moments of orders 50, 150, 300, 450, and 512 are illustrated in Fig. 6. Fig. 7 shows the detailed plots of the corresponding mses using six different orthogonal moments of maximum order up to 512. Note that the reconstruction errors decrease monotonically with an increase in the number of orders, as predicted. We observe that the reconstruction results based on KKM are better than those of TTM, HHM, TKM, THM, and KHM. Fig. 7 illustrates that KHM is not as good as KKM; however, it outperforms HHM, TKM, THM, and TTM. Both TTM and THM perform poorly. We draw the conclusion that KKM has better global feature extraction than the other moments. Moreover, the moments whose basis functions include Krawtchouk polynomials have better feature extraction capabilities, while the moments constructed from Tchebichef polynomials have weak extraction capability.

Table 3. Image reconstruction of the grayscale images Lena and Girl using TTM, KKM (p=0.5), HHM (a=b=10), TKM (p=0.5), THM (a=b=10), and KHM (p=0.5, a=b=10), with moments of orders 50, 100, and 255. [Reconstructed images omitted.]

Fig. 6. Reconstruction of the image Barbara using TTM, KKM (p=0.5), HHM (a=b=10), TKM (p=0.5), THM (a=b=10), and KHM (p=0.5, a=b=10). The orders from left to right are 50, 150, 300, 450, and 512. [Images omitted.]

Fig. 7. Comparative study of reconstruction errors by using TKM (p=0.5), THM (a=b=10), and KHM (p=0.5, a=b=10) in three different images: (a) errors from image Girl, (b) errors from image Lena, and (c) errors from image Barbara. [Plots omitted.]

To investigate whether different types of moments may be suitable for different images, we compute the average mse over the test images Lena, Goldhill, Girl, Cameraman, Baboon, and Bird, shown in Fig. 8. Fig. 8(a) indicates that the continuous GGM and LLM perform better than the other continuous moment methods, and Fig. 8(b) shows that the performances of KKM and KHM are significantly better than those of any of the other discrete moments. This is consistent with the results obtained in the above experiments.

Fig. 8. Comparative study of average mse on test images Lena, Goldhill, Girl, Cameraman, Baboon, and Bird: (a) continuous moments, (b) discrete moments. [Plots omitted.]

Since higher-order moments are sensitive to image noise, robustness against different types of noise is usually evaluated in research on moments. In order to further test the robustness of the proposed moments, we created degraded versions of the images Baboon, Cameraman, and Goldhill by adding either zero-mean Gaussian noise with variance 0.01 or 3% salt-and-pepper noise. The reconstructed images obtained using CLM, GLM, CGM, GGM, CCM, and LLM are compared with those obtained using the recently studied Gaussian–Hermite moments (GHM) [31], as shown in Fig. 9. The first row depicts the noisy versions of the test images, and the last row gives the images reconstructed using GHM. The corresponding error comparison is shown in Table 4. From this comparison, the proposed GGM and CGM perform competitively with the Gaussian–Hermite moments in terms of noise robustness. Notably, the Gaussian–Hermite moments with which we compared our method used similar experimental setups; therefore, the comparison is fair.

Fig. 9. The first three columns are reconstructed images using Gaussian noise-contaminated images (mean: 0, variance: 0.01); the last three columns are reconstructed images using salt-and-pepper noise-contaminated images (3%), for GGM, CCM, LLM, CGM, GLM, CLM, and Gaussian–Hermite moments. The maximum order used is 255 for each algorithm. [Images omitted.]

Table 4. Comparative analysis of reconstruction errors using continuous orthogonal moments with different noise (all values are multiplied by 10⁻²).

Image      | Gaussian noise (mean: 0, variance: 0.01)  | Salt-and-pepper noise (3%)
           | GGM   CCM   LLM   CGM   GLM   CLM   GHM   | GGM   CCM   LLM   CGM   GLM   CLM   GHM
Baboon     | 3.309 3.395 3.353 3.337 3.400 3.380 3.353 | 4.398 4.550 4.454 4.419 4.615 4.482 4.521
Goldhill   | 2.541 2.594 2.584 2.542 2.566 2.588 2.563 | 3.890 4.184 3.021 3.949 3.232 3.035 4.015
Cameraman  | 3.134 3.180 3.147 3.146 3.207 3.164 3.179 | 2.543 2.711 2.648 2.630 2.804 2.702 2.703

The same noisy images are employed to verify the robustness of the proposed discrete orthogonal moments KHM, THM, TKM, TTM, KKM, and HHM against Gaussian and salt-and-pepper noise. Fig. 10 illustrates the reconstructed images, and Table 5 provides the corresponding numerical values of the mse. As can be seen from Table 5, the performance of the KKM moments is still better than that of the other moments, and both KKM and KHM of higher orders are also less sensitive to image noise.

Fig. 10. The first three columns are reconstructed images using Gaussian noise-contaminated images; the last three columns are reconstructed images using salt-and-pepper noise-contaminated images, for TTM, KKM, HHM, TKM, THM, and KHM. The maximum order used is 255 for each algorithm. [Images omitted.]

Table 5. Comparative analysis of reconstruction errors using discrete orthogonal moments with different noise (all values are multiplied by 10⁻³).

Image      | Gaussian noise (mean: 0, variance: 0.01)        | Salt-and-pepper noise (3%)
           | TTM    KKM    HHM    TKM    THM    KHM          | TTM    KKM    HHM    TKM    THM    KHM
Baboon     | 5.8679 5.3863 5.5525 5.4183 5.3863 5.4082       | 6.8996 6.3429 6.6681 6.5655 6.7877 6.4892
Goldhill   | 4.6757 4.5208 4.6348 4.6102 4.6626 4.5355       | 5.2154 4.9135 5.1146 5.0904 5.1806 5.0829
Cameraman  | 5.0760 4.6118 4.9646 4.7822 5.0073 4.7320       | 5.8277 5.2165 5.5390 5.3565 5.5991 5.3529

7.2. Local feature extraction

Another numerical experiment shows that when the parameters p, a, and b are set to the local feature extraction mode, the proposed moments can be utilized to capture the local information of an image.

Considering that Yap [8,10] and our previous work [11,32] have already addressed the influence of the polynomial parameters on local feature extraction, this discussion focuses on choosing parameters for the proposed bivariate orthogonal polynomials and moments. From Fig. 1, one can observe that the Krawtchouk polynomials move from left to right as p increases. Therefore, different choices of the parameter p cause the Krawtchouk moments to emphasize different regions of an image, and the reconstruction begins from such a region. The Hahn polynomials exhibit a similar characteristic. From Fig. 11, one can see that the local information of an image is emphasized at low orders if the parameters p, a, and b are well chosen. For example, if we set p=0.5, a=b=10 for KHM (Fig. 11(b)), the reconstruction begins from the center of the image; if we set p=0.9, a=0, b=100 for KHM (Fig. 11(j)), the reconstruction begins from the bottom-right corner of the image. Compared to the local feature extraction capability demonstrated in [8,11,32], where only a limited local region of an image is emphasized, the proposed moments can emphasize more regions of an image. In the current study, we can flexibly describe the local features of an image according to the actual demands by taking products of different discrete orthogonal polynomials in one variable. This behavior is also verified by the reconstructed images in Fig. 12.

Fig. 11. Reconstructed images using different moments up to order 35: (a) TKM (p=0.5), (b) KHM (p=0.5, a=b=10), (c) TKM (p=0.1), (d) TKM (p=0.9), (e) KTM (p=0.1), (f) KTM (p=0.9), (g) HKM (a=0, b=100, p=0.5), (h) KHM (a=0, b=100, p=0.5), (i) KHM (a=0, b=100, p=0.1), (j) KHM (a=0, b=100, p=0.9), (k) HKM (a=0, b=100, p=0.9), and (l) HKM (a=0, b=100, p=0.1). [Images omitted.]

Fig. 12. Reconstructed images (thresholded) up to order 25, with the same settings (a)–(l) as in Fig. 11. [Images omitted.]

7.3. Object recognition

The final experiment evaluates the discrimination performance of the proposed TKM invariants (TKMIs), THM invariants (THMIs), and KHM invariants (KHMIs) under rotation, scale, and translation transforms of textured images. The texture classification problem is normally divided into two steps: (i) feature extraction and (ii) classification. In this study, we use the following feature vector for the classification task:

$$V = [Q_{10}, Q_{21}, Q_{20}, Q_{11}, Q_{22}, Q_{30}], \qquad (61)$$

where $Q_{lm}$ denotes the $(l+m)$th-order invariant. The proposed TKMIs, THMIs, and KHMIs are obtained if $Q_{lm}$ is set as $\tilde{Q}^{tk}_{lm}$, $\tilde{Q}^{th}_{lm}$, or $\tilde{Q}^{kh}_{lm}$, respectively. The Euclidean distance between two textures in the feature space is used here as the classification metric and is defined by

$$d(V_s, V_t^{(k)}) = \sum_{j=1}^{T} (v_{sj} - v_{tj})^2, \qquad (62)$$

where $V_s$ is the $T$-dimensional feature vector of an unknown sample, and $V_t^{(k)}$ is the training vector of class $k$.
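A sketch of the resulting minimum-distance classifier; note that Eq. (62) as printed is the squared Euclidean distance, which yields the same nearest-neighbor decisions as the Euclidean distance itself. The feature vectors of Eq. (61) are assumed to be precomputed:

```python
import numpy as np

def classify(sample_features, training_features):
    """Assign a sample to the class k minimizing d(V_s, V_t^(k)), Eq. (62).
    training_features: dict mapping class label -> feature vector (Eq. 61)."""
    best_label, best_d = None, np.inf
    for label, vt in training_features.items():
        d = np.sum((np.asarray(sample_features) - np.asarray(vt)) ** 2)
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```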

Eight texture images with a resolution of 80×80 pixels were randomly selected from the photometric texture database [33] as the original image set (Fig. 13). The reason for choosing images from a public database is that they are freely available to other experimenters. In addition, this database is challenging because there are significant variations within some textures, and some of the textures are similar to each other (see images an2, an3, and an7 in Fig. 13).

Fig. 13. Texture images an1–an8 used as the training set for invariant image recognition (size: 80×80). [Images omitted.]

Each image was rotated from 0° to 360° in steps of 15° to form a set of 24 images, and these images were subsequently scaled by a factor varying from 0.6 to 1.4 at an increment of 0.2. This was followed by translations Δi, Δj = −5, −2, 2, 5 in both the horizontal and vertical directions. With this approach, we constructed a test set of 3840 images for the 8 texture classes. Fig. 14 shows some of the testing images. Next, all test images were degraded by adding salt-and-pepper noise with different noise densities or zero-mean Gaussian noise with different variances. Feature vectors based on the TKM, THM, and KHM moment invariants were used to classify these images, and the recognition accuracy is compared with that of the Hu moment invariants [29] and the complex moment invariants (CMIs) reported in [34]. Table 6 shows the classification results using the different methods under noise-free and noisy conditions. One can observe from this table that high recognition rates are obtained in the noise-free case. The proposed descriptors' recognition accuracy decreases with increasing noise level; however, the descriptors still perform well on noisy images, regardless of the noise type, and have better discriminative power.

Fig. 14. Part of the images in the testing set. [Images omitted.]

Table 6. Classification rates (%) of TKMIs, THMIs, and KHMIs in texture recognition.

Moment invariants | Noise-free | Salt-and-pepper noise      | Gaussian white noise
                  |            | 1%    2%    3%    4%       | σ²=0.01  σ²=0.02  σ²=0.04
Hu                | 97.51      | 60.16 51.29 48.81 37.53    | 53.09    52.58    40.01
CMIs              | 100        | 70.62 65.31 61.43 50.22    | 64.86    56.39    49.42
TKMIs             | 100        | 92.77 84.03 82.18 71.83    | 83.10    82.91    78.53
THMIs             | 100        | 95.60 91.13 87.32 72.42    | 86.46    86.00    83.48
KHMIs             | 100        | 90.13 81.44 78.83 69.95    | 82.20    81.83    78.34

8. Conclusions and discussion

Orthogonal polynomials in two variables have attracted considerable interest over the past several years. However, most of these studies focus purely on theoretical analysis and proofs; comparatively little attention has been paid to image processing and pattern recognition applications. We believe that studies of bivariate orthogonal polynomials in the mathematics community have provided the theoretical basis for constructing the basis functions of two-dimensional orthogonal moments.


This paper proposes several continuous and discrete orthogonal moments and subsequently applies these moments as pattern features in the analysis of two-dimensional noise-free and noisy images. We have studied the mean square error to determine the capabilities of the proposed moment functions. Comparative analyses show that the proposed moments, such as GGM, CGM, KHM, and TKM, have desirable image representation capability and can be useful in the field of image analysis. In texture classification experiments, the proposed three moment invariants exhibit better performance than the complex and Hu moment invariants.

The discretization step is the source of errors in the calculation of all continuous-domain moments, such as the Legendre and Zernike moments [6]. The computation of the proposed continuous orthogonal moments also faces this numerical accuracy problem. While this study does not discuss it in detail, we plan to address quantization and approximation errors in a separate paper.


Acknowledgments

The authors would like to thank the anonymous referees for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China under Grant no. 60975004.

References

[1] B. Bayraktar, T. Bernas, J.P. Robinson, B. Rajwa, A numerical recipe for accurate image reconstruction from discrete orthogonal moments, Pattern Recognition 40 (2007) 659–669.
[2] H. Shu, H. Zhang, B. Chen, P. Haigron, L. Luo, Fast computation of Tchebichef moments for binary and gray-scale images, IEEE Transactions on Image Processing 19 (2010) 3171–3180.
[3] X.-Y. Wang, Y.-P. Yang, H.-Y. Yang, Invariant image watermarking using multi-scale Harris detector and wavelet moments, Computers and Electrical Engineering 36 (2010) 31–44.
[4] H. Zhang, H. Shu, G. Han, G. Coatrieux, L. Luo, J.L. Coatrieux, Blurred image recognition by Legendre moment invariants, IEEE Transactions on Image Processing 19 (2010) 596–691.
[5] S.M. Lajevardi, Z.M. Hussain, Higher order orthogonal moments for invariant facial expression recognition, Digital Signal Processing 20 (2010) 1771–1779.
[6] M. Teague, Image analysis via the general theory of moments, Journal of the Optical Society of America 70 (1980) 920–930.
[7] R. Mukundan, S.H. Ong, P.A. Lee, Image analysis by Tchebichef moments, IEEE Transactions on Image Processing 10 (2001) 1357–1364.
[8] P.-T. Yap, R. Paramesran, S.-H. Ong, Image analysis by Krawtchouk moments, IEEE Transactions on Image Processing 12 (2003) 1367–1377.
[9] J. Zhou, H. Shu, H. Zhu, C. Toumoulin, L. Luo, Image analysis by discrete orthogonal Hahn moments, in: Proceedings of the Second International Conference on Image Analysis and Recognition, 2005, pp. 524–531.
[10] P.-T. Yap, R. Paramesran, S.-H. Ong, Image analysis using Hahn moments, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (2007) 2057–2062.
[11] H. Zhu, M. Liu, H. Shu, H. Zhang, L. Luo, General form for obtaining discrete orthogonal moments, IET Image Processing 4 (2010) 335–352.
[12] A. Erdelyi, W. Magnus, F. Oberhettinger, F.G. Tricomi, Higher Transcendental Functions, vol. 2, McGraw-Hill, New York, 1953.
[13] A.V. Nikiforov, S.K. Suslov, V.B. Uvarov, Classical Orthogonal Polynomials of a Discrete Variable, Springer, New York, 1991.
[14] R. Koekoek, R. Swarttouw, The Askey-Scheme of Hypergeometric Orthogonal Polynomials and Its q-Analogue, Report 98-17, Faculty of Technical Mathematics and Informatics, Delft University of Technology, Delft, 1998.
[15] S. Lewanowicz, P. Wozny, Two-variable orthogonal polynomials of big q-Jacobi type, Journal of Computational and Applied Mathematics 233 (2010) 1554–1561.
[16] L. Fernandez, T.E. Perez, M.A. Pinar, Orthogonal polynomials in two variables as solutions of higher order partial differential equations, Journal of Approximation Theory 163 (2011) 84–97.
[17] T. Koornwinder, Two-variable analogues of the classical orthogonal polynomials, in: Theory and Application of Special Functions, Proceedings of the Advanced Seminar (Madison: University of Wisconsin Press), Academic Press, 1975, pp. 435–495.
[18] M.A. Morales, L. Fernandez, T.E. Perez, M.A. Pinar, Bivariate orthogonal polynomials in the Lyskova class, Journal of Computational and Applied Mathematics 233 (2009) 597–601.
[19] L. Fernandez, T.E. Perez, M.A. Pinar, Y. Xu, Krall-type orthogonal polynomials in several variables, Journal of Computational and Applied Mathematics 233 (2010) 1519–1524.
[20] C.F. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Cambridge University Press, 2001.
[21] Y. Xu, Second order difference equations and discrete orthogonal polynomials of two variables, International Mathematics Research Notices 8 (2005) 449–475.
[22] Y. Xu, On discrete orthogonal polynomials of several variables, Advances in Applied Mathematics 33 (2004) 615–663.
[23] L. Fernandez, T.E. Perez, M.A. Pinar, Second order partial differential equations for gradients of orthogonal polynomials in two variables, Journal of Computational and Applied Mathematics 199 (2007) 113–121.
[24] H.S.P. Shrivastava, On multiindex multivariable Jacobi polynomials, Integral Transforms and Special Functions 13 (2002) 417–445.
[25] D.D. Lee, H.S. Seung, Learning the parts of objects by non-negative matrix factorization, Nature 401 (1999) 788–791.
[26] T. Zhang, B. Fang, Y.Y. Tang, G. He, J. Wen, Topology preserving non-negative matrix factorization for face recognition, IEEE Transactions on Image Processing 17 (2008) 574–584.
[27] L. Li, C.S. Tong, S.K. Choy, Texture classification using refined histogram, IEEE Transactions on Image Processing 19 (2010) 1371–1378.
[28] X. Liu, D. Wang, Texture classification using spectral histograms, IEEE Transactions on Image Processing 12 (2003) 661–670.
[29] M.K. Hu, Visual pattern recognition by moment invariants, IRE Transactions on Information Theory IT-8 (1962) 179–187.
[30] http://links.uwaterloo.ca/Repository.html
[31] B. Yang, M. Dai, Image analysis by Gaussian–Hermite moments, Signal Processing 91 (2011) 2290–2303.
[32] H. Zhu, H. Shu, J. Zhou, L. Luo, J.-L. Coatrieux, Image analysis by discrete orthogonal dual Hahn moments, Pattern Recognition Letters 28 (2007) 1688–1704.
[33] http://www.taurusstudio.net/research/pmtexdb/sample.htm
[34] F. Ghorbel, S. Derrode, R. Mezhoud, T. Bannour, S. Dhahbi, Image reconstruction from a complete set of similarity invariants extracted from complex moments, Pattern Recognition Letters 27 (2006) 1361–1369.

Hongqing Zhu obtained her Ph.D. in 2000 from Shanghai Jiao Tong University. From 2003 to 2005, she was a postdoctoral fellow in the Department of Biology and Medical Engineering at Southeast University. Currently, she is an associate professor at East China University of Science and Technology. Her current research interests include signal processing, image reconstruction, image segmentation, image compression, and pattern recognition.