A smoothness constraint set based on local statistics
of BDCT coefficients for image postprocessing
Xiangchao Gana,*, Alan Wee-Chung Liewb, Hong Yana,c
aDepartment of Computer Engineering and Information Technology, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong, China
bDepartment of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong, China
cSchool of Electrical and Information Engineering, University of Sydney, Sydney, NSW 2006, Australia
Received 11 February 2004; received in revised form 4 April 2005; accepted 5 May 2005
Abstract
In blocking-artifact reduction based on the projection onto convex sets (POCS) technique, good constraint sets are very important. Until recently, smoothness constraint sets (SCS) were often formulated in the image domain, whereas the quantization constraint set is defined in the block-based discrete cosine transform (BDCT) domain. Thus, frequent BDCT transforms are inevitable in the alternating projections. In this paper, based on signal and quantization noise statistics, we propose a novel smoothness constraint set in the BDCT domain via the Wiener filtering concept. Experiments show that POCS using this smoothness constraint set not only has good convergence but also has better objective and subjective performance. Moreover, this set can be used as an extra constraint set to improve most existing POCS-based image postprocessing methods.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Projection onto convex sets; Postprocessing methods; BDCT
1. Introduction
The block-based discrete cosine transform (BDCT) [1]
has been used widely in image and video compression. To
reduce the bit-rate, the coefficients of BDCT are often
quantized. At low bit rate, this causes annoying blocking
artifacts in the decoded image. Recently, several post-
processing methods have been proposed to alleviate
blocking artifacts. Postprocessing techniques are attractive
because they are independent of coding schemes and can be
applied to commonly used JPEG [1], H.263, and MPEG
compression standards.
The approach based on the theory of POCS has a major
advantage in that it can exploit the a priori knowledge about
the image. If the convex constraint sets associated with the
image information can be found, the POCS algorithm with
corresponding projectors will converge to the intersection of
0262-8856/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.imavis.2005.05.001
* Corresponding author. Tel.: +852 2788 9895; fax: +852 2784 4262.
E-mail address: [email protected] (X. Gan).
all the constraint sets. In the past, various constraint sets
have been proposed. Generally, these constraint sets can be
classified into two categories. One is the quantization
constraint set (QCS) [2,3], and the other is the smoothness
constraint set (SCS) [4,5]. However, most SCS are
implemented in the image domain, whereas QCS are
defined in the BDCT domain. Therefore, a BDCT transform
of the whole image is needed in each iteration. This incurs
high computational cost. Although there are some filtering
methods available that work in BDCT domain [6], they are
not POCS-based and it is difficult to incorporate new a priori
knowledge.
In this paper, we proposed a new SCS, which is
defined in the BDCT domain. The new SCS is derived
from signal and quantization noise statistics and uses
a least mean square formulation based on the Wiener
filter. Experiments show that POCS using this SCS
not only has faster convergence but also has better
objective and subjective performance. Moreover, this new
SCS can be used as a new constraint set to improve most
of the available POCS-based image postprocessing
algorithms.
Image and Vision Computing 23 (2005) 731–737
www.elsevier.com/locate/imavis
2. Mathematical background
2.1. POCS-based image reconstruction
In POCS-based image post-processing [7], every known
a priori property about the original image can be formulated
as a corresponding convex set in a Hilbert space H. Given $n$ closed convex sets $C_i$, $i = 1, 2, \ldots, n$, with $C_0 = \bigcap_{i=1}^{n} C_i$ nonempty, the iteration

$x_{k+1} = P_n P_{n-1} \cdots P_1 x_k, \quad k = 0, 1, 2, \ldots$   (1)

where $P_i$ is the projector onto $C_i$ defined by

$\| x - P_i x \| = \min_{g \in C_i} \| x - g \|$   (2)

and $g$ is the projection of $x$ onto $C_i$, will converge to a point in $C_0$ for any initial $x_0$.
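The alternating-projection iteration of Eqs. (1) and (2) can be illustrated with a toy example in R^2. This is only a sketch with two hypothetical convex sets (a half-plane and a disc), not the paper's constraint sets:

```python
import numpy as np

# Toy POCS illustration: alternate projections onto two convex sets in R^2,
# C1 = {x : x[0] >= 1} (half-plane) and C2 = {x : ||x|| <= 2} (disc).
# The iterates x_{k+1} = P2 P1 x_k converge to a point in C1 ∩ C2.

def p1(x):                        # projector onto the half-plane x[0] >= 1
    return np.array([max(x[0], 1.0), x[1]])

def p2(x):                        # projector onto the disc ||x|| <= 2
    n = np.linalg.norm(x)
    return x if n <= 2.0 else 2.0 * x / n

x = np.array([-3.0, 5.0])         # arbitrary starting point x0
for _ in range(100):              # x_{k+1} = P2 P1 x_k
    x = p2(p1(x))
```

After the loop, `x` lies (to numerical precision) in the intersection of both sets.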
2.2. The mathematical model of image deblocking problem
Throughout this paper we use the following conventions: a real $N \times N$ image $x$ is treated as an $N^2 \times 1$ vector in the space $R^{N^2}$ by lexicographic ordering of either rows or columns. The 2D BDCT can then be expressed as

$X = Tx, \qquad x = T^{-1} X$   (3)

where $X$ is the vector of BDCT coefficients of $x$ and $T$ is the BDCT transform matrix.
In order to lower the bit-rate, $X$ is quantized. Let $Q$ denote the quantization process. The BDCT coefficients after lossy quantization can be denoted by

$Y = QTx$   (4)

The decoded image with blocking artifacts is given by $y = T^{-1} Q T x$. If a uniform scalar quantizer is used, then $Y$ can be expressed as

$Y = X + n$   (5)

where $n$ is the additive zero-mean noise introduced by the quantizer, which contributes to the blocking artifacts in the encoded image. The POCS deblocking problem thus involves the estimation of $X$ from $Y$ using the available information about the quantizer and the image, formulated as convex constraint sets.
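The degradation model $Y = QTx$ of Eqs. (4) and (5) can be sketched as a blockwise DCT, uniform scalar quantization, inverse-DCT pipeline. The function names, the default stepsize and the use of a single uniform stepsize for all coefficients are assumptions of this sketch:

```python
import numpy as np
from scipy.fft import dct, idct

# Sketch of the degradation model y = T^{-1} Q T x for 8x8 blocks and a
# uniform scalar quantizer with stepsize q.

def bdct2(block):   return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
def ibdct2(block):  return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def encode_decode(x, q=32.0, B=8):
    """Return the blocky image y = T^{-1} Q T x."""
    y = np.empty_like(x)
    for i in range(0, x.shape[0], B):
        for j in range(0, x.shape[1], B):
            X = bdct2(x[i:i+B, j:j+B])
            Y = q * np.round(X / q)          # uniform scalar quantization
            y[i:i+B, j:j+B] = ibdct2(Y)
    return y
```

With a large stepsize `q`, the output exhibits the block-boundary discontinuities that the rest of the paper sets out to remove.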
3. Proposed postprocessing technique
It is well known that the least mean square error solution
for Eq. (5) is Wiener filtering. Specifically, the locally
adaptive Wiener filter [8], which is capable of tracking the
signal and noise characteristics over different image regions,
can be used to estimate the true BDCT coefficients by
$\hat{X}_i = \bar{X}_i + \frac{\sigma_{X_i}^2}{\sigma_{X_i}^2 + \sigma_{n_i}^2} \left( Y_i - \bar{X}_i \right)$   (6)

where $\bar{X}$ is the a priori mean of $X$, and $X$, $\bar{X}$ and $Y$ are treated as $N^2 \times 1$ vectors in the space $R^{N^2}$ by lexicographic ordering of either rows or columns of their 2D versions. By defining two diagonal matrices $R$ and $M$,

$R = \mathrm{diag}\left( \frac{1}{\sigma_{n_1}^2}, \frac{1}{\sigma_{n_2}^2}, \ldots, \frac{1}{\sigma_{n_{N \cdot N}}^2} \right), \qquad M = \mathrm{diag}\left( \frac{1}{\sigma_{X_1}^2}, \frac{1}{\sigma_{X_2}^2}, \ldots, \frac{1}{\sigma_{X_{N \cdot N}}^2} \right)$   (7)

Eq. (6) can be written in the matrix form

$\hat{X} = \bar{X} + R (M + R)^{-1} (Y - \bar{X})$   (8)
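Since $M$ and $R$ are diagonal, Eq. (8) reduces to the elementwise shrinkage of Eq. (6). A minimal sketch, assuming the per-coefficient statistics $\bar{X}$, $\sigma_X^2$ and $\sigma_n^2$ have already been estimated (as in Section 3.2):

```python
import numpy as np

# Elementwise form of Eq. (8): X_hat = X_bar + sig2_X/(sig2_X + sig2_n)*(Y - X_bar).
# Y, X_bar, sig2_X and sig2_n are arrays of identical shape.

def wiener_dct(Y, X_bar, sig2_X, sig2_n):
    gain = sig2_X / (sig2_X + sig2_n + 1e-12)   # small epsilon avoids 0/0 in flat regions
    return X_bar + gain * (Y - X_bar)
```

When the noise variance is zero the estimate keeps the observation; when the signal variance is zero it falls back to the local mean.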
Although adaptive Wiener filtering is effective for image deblocking, it has an apparent shortcoming. If we transform and quantize the output image, the result does not equal the original quantized coefficients. This clearly violates our information about the original image. In [9], a method combining low-pass filtering and the QCS was proposed to force the output of the low-pass filter to conform to the quantization information. However, in [10], this method was proved to be non-convergent unless an ideal low-pass filter is used, and an ideal low-pass filter is impossible to realize.
To solve this problem, we propose a new SCS to replace the filtering step. It is derived from a least mean square formulation based on the Wiener filter and thus retains its effectiveness in image deblocking.

In fact, Eq. (8) is the solution to the following regularization problem [6]:

$J = (X - \bar{X})^t M (X - \bar{X}) + (Y - X)^t R (Y - X)$   (9)
The first term in Eq. (9) accounts for image smoothness, whereas the second term ensures image fidelity. According to the POCS theory, we do not need to obtain the solution minimizing Eq. (9), which clearly depends on our estimates of $\bar{X}$, $M$ and $R$. Instead, we only need to find a set that includes the original image. A reasonable choice is to limit $J$ with a threshold value $E_a$, that is $|J| \le E_a$, where $E_a$ is a value larger than but close enough to the minimum of $J$ so that all the images satisfying the condition are smooth and faithful to the original image. Unfortunately, $|J| \le E_a$ is not convex. Instead, we approximate it by imposing constraints on each term in (9),

$C_w = \{ X : (X - \bar{X})^t M (X - \bar{X}) \le E_w^2 \}$   (10)

$C_r = \{ X : (X - Y)^t R (X - Y) \le E_r \}$   (11)
3.1. The constraint set Cw
For a 512×512 image and 8×8 blocks, $X$ is a $512 \cdot 512 \times 1$ vector and $M$ is a $(512 \cdot 512) \times (512 \cdot 512)$ diagonal matrix. If we define a new $(512 \cdot 512) \times (512 \cdot 512)$ diagonal matrix $W$ with elements

$\omega_i = m_i^{1/2}, \quad i = 1, 2, \ldots, 512 \times 512$   (12)

then Eq. (10) can be rewritten as

$C_w = \{ X : \| W (X - \bar{X}) \| \le E_w \}$   (13)

It is straightforward to show that $C_w$ is both convex and closed. Before describing how $\omega_i$ and $\bar{X}$ are obtained, we examine the projection onto $C_w$ in detail. Using the Lagrange multiplier method, we can obtain the projection

$\hat{X}_i = \frac{X_i + \lambda \omega_i^2 \bar{X}_i}{1 + \lambda \omega_i^2}$   (14)

where $\lambda$ is the only positive root of the equation

$\sum_{i=1}^{512 \times 512} \frac{\omega_i^2 (X_i - \bar{X}_i)^2}{(1 + \lambda \omega_i^2)^2} = E_w^2$   (15)

The parameter $\lambda$ can be computed as follows [4]. Let

$\psi(\lambda) = \sum_{i=1}^{512 \times 512} \frac{\omega_i^2 (X_i - \bar{X}_i)^2}{(1 + \lambda \omega_i^2)^2} - E_w^2$   (16)

be such that $\psi(0) > 0$. Then, with $\lambda_0 = 0$, the iterates generated by Newton's method

$\lambda_{k+1} = \lambda_k - \frac{\psi(\lambda_k)}{\psi'(\lambda_k)}, \quad k = 0, 1, 2, \ldots$   (17)

will converge increasingly to $\lambda^+$, the unique positive root of $\psi(\lambda) = 0$. In other words, $\lambda_k < \lambda_{k+1} < \lambda^+$ for every $k$.
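The Newton iteration of Eqs. (15)-(17) can be sketched as follows. The stopping tolerance and iteration cap are our choices, not the paper's:

```python
import numpy as np

# Newton iteration of Eq. (17) for the Lagrange multiplier lambda, with
# psi(l) = sum_i w_i^2 (X_i - Xbar_i)^2 / (1 + l*w_i^2)^2 - Ew^2  (Eq. (16)).
# Assumes psi(0) > 0, i.e. X lies outside C_w and a projection is needed.

def solve_lambda(X, X_bar, w, Ew, tol=1e-10, max_iter=100):
    a = (w * (X - X_bar)) ** 2          # w_i^2 (X_i - Xbar_i)^2
    w2 = w ** 2
    l = 0.0                             # lambda_0 = 0
    for _ in range(max_iter):
        d = 1.0 + l * w2
        psi = np.sum(a / d ** 2) - Ew ** 2
        dpsi = np.sum(-2.0 * a * w2 / d ** 3)
        l_new = l - psi / dpsi          # Newton step, Eq. (17)
        if abs(l_new - l) < tol:
            return l_new
        l = l_new
    return l
```

The resulting `l` can be substituted into Eq. (14) to obtain the projection onto $C_w$; by construction the projected point then satisfies the equality in Eq. (15).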
3.2. The computation of $\bar{X}$, $W$ and $E_w$

Both the mean and variance of $X_i$ can be estimated from $Y_i$. Since $n_i$ is assumed zero mean and uncorrelated with $X$, we can deduce the following from Eq. (5):

$\bar{X} = \bar{Y}$   (18)

$\sigma_{X_i}^2 = \sigma_{Y_i}^2 - \sigma_{n_i}^2$   (19)

By assuming a uniform scalar quantizer, the quantization error $n_i$ has a uniform pdf with $\sigma_{n_i}^2 = q_i^2 / 12$, where $q_i$ is the known stepsize of the corresponding quantizer applied to $X_i$.

Taking advantage of the local image smoothness, we can use the 'local' mean and variance of $Y$ as approximations in a practical implementation [6]. Let $y^{\langle m,n \rangle}$ denote $y$ shifted in the image domain by $(m, n)$ and $T y^{\langle m,n \rangle} = Y^{\langle m,n \rangle}$; then

$\bar{Y}_i = \frac{1}{(2L+1)^2} \sum_{m=-L}^{L} \sum_{n=-L}^{L} Y_i^{\langle m,n \rangle}$   (20)

$\sigma_{Y_i}^2 = \frac{1}{(2L+1)^2} \sum_{m=-L}^{L} \sum_{n=-L}^{L} \left( \bar{Y}_i - Y_i^{\langle m,n \rangle} \right)^2$   (21)

where $L$ determines the window size. Since $\sigma_{X_i}^2 \ge 0$, $\sigma_{X_i}^2$ is determined as $\max\{ 0, \sigma_{Y_i}^2 - \sigma_{n_i}^2 \}$.
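Eqs. (18)-(21) amount to moving-window statistics over each coefficient field. A sketch using scipy's `uniform_filter` as the $(2L+1) \times (2L+1)$ averaging window; arranging the shifted-transform coefficients into one array `Ycoef` is an assumption of this sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Local mean and variance, Eqs. (20)-(21), via a (2L+1)x(2L+1) moving average.
# Ycoef: per-position array of DCT coefficients of the shifted images (assumed
# precomputed); q: quantization stepsize array of the same shape.

def local_stats(Ycoef, q, L=1):
    size = 2 * L + 1
    mean = uniform_filter(Ycoef, size=size)                     # Eq. (20)
    var = uniform_filter(Ycoef ** 2, size=size) - mean ** 2     # Eq. (21)
    var = np.maximum(var, 0.0)                                  # guard numerics
    sig2_n = q ** 2 / 12.0                  # uniform-quantizer noise variance
    sig2_X = np.maximum(0.0, var - sig2_n)  # Eq. (19), clipped at zero
    return mean, sig2_X, sig2_n
```

On a constant region the signal variance correctly collapses to zero, so the Wiener gain vanishes and the estimate is the local mean.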
Now that $\bar{X}$ and $\sigma_{X_i}^2$ are available, we get $W$ according to Eqs. (7) and (12) as follows:

$\omega_i = \frac{1}{\sigma_{X_i}}$   (22)

To avoid the mathematical difficulties associated with $\sigma_{X_i} = 0$, a reasonable compressed form of this function is given by [11,12]

$\omega_i = \ln\left( 1 + \frac{1}{1 + \sigma_{X_i}} \right)$   (23)

To implement the projector of Eq. (14), $E_w$ is also an important parameter. Since $\bar{X}$ is unknown and we obtain it from $\bar{X} = \bar{Y}$, based on Eqs. (10) and (18), $E_w$ is in effect a threshold on the Mahalanobis distance between the original image and the local mean of the blocky image. Obviously, the bigger the quantization stepsize, the larger this distance. So we calculate $E_w$ from the quantization error variance as

$S = \frac{1}{64} \sum_{i=1}^{64} \sigma_{n_i}^2 = \frac{1}{64} \sum_{i=1}^{64} \frac{q_i^2}{12}$   (24)

$E_w = k \sqrt{S}$   (25)

where $k$ is a weighting scale. We find that $k = 0.5$ is a good choice in many cases and use it in all our simulation experiments.
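Eqs. (24) and (25) can be computed directly from the 8×8 quantization table:

```python
import numpy as np

# Ew = k * sqrt(S), Eqs. (24)-(25), with S the mean quantization-error
# variance over the 64 coefficients of an 8x8 quantization table.

def smoothness_threshold(qtable, k=0.5):
    S = np.mean(np.asarray(qtable, dtype=float) ** 2 / 12.0)
    return k * np.sqrt(S)
```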
3.3. The constraint set Cr and the image recovery algorithm
Since $n_i$ is assumed zero mean, we obtain $E\{ (X_i - Y_i)^2 \} = \sigma_{n_i}^2$ based on Eq. (5). Then $(Y - X)^t R (Y - X)$ can be viewed as the Mahalanobis distance between the recovered image and the blocky image. So the constraint set $C_r$ guarantees the fidelity of the recovered image. In the POCS theory, this is realized by the QCS. Hence, $C_r$ is replaced by the QCS as follows:
$C_Q = \{ x : F'_i - \Delta_i \le (Tx)_i \le F'_i + \Delta_i \}, \quad i = 1, 2, \ldots, N^2$   (26)

where $F'_i$ is the quantized BDCT coefficient and $\Delta_i$ is half of the corresponding quantization table coefficient. The projector for the QCS is

$\hat{X}_i = \begin{cases} F'_i - \Delta_i & \text{if } (Tx)_i < F'_i - \Delta_i \\ F'_i + \Delta_i & \text{if } (Tx)_i > F'_i + \Delta_i \\ (Tx)_i & \text{otherwise} \end{cases}$   (27)
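The QCS projector of Eq. (27) is a per-coefficient clip into the known quantization interval:

```python
import numpy as np

# Projector onto the quantization constraint set, Eq. (27): clip each DCT
# coefficient back into its interval [F' - D, F' + D].

def project_qcs(X, F0, D):
    return np.clip(X, F0 - D, F0 + D)
```

Coefficients already inside their interval are left unchanged, which is exactly the third case of Eq. (27).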
Besides the sets defined previously, another set $C_p$ is used to capture the range of the pixel intensities of the image:

$C_p = \{ x : 0 \le x_{i,j} \le 255,\ 1 \le i, j \le N \}$   (28)
Using all the convex sets defined above, the POCS theory yields the following recovery algorithm:

1. Set $x_0 = y$.
2. For $k = 1, 2, \ldots$, compute $x_k = P_Q P_w P_p x_{k-1}$, where $P_Q$, $P_w$, $P_p$ denote the projectors onto the constraint sets $C_Q$, $C_w$, $C_p$, respectively.
3. If $x_k = x_{k-1}$, exit the iteration; else go to step 2.

It should be pointed out that we require $x_k = x_{k-1}$, i.e. $P_Q P_w P_p x_{k-1} = x_{k-1}$, to be satisfied as the convergence measure, not just $\| f_k - f_{k-1} \| \le \delta$ as in [4]. This is a stricter measure.
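The three-projector loop of steps 1-3 above can be sketched generically, with the projector bodies (Eqs. (14), (27) and (28)) passed in as callables; the `max_iter` safeguard is our addition, not part of the paper's algorithm:

```python
import numpy as np

# Skeleton of the recovery iteration x_k = P_Q P_w P_p x_{k-1}, stopping
# when x_k equals x_{k-1} (the paper's strict convergence measure).

def pocs_deblock(y, P_Q, P_w, P_p, max_iter=20, tol=0.0):
    x = y
    for _ in range(max_iter):
        x_new = P_Q(P_w(P_p(x)))
        if np.max(np.abs(x_new - x)) <= tol:   # x_k == x_{k-1}
            return x_new
        x = x_new
    return x
```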
4. Spatially-adaptive algorithm based on human
visual system
So far, our constraint set is based only on a least square error measure, without considering visual characteristics. In order to improve the visual quality of the method, we modify Eqs. (20) and (21) to incorporate properties of the human visual system (HVS) [13]. Due to the masking effect in the HVS, artifacts are more visible in smooth areas than in non-regular areas. To account for the masking effect, a block-based classification method is necessary. Since our algorithm is implemented entirely in the BDCT domain, we require that the classification be realized in the BDCT domain. Fortunately, the method in [14] can be adopted.
Fig. 1. Block classification using HVS: (a) JPEG-coded image (0.24bpp).
(b) The result of block classification.
4.1. Block-region classification based on HVS
In [15], Ngan et al. proposed an HVS sensitivity function to measure the relative sensitivity of the eyes at different frequencies:

$\bar{H}(\omega) = |A(\omega)| H(\omega)$   (29)

In this equation, the modulation transfer function (MTF) $H(\omega)$ is given by

$H(\omega) = (0.31 + 0.69\omega) \exp(-0.29\omega)$   (30)

which has a peak at $\omega = 3$ cycles per degree (cpd), and the multiplicative function $A(\omega)$ is defined as

$A(\omega) = \left\{ \frac{1}{4} + \frac{1}{\pi^2} \left[ \ln\left( \frac{2\pi\omega}{\sigma} + \sqrt{\frac{4\pi^2\omega^2}{\sigma^2} + 1} \right) \right]^2 \right\}^{1/2}$   (31)

where $\sigma = 11.636$ degree$^{-1}$.
We can relate the DCT coefficients to the radial frequency $\omega$ to improve the computational efficiency. In [16], the conversion

$\omega = \omega_d \times \omega_s$   (32)

was proposed, where $\omega_d = \sqrt{u^2 + v^2} / 2N$ for $u, v = 0, \ldots, N-1$, and $\omega_s$ in pixels/degree is the sampling density, which depends on the viewing distance. For 512×512 images, $\omega_s = 64$ pixels/degree is suggested in [16].
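Eqs. (29)-(32) can be evaluated directly. The function names below are ours, and the "+1" inside the square root follows the standard arcsinh form of Ngan et al.'s $A(\omega)$:

```python
import numpy as np

# HVS weighting of Eqs. (29)-(32): MTF H(w), multiplicative term A(w), and
# the DCT-index-to-radial-frequency conversion w = w_d * w_s.

SIGMA = 11.636   # degree^-1, Eq. (31)

def mtf(w):                                  # Eq. (30); peaks at w = 3 cpd
    return (0.31 + 0.69 * w) * np.exp(-0.29 * w)

def a_term(w):                               # Eq. (31)
    t = 2.0 * np.pi * w / SIGMA
    return np.sqrt(0.25 + np.log(t + np.sqrt(t * t + 1.0)) ** 2 / np.pi ** 2)

def radial_freq(u, v, N=8, ws=64.0):         # Eq. (32), ws in pixels/degree
    return np.sqrt(u * u + v * v) / (2.0 * N) * ws

def hvs_sensitivity(u, v, N=8, ws=64.0):     # Eq. (29)
    w = radial_freq(u, v, N, ws)
    return np.abs(a_term(w)) * mtf(w)
```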
Now, we use the HVS sensitivity function to weight the ac BDCT coefficients to approximate the amount of masking effect as follows:

$\tilde{B}_{m,n}(u, v) = \bar{H}^{-1}(\omega) B_{m,n}(u, v)$   (33)

In the BDCT domain, the smoothness of a block can be estimated from its ac energy. We define the smoothness of a block $b_{m,n}(i, j)$ as

$A_{m,n} = \frac{1}{B_{m,n}(0, 0)} \left[ \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \tilde{B}_{m,n}(u, v)^2 - \tilde{B}_{m,n}(0, 0)^2 \right]$   (34)

With the assistance of $A_{m,n}$, blocks in the image can be classified into either the smooth or the non-regular class, and the different classes of blocks are processed in different ways. Fig. 1(b) shows the block classification result for 'Lena'.
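Eqs. (33)-(34) reduce to a weighted ac-energy measure per block. A sketch, where the classification threshold is purely illustrative and not taken from the paper:

```python
import numpy as np

# Block smoothness of Eq. (34): HVS-weighted ac energy normalized by the dc
# coefficient, then thresholded into smooth / non-regular classes.
# `weights` is the 8x8 array of HVS sensitivities (Eqs. (29)-(32)).

def block_activity(B, weights):
    Bt = B / weights                          # Eq. (33): inverse weighting
    ac_energy = np.sum(Bt ** 2) - Bt[0, 0] ** 2
    return ac_energy / B[0, 0]

def classify_block(B, weights, thresh=10.0):  # thresh is an assumption here
    return 'smooth' if block_activity(B, weights) < thresh else 'non-regular'
```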
4.2. Adaptive windows
The postprocessing can be made adaptive to the block smoothness. In our algorithm, this is easily implemented by using different window sizes in Eqs. (20) and (21). For a smooth block, where the blocking artifacts are visually more discernible, a 7×7 window (i.e.
Table 1
Objective quality evaluation of the images reconstructed with various algorithms (PSNR in dB)

Test image  JPEG    Rosenholtz [9]  Yang [4]  Paek [5]  Kim [17]  Proposed (no HVS)  Proposed (with HVS)
Bridge      29.805  29.308          30.484    30.568    30.621    31.024             30.998
Peppers     30.672  30.244          31.187    31.363    31.412    31.523             31.587
Goldhill    28.883  28.523          29.296    29.280    29.353    29.537             29.522
Lena        30.700  30.130          31.310    31.472    31.552    31.662             31.705
Man         28.435  28.056          28.884    28.916    28.922    29.173             29.145
Tank        27.998  27.891          28.366    28.334    28.406    28.618             28.608
Zelda       31.142  31.869          32.850    33.119    33.102    33.246             33.415
Boat        29.171  28.464          29.676    29.667    29.685    29.861             29.817
L=3) is used. On the other hand, for non-regular blocks, we use a 3×3 window (i.e. L=1). The smaller window for non-regular blocks avoids blurring image details.
5. Performance evaluation

We present some experimental results to evaluate the performance of the proposed recovery algorithm. A number of de facto standard 256-gray-level test images of size 512×512 are used. The decoded images, with visible blocking artifacts, are obtained by JPEG compression with the quantization table shown in the Appendix.
Fig. 2. Deblocking results of the 'Lena' image. (a) Original image, (b) JPEG-coded image (0.24 bpp), (c) Rosenholtz's algorithm, (d) Yang's algorithm, (e) Paek's algorithm, (f) Kim's algorithm and (g) Proposed algorithm.

For comparative studies, several popular deblocking algorithms reported in the literature, namely, (a) Rosenholtz's algorithm [9], (b) Yang's algorithm [4], (c) Paek's algorithm [5] and (d) Kim's algorithm [17], are implemented and compared. As a measure of reconstructed image quality, the peak signal-to-noise ratio (PSNR) in dB is used, defined as follows:

$\mathrm{PSNR} = 10 \log_{10} \left[ \frac{255^2}{\frac{1}{M_s} \sum_{i=1}^{M_s} (x_i - y_i)^2} \right]$   (35)

where $x_i$ and $y_i$ denote the original and reconstructed pixels, respectively, and $M_s$ is the number of pixels in a coded
Fig. 3. Deblocking results of the ‘Peppers’ image. (a) Paek’s algorithm and
(b) Proposed algorithm.
Fig. 5. PSNR performance variations on the 'Lena' image (0.24 bpp). Dashed line: Yang's algorithm. Dotted line: Paek's algorithm. Dash-dot line: Kim's algorithm. Solid line: the proposed algorithm.
image. The objective results based on PSNR are shown in Table 1. The proposed SCS without the HVS model is also provided. Since Rosenholtz's algorithm is non-convergent, we list the results of the four methods after 10 iterations. It can be seen from Table 1 that the proposed method is superior in the objective quality measure.
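The PSNR of Eq. (35) used throughout Table 1 can be computed as:

```python
import numpy as np

# PSNR of Eq. (35) between the original image x and a reconstruction y,
# for 8-bit images (peak value 255).

def psnr(x, y):
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```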
In Fig. 2, an enlarged portion of 'Lena' is shown for subjective comparison. It can be seen that Rosenholtz's algorithm generally produces a blurred image due to the low-pass filtering operation, and Yang's algorithm still contains significant blockiness in smooth regions. Around the shoulder region, our method is considerably freer of artifacts. Since Paek's algorithm is a typical deblocking algorithm that usually gives good visual results, we also compare it with our method using 'Peppers' and 'Zelda' in Figs. 3 and 4. It is evident that the proposed algorithm provides better visual performance, especially in smooth areas.
We also tested the convergence characteristics of our algorithm. The PSNR variation with the iteration number on the 'Lena' image is shown in Fig. 5. It is observed that the proposed method converges within 2 iterations. Note that the condition of convergence is $x_k = x_{k-1}$. Under this measure, Paek's, Yang's and Kim's algorithms all need a few more iterations. In our experiments, all images listed in Table 1 need only 2-3 iterations to converge.
Another major advantage of the proposed algorithm is
that it operates solely in the transform domain, unlike the
POCS-based algorithms we compared, which alternate
between the image domain and the transform domain. So,
it has much lower computation cost. This is particularly
important for real-time video processing.
Fig. 4. Deblocking results of the ‘Zelda’ image. (a) Paek’s algorithm and (b)
Proposed algorithm.
The new SCS can also be used as an additional constraint set to improve most of the available POCS-based deblocking methods. Fig. 6 shows the improvement obtained by adding our SCS to Rosenholtz's algorithm. The initial 'Lena' image is coded at 0.24 bpp with the above quantization table. For Yang's and Paek's algorithms, the PSNR improvements are 0.324 and 0.102 dB, respectively. Such improvements are hardly possible for a BDCT-domain filtering algorithm [6].
6. Conclusions
In this paper, a POCS-based deblocking algorithm
utilizing a new smoothness constraint set is proposed. The
new smoothness constraint set is constructed based on
the local statistics of the BDCT transform coefficients and
the probability density function (pdf) of the quantizer.
The proposed method has been shown to give superior
performance in comparison to several well-known POCS-
based deblocking algorithms. Since no BDCT transform is
needed in each POCS iteration, it has a low computational
Fig. 6. Improvement of Rosenholtz’s algorithm with the addition of the new
SCS. Dotted line: Rosenholtz’s algorithm. Solid line: adding the new SCS.
cost. Moreover, the new SCS can also be used as an
additional smoothness constraint set to improve most
existing POCS-based deblocking algorithms.
Acknowledgements
This work is supported by a strategic research grant from
City University of Hong Kong (Project 7001556).
Appendix. Quantization Table for BDCT
50 60 70 70 90 120 255 255
60 60 70 96 130 255 255 255
70 70 80 120 200 255 255 255
70 96 120 145 255 255 255 255
90 130 200 255 255 255 255 255
120 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255
255 255 255 255 255 255 255 255
References
[1] W.B. Pennebaker, J.L. Mitchell, JPEG Still Image Data Compression
Standard, Van Nostrand Reinhold, New York, 1993.
[2] S.H. Park, D.S. Kim, Theory of projection onto the narrow
quantization constraint set and its application, IEEE Trans. Image
Processing 8 (10) (1999) 1361–1373.
[3] Y. Jeong, I. Kim, H. Kang, A practical projection-based postproces-
sing of block-coded images with fast convergence rate, IEEE Trans.
Circuits Syst. Video Technol. 10 (2000) 617–623.
[4] Y. Yang, N.P. Galatsanos, A.K. Katsaggelos, Regularized reconstruc-
tion to reduce blocking artifacts of block discrete cosine transform
compressed images, IEEE Trans. Circuits Syst. Video Technol. 3
(1993) 421–432.
[5] H. Paek, R.C. Kim, S.U. Lee, On the POCS-based postprocessing
techniques to reduce the blocking artifacts in transform coded images,
IEEE Trans. Circuits Syst. Video Technol. 8 (1998) 358–367.
[6] S.S.O. Choy, Y.H. Chan, W.C. Siu, Reduction of block-transform
image coding artifacts by using local statistics of transform
coefficients, IEEE Signal. Processing Lett. 4 (1) (1997) 5–7.
[7] H. Stark, Y. Yang, Vector Space Projections, A Numerical Approach
to Signal and Image Processing, Neural Nets, and Optics, Wiley, New
York, 1998.
[8] J.S. Lim, Two-Dimensional Signal and Image Processing, Prentice
Hall, New Jersey, 1990. pp. 536–540.
[9] R. Rosenholtz, A. Zakhor, Iterative procedures for reduction of
blocking effects in transform image coding, IEEE Trans. Circuits Syst.
Video Technol. 2 (1) (1992) 91–95.
[10] S.J. Reeves, S.L. Eddins, Comments on iterative procedures for
reduction of blocking effects in transform image coding, IEEE Trans.
Circuits Syst. Video Technol. 3 (1993) 439–440.
[11] Y. Yang, N.P. Galatsanos, A.K. Katsaggelos, Projection-based
spatially adaptive reconstruction of block-transform compressed
images, IEEE Trans. Image Processing 4 (7) (1995) 896–908.
[12] R.L. Lagendijk, J. Biemond, D.E. Boekee, Regularized iterative
restoration with ringing reduction, IEEE Trans. Acoust. Speech Signal
Processing 36 (12) (1988) 1874–1888.
[13] N. Jayant, J. Johnston, R. Safranek, Signal compression based on
models of human perception, Proc. IEEE 81 (1993) 1385–1422.
[14] T. Chen, H.R. Wu, B. Qiu, Adaptive postfiltering of transform
coefficients for the reduction of blocking artifacts, IEEE Trans.
Circuits Syst. Video Technol. 10 (2000) 617–623.
[15] K.N. Ngan, K.S. Leong, H. Singh, Adaptive cosine transform coding
of images in perceptual domain, IEEE Trans. Acoust. Speech Signal
Processing 37 (1989) 1743–1750.
[16] B. Chitprasert, K.R. Rao, Human visual weighted progressive image
transmission, IEEE Trans. Commun. 38 (1990) 1040–1044.
[17] Y. Kim, C.S. Paek, S.J. Ko, Frequency domain post-processing
technique based on POCS, Electronics Lett 39 (22) (2003) 1583–1584.
Xiangchao Gan received his MS in Electrical
and Electronic Engineering from Xi’an Jiao-
tong University, China, in 2001. He is
currently studying for his PhD degree at
City University of Hong Kong. His research
interests include image reconstruction, image
compression and multimedia communication.
Alan Wee-Chung Liew received his BE with
first class honors in Electrical and Electronic
Engineering from the University of Auckland,
New Zealand, in 1993 and PhD in Electronic
Engineering from the University of Tasmania,
Australia, in 1997. He is currently an Assistant
Professor in the Department of Computer
Science and Engineering, The Chinese Uni-
versity of Hong Kong. His current research
interests include computer vision, medical
imaging, pattern recognition and bioinfor-
matics. He has served as a technical reviewer for a number of international
conferences and journals in IEEE Transactions, IEE proceedings,
bioinformatics and computational biology. Dr Liew is a member of the
Institute of Electrical and Electronic Engineers (IEEE), and his biography is
listed in the 2005 Marquis Who’s Who in the World and Marquis Who’s
Who in Science and Engineering.
Hong Yan received a BE degree from Nanking
Institute of Posts and Telecommunications,
Nanking, China, in 1982, an MSE degree from
the University of Michigan in 1984, and a PhD
degree from Yale University in 1989, all in
electrical engineering. In 1982 and 1983 he
worked on signal detection and estimation as a
graduate student and research assistant at
Tsinghua University, Beijing, China. From
1986 to 1989 he was a research scientist at
General Network Corporation, New Haven,
CT, USA, where he worked on design and optimization of computer and
telecommunications networks. He joined the University of Sydney in 1989
and became Professor of Imaging Science in 1997. He is currently Professor
of Computer Engineering at City University of Hong Kong. His research
interests include image processing, pattern recognition and bioinformatics.
He is author or co-author of one book and over 200 refereed technical
papers in these areas. Professor Yan is a fellow of the International
Association for Pattern Recognition (IAPR), a fellow of the Institution of
Engineers, Australia (IEAust), a senior member of the Institute of Electrical
and Electronic Engineers (IEEE) and a member of the International Society
for Computational Biology (ISCB).