Salt-and-Pepper Noise Removal by a Spectral Conjugate Gradient Method
Wei Xue, Dandan Cui, and Jinhong Huang
Abstract—Denoising is an important problem in signal processing. This paper proposes an efficient two-phase method for salt-and-pepper noise removal. In the first phase, an adaptive median filter is used to detect the contaminated pixels. In the second phase, the candidate pixels are restored by minimizing a regularization functional. To this end, a spectral conjugate gradient method is considered, whose global convergence can be established under suitable conditions. Experimental results show that the proposed approach is efficient and practical.
I. INTRODUCTION
During acquisition or transmission, digital images are often corrupted by impulse noise [1], which is independent of image content. An important type of impulse noise is salt-and-pepper noise. When images are corrupted by this kind of noise, only part of the pixels are changed, and the noisy pixels sprinkle over the image like white and black dots. Recently, a two-phase method was proposed for removing salt-and-pepper impulse noise [2]. First, an adaptive median filter [3] is used to identify pixels which are likely to be contaminated. The noisy pixels are then restored by minimizing an edge-preserving functional in the second phase. This scheme restores noisy pixels one by one and has poor computational efficiency. Motivated by the goal of improving the two-phase method, an efficient spectral conjugate gradient method (SCG) is introduced in this paper.
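As a concrete illustration of the first phase, the adaptive median detection of [3] can be sketched in Python as follows. This is a simplified sketch: the function name, the window-size cap `w_max`, and the choice to flag pixels whose window never passes the level-A test are our own, not from the paper.

```python
import numpy as np

def adaptive_median_detect(y, w_max=7):
    """Flag likely salt-and-pepper pixels, in the spirit of the adaptive
    median filter of [3]: grow the window until its median is not an
    extreme value (level A), then test the pixel itself (level B)."""
    M, N = y.shape
    noise = np.zeros((M, N), dtype=bool)
    for i in range(M):
        for j in range(N):
            w = 1
            while True:
                win = y[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                         # level A passed
                    noise[i, j] = not (zmin < y[i, j] < zmax)  # level B test
                    break
                w += 1                                         # enlarge window
                if 2 * w + 1 > w_max:                          # capped: flag it
                    noise[i, j] = True
                    break
    return noise
```

On a smooth gradient image with one impulse, only the impulse is flagged; in the experiments of Section IV the detected set plays the role of $\mathcal{N}$.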
Let $I$ be the noisy image, let $\mathcal{A} = \{1, 2, \cdots, M\} \times \{1, 2, \cdots, N\}$ be the index set of $I$, and let $\mathcal{N} \subset \mathcal{A}$ denote the set of indices of the noise pixels detected in the first phase. The second phase restores the noise pixels by minimizing the following functional:

$$F_\alpha(u) = \sum_{(i,j)\in\mathcal{N}} |u_{i,j} - y_{i,j}| + \frac{\beta}{2} \sum_{(i,j)\in\mathcal{N}} \left( 2 S^0_{i,j} + S^1_{i,j} \right), \qquad (1)$$

where $\beta > 0$ is a parameter, $u = [u_{i,j}]_{(i,j)\in\mathcal{N}}$ is a column vector of length $c$ ordered lexicographically, $c$ is the number of elements of $\mathcal{N}$, and $y_{i,j}$ denotes the observed pixel value of the image at position $(i,j)$. Both

$$S^0_{i,j} = \sum_{(m,n)\in\mathcal{V}_{i,j}\setminus\mathcal{N}} \varphi_\alpha(u_{i,j} - y_{m,n}) \quad \text{and} \quad S^1_{i,j} = \sum_{(m,n)\in\mathcal{V}_{i,j}\cap\mathcal{N}} \varphi_\alpha(u_{i,j} - u_{m,n})$$

are regularization terms, where $\mathcal{V}_{i,j}$ denotes the set of the four closest neighbors of the pixel at position $(i,j) \in \mathcal{A}$. An explanation of the extra
Wei Xue, Dandan Cui and Jinhong Huang are with the School of Mathematics and Computer Sciences, GanNan Normal University, Ganzhou, Jiangxi Province, People's Republic of China (email: [email protected], [email protected], [email protected]).
This work was partly supported by the Postgraduate Student Innovation Foundation of Gannan Normal University (No. YCX10B006) and the National Natural Science Foundation of China (No. 11001060).
factor "2" in the second summation in (1) can be found in [4]. $\varphi_\alpha$ is an edge-preserving function with a regularization parameter $\alpha$. Examples of such $\varphi_\alpha$ are $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ and $\varphi_\alpha(u) = |u|^\alpha$; see [5]-[9].

The two-phase method can restore large patches of noisy
pixels because the pertinent prior information is contained in the regularization term. In addition, since the adaptive median filter detects almost all of the noisy pixels with very high accuracy, and the noisy pixel values are independent of the pixel values before corruption, the data-fitting term in the objective functional of edge-preserving regularization is no longer necessary. Furthermore, the objective functional should be simple enough to keep the computational complexity of the algorithm low. We can therefore drop the nonsmooth data-fitting term in the second phase, where only noisy pixels are restored in the minimization. Many optimization methods can then be extended to minimize the following smooth edge-preserving regularization functional [4], [10]:
$$F_\alpha(u) = \sum_{(i,j)\in\mathcal{N}} \left( 2 S^0_{i,j} + S^1_{i,j} \right). \qquad (2)$$
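For concreteness, (2) can be evaluated directly from its definition. The sketch below assumes $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ (one of the examples above) and clips 4-neighborhoods at the image border; the function names and argument layout are ours:

```python
import numpy as np

def phi(t, alpha=100.0):
    # edge-preserving potential phi_alpha(t) = sqrt(alpha + t^2)
    return np.sqrt(alpha + t ** 2)

def F(u, y, noise, alpha=100.0):
    """Evaluate the smooth functional (2): sum over noisy pixels of 2*S0 + S1.
    `u` holds trial values for the noisy pixels, `noise` is a boolean mask."""
    M, N = y.shape
    x = y.astype(float).copy()
    x[noise] = u                        # plug trial values into the image
    total = 0.0
    for i, j in zip(*np.nonzero(noise)):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            m, n = i + di, j + dj
            if 0 <= m < M and 0 <= n < N:
                if noise[m, n]:
                    total += phi(x[i, j] - x[m, n], alpha)        # S1 term
                else:
                    total += 2.0 * phi(x[i, j] - y[m, n], alpha)  # 2*S0 term
    return total
```

A vectorized version would be needed at 512x512 scale; this loop form mirrors the summation in (2) one term at a time.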
Now we are in a position to present some properties of the smooth edge-preserving regularization functional (2), which have been discussed in [10].
Proposition 1.1. If $\varphi_\alpha(u)$ is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex, or coercive, then the functional $F_\alpha(u)$ is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex, or coercive, respectively.
Proposition 1.2. If $\varphi_\alpha(u)$ is even, continuous, and strictly increasing with respect to $|u|$, then the global minimum of $F_\alpha(u)$ exists and any global minimizer $u^*$ lies in the dynamic range, i.e., $u^*_{i,j} \in [t_{\min}, t_{\max}]$ for all $(i,j) \in \mathcal{N}$.
The outline of the paper is as follows. We present the spectral conjugate gradient algorithm in the next section. In Section 3, the global convergence of the proposed algorithm is studied. Section 4 gives numerical results illustrating that our method is practical and promising. Finally, we conclude the paper.
II. ALGORITHM
Conjugate gradient methods are quite useful in large-scale unconstrained optimization. Consider a general unconstrained problem

$$\min \{ f(x) \mid x \in \mathbb{R}^n \}, \qquad (3)$$

where $f: \mathbb{R}^n \to \mathbb{R}$ is smooth and its gradient is available.
2012 IEEE International Conference on Information Science and Technology, Wuhan, Hubei, China; March 23-25, 2012
978-1-4577-0345-4/12/$26.00 © 2012 IEEE
To accelerate the conjugate gradient method, spectral conjugate gradient methods were proposed by Birgin and Martínez [11] and further studied by Andrei [12] and Yu et al. [13], [14]. In this paper we consider a spectral conjugate gradient method of the following form:
$$d_k = \begin{cases} -g_k, & k = 1, \\[4pt] -\dfrac{1}{\delta_k} g_k + \beta_k d_{k-1}, & k \geq 2, \end{cases} \qquad (4)$$

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (5)$$

with

$$\alpha_k = -\frac{g_k^T d_k}{\delta_k \|d_k\|^2_{\Theta_k}}, \qquad (6)$$

$$\delta_k = \frac{y_{k-1}^T s_{k-1}}{\|s_{k-1}\|^2}, \qquad (7)$$

$$\beta_k = \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T y^*_{k-1}}, \qquad (8)$$

$$y_{k-1} = g_k - g_{k-1}, \quad s_{k-1} = x_k - x_{k-1}, \qquad (9)$$

$$y^*_{k-1} = y_{k-1} + \Phi_{k-1}, \quad \Phi_{k-1} = \max(v_{k-1}, 0)\, s_{k-1}, \qquad (10)$$

$$v_{k-1} = \frac{a (g_k + g_{k-1})^T s_{k-1} - b (f_k - f_{k-1})}{\|s_{k-1}\|^2}, \qquad (11)$$
where $\alpha_k$ is a steplength, $\beta_k$ is a scalar, $a$ and $b$ are two positive parameters, $g_k$ denotes the gradient $\nabla f(x_k)$, and $f_k$ denotes $f(x_k)$. The form of $y^*_{k-1}$ appeared in [15] and possesses some interesting properties.
The SCG algorithm can be written as follows.

Algorithm 2.1 (SCG method)

Step 1. (Initialization) Choose an initial point $x_0 \in \mathbb{R}^n$, set $\alpha_0 = \sqrt{99}/16$ and $d_0 = -g_0$, and let $k := 1$. Compute $x_k = x_{k-1} + \alpha_{k-1} d_{k-1}$.

Step 2. (Termination test) If $\|g_k\| = 0$, stop.

Step 3. (Direction generation) Compute $\delta_k$ and generate $\beta_k$ by formulas (7), (8), (9), (10) and (11). Set $d_k = -\frac{1}{\delta_k} g_k + \beta_k d_{k-1}$.

Step 4. (Iteration) Compute $\alpha_k$ via (6) and set $x_{k+1} = x_k + \alpha_k d_k$. Then let $k := k+1$ and go to Step 2.

We choose $\Theta_k \equiv I$ (the identity matrix) in this algorithm.
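With $\Theta_k \equiv I$, Algorithm 2.1 translates almost line by line into code. Below is a hedged Python sketch (the function signature, tolerance, and iteration cap are our own choices; the paper's Matlab implementation may differ):

```python
import numpy as np

def scg(f, grad, x0, a=3.0, b=6.0, tol=1e-6, max_iter=1000):
    """Sketch of Algorithm 2.1 with Theta_k = I, using formulas (4)-(11)."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    d_prev = -g_prev
    alpha = np.sqrt(99.0) / 16.0              # Step 1: initial steplength
    x = x_prev + alpha * d_prev
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:          # Step 2: termination test
            break
        s = x - x_prev                        # s_{k-1}, (9)
        yk = g - g_prev                       # y_{k-1}, (9)
        ss = s @ s
        delta = (yk @ s) / ss                 # spectral parameter, (7)
        v = (a * (g + g_prev) @ s - b * (f(x) - f(x_prev))) / ss   # (11)
        y_star = yk + max(v, 0.0) * s         # modified difference, (10)
        beta = (g @ g) / (delta * (d_prev @ y_star))               # (8)
        d = -g / delta + beta * d_prev        # Step 3: direction, (4)
        alpha = -(g @ d) / (delta * (d @ d))  # Step 4: steplength, (6)
        x_prev, g_prev, d_prev = x, g, d
        x = x + alpha * d                     # (5)
    return x
```

For the denoising application, `f` and `grad` would be the functional (2) and its gradient over the detected noisy pixels; here a small strongly convex quadratic suffices to exercise the iteration.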
III. GLOBAL CONVERGENCE
In this section, we study the global convergence of Algorithm 2.1. We make the following assumptions.
Assumption 3.1. The level set $\Omega = \{x \mid f(x) \leq f(x_0)\}$ is contained in a bounded convex set $D$.
Assumption 3.2. The function $f$ in (3) is continuously differentiable on $D$ and there exists a constant $L > 0$ such that

$$\|g_{k+1} - g_k\| \leq L \|x_{k+1} - x_k\|$$

for any $x_k, x_{k+1} \in D$.

Assumption 3.3. The function $f$ in (3) is strongly convex, i.e., there exists a constant $\gamma > 0$ such that

$$\gamma \|x_{k+1} - x_k\|^2 \leq (g_{k+1} - g_k)^T (x_{k+1} - x_k)$$

for any $x_k, x_{k+1} \in D$.

Let $\{\Theta_k\}$ be a sequence of positive definite matrices.
Assume that there exist $m_{\min} > 0$ and $m_{\max} > 0$ such that for all $z \in \mathbb{R}^n$ there holds

$$m_{\min}\, z^T z \leq z^T \Theta_k z \leq m_{\max}\, z^T z. \qquad (12)$$

This condition is satisfied, for example, if $\Theta_k = \Theta$ and $\Theta$ is positive definite. Let the steplength $\alpha_k$ be computed by (6), where

$$\|d_k\|_{\Theta_k} := \sqrt{d_k^T \Theta_k d_k}, \qquad \delta_k > \frac{L}{m_{\min}}. \qquad (13)$$
Note that the specification of $\delta_k$ ensures $L/(\delta_k m_{\min}) < 1$.

Theorem 3.1: Suppose that $x_k$ is given by (4), (5) and (6). Then

$$g_{k+1}^T d_k = r_k\, g_k^T d_k \qquad (14)$$

holds for all $k$, where $r_k = 1 - \rho_k/\delta_k$ and

$$\rho_k = \begin{cases} 0, & \text{if } \alpha_k = 0, \\[6pt] \dfrac{(g_{k+1} - g_k)^T (x_{k+1} - x_k)}{\|x_{k+1} - x_k\|^2}, & \text{if } \alpha_k \neq 0. \end{cases} \qquad (15)$$
Proof: The case $\alpha_k = 0$ implies $r_k = 1$ and $g_{k+1} = g_k$, so (14) is valid. We now prove the case $\alpha_k \neq 0$. From (5) and (6), we have

$$\begin{aligned} g_{k+1}^T d_k &= g_k^T d_k + (g_{k+1} - g_k)^T d_k \\ &= g_k^T d_k + \alpha_k^{-1} (g_{k+1} - g_k)^T (x_{k+1} - x_k) \\ &= g_k^T d_k + \alpha_k^{-1} \rho_k \|x_{k+1} - x_k\|^2 \\ &= g_k^T d_k - \frac{g_k^T d_k}{\delta_k \|d_k\|^2}\, \rho_k \|d_k\|^2 \\ &= \left(1 - \frac{\rho_k}{\delta_k}\right) g_k^T d_k. \end{aligned} \qquad (16)$$

Corollary 3.1: If Assumption 3.2 is valid and $\alpha_k \neq 0$, then

$$r_k \geq 1 - \frac{L}{\delta_k m_{\min}} \qquad (17)$$

holds for all $k$; and

$$r_k \leq 1 - \frac{\gamma}{\delta_k} \qquad (18)$$

holds under Assumption 3.3. From (17) and (18), we easily know that

$$0 < r_k \leq 1 + \frac{L}{\delta_k m_{\min}}. \qquad (19)$$
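Identity (14) is easy to check numerically; below is a small sketch with $\Theta_k = I$ on a toy quadratic gradient $g(x) = Ax$ (the matrix, point, direction, and spectral parameter are arbitrary choices of ours):

```python
import numpy as np

# Check of identity (14): after one step (5) with steplength (6) and
# Theta_k = I, g_{k+1}^T d_k should equal r_k g_k^T d_k,
# where r_k = 1 - rho_k / delta_k and rho_k is given by (15).

A = np.array([[4.0, 1.0], [1.0, 3.0]])        # toy quadratic: gradient g(x) = A x
grad = lambda x: A @ x

x = np.array([1.0, -2.0])                     # current iterate x_k
d = np.array([0.5, 1.0])                      # any search direction d_k
delta = 2.0                                   # any positive spectral parameter
alpha = -(grad(x) @ d) / (delta * (d @ d))    # steplength (6) with Theta_k = I
x_new = x + alpha * d                         # update (5)

g, g_new = grad(x), grad(x_new)
s = x_new - x
rho = (g_new - g) @ s / (s @ s)               # (15), case alpha_k != 0
r = 1.0 - rho / delta

lhs, rhs = g_new @ d, r * (g @ d)             # the two sides of (14)
```

Both sides agree to machine precision, as the cancellation in the proof of (16) predicts.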
Theorem 3.2: Suppose that Assumption 3.2 is valid and $x_k$ is given by (4), (5) and (6). Then we have

$$\sum_{d_k \neq 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \qquad (20)$$
Proof: By the mean-value theorem we obtain

$$f(x_{k+1}) - f(x_k) = g^T (x_{k+1} - x_k), \qquad (21)$$

where $g = \nabla f(\bar{x})$ for some $\bar{x} \in [x_k, x_{k+1}]$. Assumption 3.2, the Cauchy-Schwarz inequality, (5) and (6) yield

$$\begin{aligned} g^T (x_{k+1} - x_k) &= g_k^T (x_{k+1} - x_k) + (g - g_k)^T (x_{k+1} - x_k) \\ &\leq g_k^T (x_{k+1} - x_k) + \|g - g_k\| \|x_{k+1} - x_k\| \\ &\leq \alpha_k g_k^T d_k + L \alpha_k^2 \|d_k\|^2 \\ &= \alpha_k g_k^T d_k \left(1 - \frac{L \|d_k\|^2}{\delta_k \|d_k\|^2_{\Theta_k}}\right) \\ &\leq \alpha_k g_k^T d_k \left(1 - \frac{L}{\delta_k m_{\min}}\right) \\ &= -\frac{(g_k^T d_k)^2}{\delta_k \|d_k\|^2_{\Theta_k}} \left(1 - \frac{L}{\delta_k m_{\min}}\right), \end{aligned} \qquad (22)$$

i.e.,

$$f(x_{k+1}) - f(x_k) \leq -\frac{(g_k^T d_k)^2}{\delta_k \|d_k\|^2_{\Theta_k}} \left(1 - \frac{L}{\delta_k m_{\min}}\right), \qquad (23)$$

which implies $f(x_{k+1}) \leq f(x_k)$. It follows from Assumption 3.1 that $\lim_{k \to \infty} f(x_k)$ exists. Then we obtain

$$\frac{(g_k^T d_k)^2}{\|d_k\|^2} \leq m_{\max} \frac{(g_k^T d_k)^2}{\|d_k\|^2_{\Theta_k}} \leq \frac{\delta_k m_{\max}}{1 - L/(\delta_k m_{\min})} \left[ f(x_k) - f(x_{k+1}) \right]. \qquad (24)$$
Hence inequality (20) is valid.

Theorem 3.3: Suppose that Assumptions 3.1-3.3 hold and $x_k$ is given by (4), (5) and (6). Then $\liminf_{k \to \infty} \|g_k\| \neq 0$ implies

$$\sum_{d_k \neq 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \qquad (25)$$
Proof: If $\liminf_{k \to \infty} \|g_k\| \neq 0$, there exists a constant $\epsilon > 0$ such that $\|g_k\| \geq \epsilon$ for all $k$. Let $t_k = |g_k^T d_k| / \|d_k\|$; then by Theorem 3.2 there holds $t_k \leq \epsilon/4$ for all large $k$. From Theorem 3.1 and (19) we have

$$|g_k^T d_{k-1}| = |r_{k-1}\, g_{k-1}^T d_{k-1}| \leq \left| \left(1 + \frac{L}{\delta_{k-1} m_{\min}}\right) g_{k-1}^T d_{k-1} \right| < 2 |g_{k-1}^T d_{k-1}|. \qquad (26)$$

Considering (4) we have

$$g_k = \delta_k \beta_k d_{k-1} - \delta_k d_k = \delta_k (\beta_k d_{k-1} - d_k). \qquad (27)$$

Multiplying both sides of (27) by $g_k$, we know that

$$\|g_k\|^2 = \delta_k (\beta_k g_k^T d_{k-1} - g_k^T d_k). \qquad (28)$$

Then (26) and (28) yield

$$\begin{aligned} \frac{\|g_k\|^2}{\|d_k\|} &= \frac{\delta_k (\beta_k g_k^T d_{k-1} - g_k^T d_k)}{\|d_k\|} \\ &\leq \frac{\delta_k \left[ 2 |\beta_k| |g_{k-1}^T d_{k-1}| + |g_k^T d_k| \right]}{\|d_k\|} \\ &= \delta_k \left[ \frac{|g_k^T d_k|}{\|d_k\|} + \frac{2 |g_{k-1}^T d_{k-1}|\, \|\beta_k d_{k-1}\|}{\|d_{k-1}\| \|d_k\|} \right] \\ &= \delta_k \left[ t_k + \frac{2 t_{k-1} \|d_k + g_k/\delta_k\|}{\|d_k\|} \right] \\ &\leq \delta_k \left[ t_k + 2 t_{k-1} \left(1 + \frac{1}{\delta_k} \frac{\|g_k\|^2}{\|g_k\| \|d_k\|}\right) \right] \\ &\leq \delta_k \left( t_k + 2 t_{k-1} + \frac{2 t_{k-1} \|g_k\|^2}{\epsilon\, \delta_k \|d_k\|} \right), \end{aligned} \qquad (29)$$

i.e.,

$$\frac{\|g_k\|^2}{\|d_k\|} \leq \delta_k \left( t_k + 2 t_{k-1} + 2 \left(\frac{\epsilon}{4}\right) \frac{\|g_k\|^2}{\epsilon\, \delta_k \|d_k\|} \right) \leq \delta_k t_k + 2 \delta_k t_{k-1} + \frac{\|g_k\|^2}{2 \|d_k\|}, \qquad (30)$$

i.e.,

$$\frac{\|g_k\|^2}{\|d_k\|} \leq 4 \delta_k (t_k + t_{k-1}). \qquad (31)$$

Then

$$\frac{\|g_k\|^4}{\|d_k\|^2} \leq 32 \delta_k^2 (t_k^2 + t_{k-1}^2) \qquad (32)$$

holds for all sufficiently large $k$. Now inequality (25) follows from (32).
Theorem 3.4: Suppose that Assumptions 3.1-3.3 hold and the sequence $\{x_k\}$ is generated by Algorithm 2.1. Then we have $\liminf_{k \to \infty} \|g_k\| = 0$.

Proof: Suppose, on the contrary, that $\liminf_{k \to \infty} \|g_k\| \neq 0$. Then there exists a constant $\epsilon > 0$ such that $\|g_k\| \geq \epsilon$ for all $k$, and by Theorem 3.3 we have

$$\sum_{d_k \neq 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \qquad (33)$$
We consider the following two cases.

Firstly, if $v_{k-1} < 0$, i.e., $y^*_{k-1} = y_{k-1}$, then by (4), (8), (18) and Theorem 3.1 we obtain

$$\begin{aligned} g_k^T d_k &= g_k^T \left( -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T y_{k-1}} d_{k-1} \right) \\ &= -\frac{1}{\delta_k} \|g_k\|^2 \left( 1 - \frac{r_{k-1}\, g_{k-1}^T d_{k-1}}{(r_{k-1} - 1)\, g_{k-1}^T d_{k-1}} \right) \\ &= -\frac{\|g_k\|^2}{\delta_k (1 - r_{k-1})}. \end{aligned} \qquad (34)$$
(4), (8), (34) and Theorem 3.1 yield

$$\begin{aligned} \|d_k\|^2 &= \left\| -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T y_{k-1}} d_{k-1} \right\|^2 \\ &= \frac{\|g_k\|^2}{\delta_k^2} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 (d_{k-1}^T y_{k-1})^2} - \frac{2 \|g_k\|^2}{\delta_k^2} \cdot \frac{g_k^T d_{k-1}}{d_{k-1}^T y_{k-1}} \\ &= \frac{\|g_k\|^2}{\delta_k^2} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 (d_{k-1}^T y_{k-1})^2} + \frac{2 r_{k-1}}{1 - r_{k-1}} \cdot \frac{\|g_k\|^2}{\delta_k^2} \\ &= \frac{\|d_{k-1}\|^2 \|g_k\|^4}{\delta_k^2 (r_{k-1} - 1)^2 (g_{k-1}^T d_{k-1})^2} + \frac{\|g_k\|^2}{\delta_k^2} \cdot \frac{1 + r_{k-1}}{1 - r_{k-1}} \\ &= \frac{\delta_{k-1}^2 (1 - r_{k-2})^2}{\delta_k^2 (1 - r_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4}\, \|g_k\|^4 + \frac{1}{\delta_k^2} \cdot \frac{1 + r_{k-1}}{1 - r_{k-1}}\, \|g_k\|^2. \end{aligned} \qquad (35)$$
The above relation can be rewritten as

$$\frac{\|d_k\|^2}{\|g_k\|^4} = \frac{\delta_{k-1}^2 (1 - r_{k-2})^2}{\delta_k^2 (1 - r_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\delta_k^2} \cdot \frac{1 + r_{k-1}}{1 - r_{k-1}} \cdot \frac{1}{\|g_k\|^2}. \qquad (36)$$
Hence we get

$$\begin{aligned} \frac{\|d_k\|^2}{\|g_k\|^4} &\leq \frac{\delta_{k-1}^2 (1 - r_{k-2})^2}{\delta_k^2 (1 - r_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\delta_k^2} \cdot \frac{1 + r_{k-1}}{1 - r_{k-1}} \cdot \frac{1}{\epsilon^2} \\ &\leq \frac{\delta_{k-2}^2 (1 - r_{k-3})^2}{\delta_{k-1}^2 (1 - r_{k-1})^2} \cdot \frac{\|d_{k-2}\|^2}{\|g_{k-2}\|^4} + \frac{1}{\delta_{k-1}^2} \left[ \frac{1 - r_{k-1}^2}{(1 - r_{k-1})^2} + \frac{1 - r_{k-2}^2}{(1 - r_{k-1})^2} \right] \frac{1}{\epsilon^2} \\ &\leq \cdots \leq \left( \frac{\delta_2}{\delta_k} \right)^2 \left( \frac{1 - r_1}{1 - r_{k-1}} \right)^2 \frac{\|d_2\|^2}{\|g_2\|^4} + \frac{1}{\delta_k^2} \left[ \frac{1 - r_{k-1}^2}{(1 - r_{k-1})^2} + \cdots + \frac{1 - r_2^2}{(1 - r_{k-1})^2} \right] \frac{1}{\epsilon^2}. \end{aligned} \qquad (37)$$
According to Corollary 3.1, we know that the quantities

$$\left( \frac{1 - r_1}{1 - r_{k-1}} \right)^2, \quad \frac{1 - r_{k-1}^2}{(1 - r_{k-1})^2}, \quad \cdots, \quad \frac{1 - r_2^2}{(1 - r_{k-1})^2} \qquad (38)$$
have a common upper bound. Denoting this common bound by $M$, and setting $c = M/\epsilon^2$ and $w = M \|d_2\|^2 / \|g_2\|^4$, from (37) we obtain

$$\frac{\|g_k\|^4}{\|d_k\|^2} \geq \frac{\delta_k^2}{(k-2) c + \delta_2^2 w} \geq \frac{\gamma^2}{(k-2) c + \delta_2^2 w}, \qquad (39)$$

since $\delta_k \geq \gamma$ under Assumption 3.3, which indicates

$$\sum_{d_k \neq 0} \frac{\|g_k\|^4}{\|d_k\|^2} = \infty. \qquad (40)$$

Obviously, this contradicts (33), so Theorem 3.4 holds in this case.
Secondly, if $v_{k-1} > 0$, i.e., $y^*_{k-1} = y_{k-1} + v_{k-1} s_{k-1}$, then by (4), (8), (18) and Theorem 3.1 (using $s_{k-1} = \alpha_{k-1} d_{k-1}$ and $\alpha_{k-1} = -g_{k-1}^T d_{k-1} / (\delta_{k-1} \|d_{k-1}\|^2_{\Theta_{k-1}})$) we get

$$\begin{aligned} g_k^T d_k &= g_k^T \left( -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k (d_{k-1}^T y_{k-1} + v_{k-1} \alpha_{k-1} \|d_{k-1}\|^2)} d_{k-1} \right) \\ &= -\frac{\|g_k\|^2}{\delta_k} \left( 1 - \frac{r_{k-1}\, g_{k-1}^T d_{k-1}}{(r_{k-1} - 1)\, g_{k-1}^T d_{k-1} + v_{k-1} \alpha_{k-1} \|d_{k-1}\|^2} \right) \\ &= -\frac{\|g_k\|^2}{\delta_k} \left[ 1 + \frac{\delta_{k-1} r_{k-1} \|d_{k-1}\|^2_{\Theta_{k-1}}}{(1 - r_{k-1}) \delta_{k-1} \|d_{k-1}\|^2_{\Theta_{k-1}} + v_{k-1} \|d_{k-1}\|^2} \right] \\ &\leq -\frac{1}{\delta_k} \|g_k\|^2. \end{aligned} \qquad (41)$$

Furthermore, we have

$$-g_k^T d_k \geq \frac{\|g_k\|^2}{\delta_k} \implies (g_k^T d_k)^2 \geq \frac{\|g_k\|^4}{\delta_k^2}. \qquad (42)$$
(4), (8) and Theorem 3.1 yield

$$\begin{aligned} \|d_k\|^2 &= \left\| -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T (y_{k-1} + v_{k-1} s_{k-1})} d_{k-1} \right\|^2 \\ &= \frac{\|g_k\|^2}{\delta_k^2} - \frac{2 \|g_k\|^2}{\delta_k^2} \cdot \frac{g_k^T d_{k-1}}{d_{k-1}^T (y_{k-1} + v_{k-1} s_{k-1})} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 (d_{k-1}^T y_{k-1} + v_{k-1} d_{k-1}^T s_{k-1})^2}. \end{aligned} \qquad (43)$$

The above relation can be rewritten as
$$\begin{aligned} \frac{\|d_k\|^2}{\|g_k\|^4} &= \frac{1}{\delta_k^2 \|g_k\|^2} - \frac{2 r_{k-1}\, g_{k-1}^T d_{k-1}}{(r_{k-1} - 1) \delta_k^2 \|g_k\|^2\, g_{k-1}^T d_{k-1} + \alpha_{k-1} v_{k-1} \delta_k^2 \|g_k\|^2 \|d_{k-1}\|^2} \\ &\quad + \frac{\|d_{k-1}\|^2}{\delta_k^2 \left[ (r_{k-1} - 1)\, g_{k-1}^T d_{k-1} + \alpha_{k-1} v_{k-1} \|d_{k-1}\|^2 \right]^2} \\ &\leq \frac{1}{\delta_k^2 \|g_k\|^2} + \frac{\|d_{k-1}\|^2}{\delta_k^2 \left[ (r_{k-1} - 1)\, g_{k-1}^T d_{k-1} + \alpha_{k-1} v_{k-1} \|d_{k-1}\|^2 \right]^2} \\ &\leq \frac{1}{\delta_k^2} \cdot \frac{\|d_{k-1}\|^2}{(1 - r_{k-1})^2 (g_{k-1}^T d_{k-1})^2} + \frac{1}{\delta_k^2} \cdot \frac{1}{\|g_k\|^2}. \end{aligned} \qquad (44)$$
From (42) and (44) we have

$$\begin{aligned} \frac{\|d_k\|^2}{\|g_k\|^4} &\leq \frac{\delta_{k-1}^2}{\delta_k^2} \cdot \frac{\|d_{k-1}\|^2}{(1 - r_{k-1})^2 \|g_{k-1}\|^4} + \frac{1}{\delta_k^2} \cdot \frac{1}{\|g_k\|^2} \\ &\leq \frac{\delta_{k-1}^2}{\delta_k^2} \cdot \frac{1}{(1 - r_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\delta_k^2} \cdot \frac{1}{\epsilon^2} \\ &\leq \frac{\delta_{k-2}^2}{\delta_k^2} \cdot \frac{1}{(1 - r_{k-1})^2 (1 - r_{k-2})^2} \cdot \frac{\|d_{k-2}\|^2}{\|g_{k-2}\|^4} + \frac{1}{\delta_k^2} \left[ 1 + \frac{1}{(1 - r_{k-1})^2} \right] \frac{1}{\epsilon^2} \\ &\leq \frac{\delta_{k-3}^2}{\delta_k^2} \cdot \frac{1}{(1 - r_{k-1})^2 (1 - r_{k-2})^2 (1 - r_{k-3})^2} \cdot \frac{\|d_{k-3}\|^2}{\|g_{k-3}\|^4} + \frac{1}{\delta_k^2} \left[ 1 + \frac{1}{(1 - r_{k-1})^2} + \frac{1}{(1 - r_{k-1})^2 (1 - r_{k-2})^2} \right] \frac{1}{\epsilon^2} \\ &\leq \cdots \leq \frac{\delta_2^2}{\delta_k^2} \cdot \frac{1}{\prod_{i=1}^{k-2} (1 - r_{k-i})^2} \cdot \frac{\|d_2\|^2}{\|g_2\|^4} + \frac{1}{\delta_k^2} \left[ 1 + \sum_{j=2}^{k-3} \frac{1}{\prod_{i=1}^{j} (1 - r_{k-i})^2} \right] \frac{1}{\epsilon^2}. \end{aligned} \qquad (45)$$
According to Corollary 3.1, we know that the quantities

$$\frac{1}{\prod_{i=1}^{k-2} (1 - r_{k-i})^2}, \quad \frac{1}{(1 - r_{k-1})^2}, \quad \frac{1}{(1 - r_{k-1})^2 (1 - r_{k-2})^2}, \quad \cdots, \quad \frac{1}{\prod_{i=1}^{k-3} (1 - r_{k-i})^2}$$

have a common upper bound. Denoting this common bound by $W$, from (45) we obtain
$$\frac{\|d_k\|^2}{\|g_k\|^4} \leq \frac{1}{\delta_k^2} \left\{ W \delta_2^2\, \frac{\|d_2\|^2}{\|g_2\|^4} + \left[ 1 + (k-4) W \right] \frac{1}{\epsilon^2} \right\}. \qquad (46)$$

We denote $W \delta_2^2\, \frac{\|d_2\|^2}{\|g_2\|^4} + [1 + (k-4) W] \frac{1}{\epsilon^2}$ by $\Omega(k)$ here. Then
the above relation can be rewritten as

$$\frac{\|d_k\|^2}{\|g_k\|^4} \leq \frac{1}{\delta_k^2}\, \Omega(k) \leq \frac{m_{\min}^2\, \Omega(k)}{L^2}, \qquad (47)$$

i.e.,

$$\frac{\|g_k\|^4}{\|d_k\|^2} \geq \frac{L^2}{m_{\min}^2\, \Omega(k)}. \qquad (48)$$

From (48) we have

$$\sum_{d_k \neq 0} \frac{\|g_k\|^4}{\|d_k\|^2} = \infty. \qquad (49)$$

Obviously, this contradicts (33). Thus Theorem 3.4 is valid. This completes the proof.
IV. NUMERICAL EXPERIMENTS
In this section, numerical experiments are presented to show the performance of Algorithm 2.1 for salt-and-pepper noise removal. The experiments were carried out in Matlab 7.0, and all the test images are 512x512 gray-level images.
According to the numerical comparison in [4], the Polak-Ribière conjugate gradient method (PRCG) is the most efficient method. Here, we test the SCG method and compare it with the PRCG method. The computational results are given in Table I. We report the number of iterations (NI), the CPU-time required for the whole denoising process, and the PSNR of the restored images.
It should be stressed that we are mainly concerned with the speed of solving the minimization of (2). We choose $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ with $\alpha = 100$ and set the parameters $a = 3$ and $b = 6$ in (11) during the whole tests.

To assess the restoration performance quantitatively, we use
the PSNR (peak signal-to-noise ratio; see details in [1]), defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN} \sum_{i,j} \left( u^r_{i,j} - u^*_{i,j} \right)^2}, \qquad (50)$$
where $u^r_{i,j}$ and $u^*_{i,j}$ denote the pixel values of the restored image and the original image, respectively. The stopping criteria of both methods are

$$\|g_k\| \leq 10^{-6} (1 + |f(u_k)|) \qquad (51)$$
Fig. 1. Number of iterations of PRCG/SCG methods for ten test images in Table I.
Fig. 2. CPU-time of PRCG/SCG methods for ten test images in Table I.
and

$$\frac{|f(u_k) - f(u_{k-1})|}{|f(u_k)|} \leq 10^{-6}. \qquad (52)$$
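The two stopping tests can be combined into a single predicate; a sketch (the function name and the joint and-combination are our reading of "the stopping criteria of both methods are (51) and (52)"):

```python
import numpy as np

def converged(g_k, f_k, f_prev, eps=1e-6):
    """Stopping tests (51) and (52); assumes f_k != 0 for the relative test."""
    grad_small = np.linalg.norm(g_k) <= eps * (1.0 + abs(f_k))   # (51)
    f_flat = abs(f_k - f_prev) / abs(f_k) <= eps                 # (52)
    return grad_small and f_flat
```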
Table I indicates that the proposed SCG method is about three to five times faster than the well-known PRCG method for all of the test images. We also find that the PSNR values attained by the SCG and PRCG methods are very similar. Figs. 1 and 2 show that the SCG method is less sensitive to different problems. Fig. 3 shows part of the original test images in this paper, the corresponding noisy images with noise level $r = 70\%$, and the restoration results by the PRCG method and the SCG method, respectively. These results show that the proposed spectral conjugate gradient method (SCG) is feasible and can restore corrupted images quite well in an efficient manner.
V. CONCLUSION
In this paper, we present a spectral conjugate gradient method (SCG) to minimize the smooth regularization functional for salt-and-pepper noise removal. Its global convergence can be established under suitable conditions. Numerical results illustrate that the SCG method can significantly reduce the CPU-time for image processing while obtaining the same restored image quality.
Our motivation comes from [14]-[16]. In this paper we simply let $\Theta_k \equiv I$ (the identity matrix), which already brings good results. One could try new forms of $\Theta_k$, updated by quasi-Newton methods such as the classical BFGS and DFP update formulas.
TABLE I
PERFORMANCE OF SALT-AND-PEPPER DENOISING VIA PRCG AND SCG

Test Image   Noise   |        PRCG              |         SCG
             Level   | PSNR (dB) CPU-time   NI  | PSNR (dB) CPU-time   NI
Boat          70%    |  27.938    46.3281  194  |  27.978    11.6533   45
Bridge        70%    |  24.998    60.672   255  |  24.853    14.461    55
Cameraman     70%    |  30.755    76.2344  228  |  30.799    12.012    48
Goldhill      70%    |  29.792    48.1094  144  |  29.857     9.5005   38
Head DTI      70%    |  32.892    47.8281  196  |  32.712    12.199    48
Lena          70%    |  31.167    42.2813  177  |  31.207    12.4021   49
Pepper        70%    |  30.3889   78.2188  234  |  30.417    12.729    50
Texture       70%    |  19.84    104.9214  306  |  19.852    19.0165   56
Brain MRI     70%    |  35.1812   41.9688  123  |  35.315     9.297    37
Brain MRI     90%    |  30.247    59.3125  257  |  30.222    12.6985   50
REFERENCES
[1] A. Bovik, Handbook of Image and Video Processing. Academic Press, New York, 2000.
[2] R.H. Chan, C.W. Ho and M. Nikolova, "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization," IEEE Trans. Image Process., vol. 14, pp. 1479-1485, 2005.
[3] H. Hwang and R.A. Haddad, "Adaptive median filters: New algorithms and results," IEEE Trans. Image Process., vol. 4, pp. 499-502, 1995.
[4] J.F. Cai, R.H. Chan and C. Di Fiore, "Minimization of a detail-preserving regularization functional for impulse noise removal," J. Math. Imaging Vision, vol. 27, pp. 79-91, 2007.
[5] T. Lukić, J. Lindblad and N. Sladoje, "Regularized image denoising based on spectral gradient optimization," Inverse Problems, vol. 27, 2011.
[6] M. Black and A. Rangarajan, "On the unification of line processes, outlier rejection, and robust statistics with applications to early vision," International Journal of Computer Vision, vol. 19, pp. 57-91, 1996.
[7] C. Bouman and K. Sauer, "On discontinuity-adaptive smoothness priors in computer vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 576-586, 1995.
[8] P. Charbonnier, L. Blanc-Féraud, G. Aubert and M. Barlaud, "Deterministic edge-preserving regularization in computed imaging," IEEE Transactions on Image Processing, vol. 6, pp. 298-311, 1997.
[9] P.J. Green, "Bayesian reconstructions from emission tomography data using a modified EM algorithm," IEEE Transactions on Medical Imaging, vol. MI-9, pp. 84-93, 1990.
Fig. 3. The first row shows the original images: Head DTI/Brain MRI/Lena. The second row shows the noisy images with noise level r = 70%: Head DTI/Brain MRI/Lena. The third row shows the restored images via PRCG (the PSNR values are 32.892 dB, 35.1812 dB and 31.167 dB, respectively). The fourth row shows the restored images via SCG (the PSNR values are 32.712 dB, 35.315 dB and 31.207 dB, respectively).
[10] J.F. Cai, R.H. Chan and B. Morini, "Minimization of an edge-preserving regularization function by conjugate gradient type methods," in Image Processing Based on Partial Differential Equations, Mathematics and Visualization, Springer, Berlin, Heidelberg, pp. 109-122, 2007.
[11] E.G. Birgin and J.M. Martínez, "A spectral conjugate gradient method for unconstrained optimization," Appl. Math. Optim., vol. 43, pp. 117-128, 2001.
[12] N. Andrei, "Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," Optim. Methods Softw., vol. 22, pp. 561-571, 2007.
[13] G.H. Yu, L.T. Guan and W.F. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optim. Methods Softw., vol. 23, pp. 275-293, 2008.
[14] G.H. Yu, J.H. Huang and Y. Zhou, "A descent spectral conjugate gradient method for impulse noise removal," Applied Mathematics Letters, vol. 23, pp. 555-560, 2010.
[15] G.Y. Li, C.M. Tang and Z.X. Wei, "New conjugate condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, pp. 523-529, 2007.
[16] J. Sun and J.P. Zhang, "Global convergence of conjugate gradient methods without line search," Annals of Operations Research, vol. 103, pp. 161-173, 2001.