Salt-and-Pepper Noise Removal by a Spectral Conjugate Gradient Method
Wei Xue, Dandan Cui, and Jinhong Huang
Abstract: Denoising is an important problem in signal processing. This paper proposes an efficient two-phase method for salt-and-pepper noise removal. In the first phase, an adaptive median filter is used to detect the contaminated pixels. In the second phase, the candidate pixels are restored by minimizing a regularization functional. To this end, a spectral conjugate gradient method is considered, whose global convergence can be established under suitable conditions. Experimental results show that the proposed approach is efficient and practical.
I. INTRODUCTION
During acquisition or transmission, digital images are often corrupted by impulse noise [1], which is independent of the image content. An important type of impulse noise is salt-and-pepper noise. When an image is corrupted by this kind of noise, only part of the pixels are changed, and the noisy pixels sprinkle over the image like white and black dots. Recently, a two-phase method was proposed for removing salt-and-pepper impulse noise [2]. First, an adaptive median filter [3] is used to identify the pixels that are likely to be contaminated. Then, in the second phase, the noisy pixels are restored by minimizing an edge-preserving functional. Since it restores the noisy pixels one by one, its computational efficiency is poor. Motivated by the wish to improve the two-phase method, an efficient spectral conjugate gradient method (SCG) is introduced in this paper.
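To make the two phases concrete, the following sketch (our own illustration, not the authors' code) corrupts an image with salt-and-pepper noise and flags candidate pixels. The simplified detector below only checks extreme-valued pixels against a fixed-window local median, whereas the paper uses the full adaptive median filter of [3] with a growing window:

```python
import numpy as np

def add_salt_pepper(img, r, rng):
    """Corrupt a fraction r of pixels with salt (255) or pepper (0) noise."""
    noisy = img.copy()
    mask = rng.random(img.shape) < r          # which pixels get corrupted
    salt = rng.random(img.shape) < 0.5        # salt or pepper, 50/50
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

def detect_noise_candidates(noisy):
    """Flag pixels at the extreme values 0 or 255 whose 3x3 median differs
    from them -- a simplified stand-in for the adaptive median filter."""
    padded = np.pad(noisy, 1, mode='edge')
    h, w = noisy.shape
    candidates = np.zeros(noisy.shape, dtype=bool)
    for i in range(h):
        for j in range(w):
            v = noisy[i, j]
            if v in (0, 255):
                window = padded[i:i + 3, j:j + 3]
                if np.median(window) != v:
                    candidates[i, j] = True
    return candidates

rng = np.random.default_rng(0)
clean = rng.integers(30, 220, size=(32, 32)).astype(np.uint8)  # no extremes
noisy = add_salt_pepper(clean, 0.3, rng)
mask = detect_noise_candidates(noisy)
truth = noisy != clean
print("detected:", mask.sum(), "actually corrupted:", truth.sum())
```

Because the clean test image here contains no pure black or white pixels, the detector has no false positives and misses only corrupted pixels whose whole neighborhood is saturated.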
Let I be the numerical image, 𝒜 = {1, 2, ..., M} × {1, 2, ..., N} be the index set of I, and 𝒩 ⊂ 𝒜 denote the set of indices of the noise pixels detected in the first phase. The second phase restores the noise pixels by minimizing the following functional:

F_α(u) = Σ_{(i,j)∈𝒩} |u_{i,j} − y_{i,j}| + (β/2) Σ_{(i,j)∈𝒩} (2·S⁰_{i,j} + S¹_{i,j}),   (1)

where β > 0 is a parameter, u = [u_{i,j}]_{(i,j)∈𝒩} is a column vector of length c ordered lexicographically, c is the number of elements of 𝒩, and y_{i,j} denotes the observed pixel value of the image at position (i,j). Both

S⁰_{i,j} = Σ_{(m,n)∈𝒱_{i,j}∖𝒩} φ_α(u_{i,j} − y_{m,n})   and   S¹_{i,j} = Σ_{(m,n)∈𝒱_{i,j}∩𝒩} φ_α(u_{i,j} − u_{m,n})

are regularization terms, where 𝒱_{i,j} denotes the set of the four closest neighbors of the pixel at position (i,j) ∈ 𝒜. An explanation of the extra
Wei Xue, Dandan Cui and Jinhong Huang are with the School of Mathematics and Computer Sciences, Gannan Normal University, Ganzhou, Jiangxi Province, People's Republic of China (email: [email protected], [email protected], [email protected]).
This work was partly supported by the Postgraduate Student Innovation Foundation of Gannan Normal University (No. YCX10B006) and the National Natural Science Foundation of China (No. 11001060).
factor "2" in the second summation in (1) can be found in [4]. Here φ_α is an edge-preserving potential function with a regularization parameter α. Examples of such φ_α are φ_α(u) = √(α + u²) and φ_α(u) = |u|^α; see [5]-[9].

The two-phase method can restore large patches of noisy pixels because the pertinent prior information is contained in the regularization term. In addition, since the adaptive median filter can detect almost all of the noisy pixels with very high accuracy, and the noisy pixel values are independent of the pixel values before corruption, the data-fitting term in the objective functional of edge-preserving regularization is no longer necessary. Furthermore, the objective functional should be simple enough to reduce the computational complexity of the algorithm. We can therefore drop the nonsmooth data-fitting term in the second phase, where only noisy pixels are restored in the minimization. Many optimization methods can then be extended to minimize the following smooth edge-preserving regularization functional [4], [10]:
F_α(u) = Σ_{(i,j)∈𝒩} (2·S⁰_{i,j} + S¹_{i,j}).   (2)
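As an illustration (our own sketch, not the authors' code), the smooth functional (2) with φ_α(u) = √(α + u²) and its gradient with respect to the noisy pixels can be evaluated as follows; `y` is the observed image and `mask` marks the detected set 𝒩:

```python
import numpy as np

ALPHA = 100.0
phi  = lambda t: np.sqrt(ALPHA + t * t)        # edge-preserving potential
dphi = lambda t: t / np.sqrt(ALPHA + t * t)    # its derivative

def functional_and_gradient(u_vec, y, mask):
    """Value and gradient of the smooth functional (2) with respect to the
    noise candidates only.  y: observed image, mask: detected set N,
    u_vec: current candidate pixel values in lexicographic order."""
    u = y.astype(float)            # copy; candidate entries get overwritten
    u[mask] = u_vec
    h, w = y.shape
    F = 0.0
    grad = np.zeros_like(u_vec, dtype=float)
    for n, (i, j) in enumerate(zip(*np.nonzero(mask))):
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            m, l = i + di, j + dj
            if not (0 <= m < h and 0 <= l < w):
                continue
            diff = u[i, j] - u[m, l]
            if mask[m, l]:
                # noisy neighbour: the pair appears once in S^1_{i,j} and
                # once in S^1_{m,l}, so each visit adds one phi term
                F += phi(diff)
                grad[n] += 2.0 * dphi(diff)
            else:
                # clean neighbour: term of S^0, carrying the extra factor 2
                F += 2.0 * phi(diff)
                grad[n] += 2.0 * dphi(diff)
    return F, grad
```

The doubled clean-neighbour terms mirror the factor "2" in (2): a noisy-noisy pair is counted from both of its endpoints, while a noisy-clean pair is counted only once.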
We are now in a position to present some properties of the smooth edge-preserving regularization functional (2), which have been discussed in [10].
Proposition 1.1. If φ_α(u) is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex or coercive, then the functional F_α(u) is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex or coercive, respectively.
Proposition 1.2. If φ_α(u) is even, continuous and strictly increasing with respect to |u|, then a global minimum of F_α(u) exists and any global minimizer u* lies in the dynamic range, i.e., u*_{i,j} ∈ [t_min, t_max] for all (i,j) ∈ 𝒩.
The outline of the paper is as follows. We present the spectral conjugate gradient algorithm in the next section. In Section 3, the global convergence of the proposed algorithm is studied. Section 4 presents numerical results illustrating that our method is practical and promising. Finally, we give a concluding section.
II. ALGORITHM
Conjugate gradient methods are quite useful in large-scale unconstrained optimization. Consider a general unconstrained problem

min { f(x) | x ∈ ℝⁿ },   (3)

where f: ℝⁿ → ℝ is smooth and its gradient is available.
2012 IEEE International Conference on Information Science and Technology, Wuhan, Hubei, China; March 23-25, 2012
In order to accelerate the conjugate gradient method, spectral conjugate gradient methods were introduced by Birgin and Martinez [11] and further studied by Andrei [12] and Yu et al. [13], [14]. In this paper we consider a spectral conjugate gradient method of the following form:
d_k = −g_k,                             if k = 1,
d_k = −(1/δ_k) g_k + β_k d_{k−1},       if k ≥ 2,   (4)

x_{k+1} = x_k + α_k d_k,   (5)

with

α_k = − g_kᵀ d_k / (δ_k ∥d_k∥²_{Θ_k}),   (6)

δ_k = y_{k−1}ᵀ s_{k−1} / ∥s_{k−1}∥²,   (7)

β_k = ∥g_k∥² / (δ_k d_{k−1}ᵀ y*_{k−1}),   (8)

y_{k−1} = g_k − g_{k−1},   s_{k−1} = x_k − x_{k−1},   (9)

y*_{k−1} = y_{k−1} + Φ_{k−1},   Φ_{k−1} = max(v_{k−1}, 0) s_{k−1},   (10)

v_{k−1} = [a (g_k + g_{k−1})ᵀ s_{k−1} − b (f_k − f_{k−1})] / ∥s_{k−1}∥²,   (11)

where α_k is a steplength, β_k is a scalar, a and b are two positive parameters, g_k denotes the gradient of f at x_k, and f_k denotes f(x_k). The form of y*_{k−1} has appeared in [15] and possesses some interesting properties.
The SCG algorithm can be written as follows.

Algorithm 2.1 (SCG method)
Step 1. (Initial step) Choose an initial point x₀ ∈ ℝⁿ, set α₀ = √99/16 and d₀ = −g₀, and let k := 1. Compute x_k = x_{k−1} + α_{k−1} d_{k−1}.
Step 2. (Termination test) If ∥g_k∥ = 0, stop.
Step 3. (Generating direction) Compute δ_k and generate β_k by formulas (7), (8), (9), (10) and (11). Set d_k = −(1/δ_k) g_k + β_k d_{k−1}.
Step 4. (Iteration) Compute α_k via (6) and set x_{k+1} = x_k + α_k d_k. Then let k := k + 1 and go to Step 2.

We choose Θ_k ≡ I (the identity matrix) in this algorithm.
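A compact sketch of Algorithm 2.1 follows (our own illustration, with Θ_k ≡ I as in the paper; the quadratic test problem, tolerance and iteration cap are our choices, not the authors'):

```python
import numpy as np

def scg(f, grad, x0, a=3.0, b=6.0, tol=1e-8, max_iter=2000):
    """Sketch of Algorithm 2.1 (SCG) with Theta_k = I; a and b are the
    positive parameters appearing in (11)."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    d_prev = -g_prev                      # d_0 = -g_0
    x = x_prev + (np.sqrt(99.0) / 16.0) * d_prev   # Step 1 with alpha_0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:      # Step 2: termination test
            break
        s = x - x_prev                    # s_{k-1}, from (9)
        y = g - g_prev                    # y_{k-1}, from (9)
        ss = s @ s
        if ss == 0.0:
            break
        delta = (y @ s) / ss              # spectral parameter, (7)
        v = (a * ((g + g_prev) @ s) - b * (f(x) - f(x_prev))) / ss  # (11)
        y_star = y + max(v, 0.0) * s      # modified secant vector, (10)
        beta = (g @ g) / (delta * (d_prev @ y_star))                # (8)
        d = -g / delta + beta * d_prev    # direction, (4)
        alpha = -(g @ d) / (delta * (d @ d))   # steplength, (6), Theta_k = I
        x_prev, g_prev, d_prev = x, g, d
        x = x + alpha * d                 # (5)
    return x

# strongly convex quadratic test problem (our own choice)
A = np.diag([1.0, 4.0])
f = lambda x: 0.5 * x @ (A @ x)
grad = lambda x: A @ x
sol = scg(f, grad, np.array([2.0, 1.0]))
print(np.linalg.norm(grad(sol)))
```

Note that for a quadratic f with b = 2a (as in the experiments, a = 3 and b = 6), the correction v_{k−1} in (11) vanishes identically, so y*_{k−1} = y_{k−1}.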
III. GLOBAL CONVERGENCE
In this section, we study the global convergence of Algorithm 2.1. We make the following assumptions.
Assumption 3.1. The level set Ω = {x | f(x) ≤ f(x₀)} is contained in a bounded convex set D.

Assumption 3.2. The function f in (3) is continuously differentiable on D and there exists a constant L > 0 such that

∥g_{k+1} − g_k∥ ≤ L ∥x_{k+1} − x_k∥

for any x_k, x_{k+1} ∈ D.

Assumption 3.3. The function f in (3) is strongly convex, i.e., there exists a constant γ > 0 such that

γ ∥x_{k+1} − x_k∥² ≤ (g_{k+1} − g_k)ᵀ (x_{k+1} − x_k)

for any x_k, x_{k+1} ∈ D.

Let {Θ_k} be a sequence of positive definite matrices. Assume that there exist λ_min > 0 and λ_max > 0 such that for all d ∈ ℝⁿ there holds

λ_min dᵀd ≤ dᵀ Θ_k d ≤ λ_max dᵀd.   (12)
This condition is satisfied, for example, if Θ_k = Θ and Θ is positive definite. Let the steplength α_k be computed by (6), where

∥d_k∥_{Θ_k} := √(d_kᵀ Θ_k d_k),   δ_k > L / λ_min.   (13)
Note that the specification of δ_k ensures L/(δ_k λ_min) < 1.

Theorem 3.1. Suppose that x_k is given by (4), (5) and (6). Then

g_{k+1}ᵀ d_k = μ_k g_kᵀ d_k   (14)

holds for all k, where μ_k = 1 − ν_k/δ_k and

ν_k = 0,                                                      if α_k = 0,
ν_k = (g_{k+1} − g_k)ᵀ (x_{k+1} − x_k) / ∥x_{k+1} − x_k∥²,    if α_k ≠ 0.   (15)
Proof: The case α_k = 0 implies μ_k = 1 and g_{k+1} = g_k, so (14) is valid. We now prove the case α_k ≠ 0. From (5) and (6), we have

g_{k+1}ᵀ d_k = g_kᵀ d_k + (g_{k+1} − g_k)ᵀ d_k
            = g_kᵀ d_k + α_k⁻¹ (g_{k+1} − g_k)ᵀ (x_{k+1} − x_k)
            = g_kᵀ d_k + α_k⁻¹ ν_k ∥x_{k+1} − x_k∥²
            = g_kᵀ d_k − [g_kᵀ d_k / (δ_k ∥d_k∥²)] ν_k ∥d_k∥²
            = (1 − ν_k/δ_k) g_kᵀ d_k.   (16)

Corollary 3.1. If Assumption 3.2 is valid and α_k ≠ 0, then

μ_k ≥ 1 − L / (δ_k λ_min)   (17)

holds for all k; and

μ_k ≤ 1 − γ / δ_k   (18)

holds under Assumption 3.3. From (17) and (18) we easily obtain

0 < μ_k ≤ 1 + L / (δ_k λ_min).   (19)
Theorem 3.2. Suppose that Assumption 3.2 is valid and x_k is given by (4), (5) and (6). Then we have

Σ_{g_k ≠ 0} (g_kᵀ d_k)² / ∥d_k∥² < ∞.   (20)
Proof: By the mean-value theorem we obtain

f(x_{k+1}) − f(x_k) = gᵀ (x_{k+1} − x_k),   (21)

where g = ∇f(x̄) for some x̄ ∈ [x_k, x_{k+1}]. Assumption 3.2, the Cauchy-Schwarz inequality, (5) and (6) yield

gᵀ (x_{k+1} − x_k) = g_kᵀ (x_{k+1} − x_k) + (g − g_k)ᵀ (x_{k+1} − x_k)
                  ≤ g_kᵀ (x_{k+1} − x_k) + ∥g − g_k∥ ∥x_{k+1} − x_k∥
                  ≤ α_k g_kᵀ d_k + L α_k² ∥d_k∥²
                  = α_k g_kᵀ d_k [1 − L ∥d_k∥² / (δ_k ∥d_k∥²_{Θ_k})]
                  ≤ α_k g_kᵀ d_k [1 − L / (δ_k λ_min)]
                  = − [(g_kᵀ d_k)² / (δ_k ∥d_k∥²_{Θ_k})] [1 − L / (δ_k λ_min)],   (22)

i.e.,

f(x_{k+1}) − f(x_k) ≤ − [(g_kᵀ d_k)² / (δ_k ∥d_k∥²_{Θ_k})] [1 − L / (δ_k λ_min)],   (23)

which implies f(x_{k+1}) − f(x_k) < 0. It follows from Assumption 3.1 that lim_{k→∞} f(x_k) exists. Then we obtain

(g_kᵀ d_k)² / ∥d_k∥² ≤ λ_max (g_kᵀ d_k)² / ∥d_k∥²_{Θ_k} ≤ [δ_k λ_max / (1 − L/(δ_k λ_min))] [f(x_k) − f(x_{k+1})].   (24)
Hence the inequality (20) is valid.

Theorem 3.3. Suppose that the Assumptions hold and x_k is given by (4), (5) and (6). Then lim inf_{k→∞} ∥g_k∥ ≠ 0 implies

Σ_{g_k ≠ 0} ∥g_k∥⁴ / ∥d_k∥² < ∞.   (25)
Proof: If lim inf_{k→∞} ∥g_k∥ ≠ 0, there exists a constant ε > 0 such that ∥g_k∥ ≥ ε for all k. Let t_k = |g_kᵀ d_k| / ∥d_k∥; then by Theorem 3.2 there holds t_k ≤ ε/4 for all large k. From Theorem 3.1 and (19) we have

|g_kᵀ d_{k−1}| = |μ_{k−1} g_{k−1}ᵀ d_{k−1}| ≤ |(1 + L/(δ_{k−1} λ_min)) g_{k−1}ᵀ d_{k−1}| < 2 |g_{k−1}ᵀ d_{k−1}|.   (26)

Considering (4) we have

g_k = δ_k β_k d_{k−1} − δ_k d_k = δ_k (β_k d_{k−1} − d_k).   (27)

Multiplying both sides of (27) by g_kᵀ we know that

∥g_k∥² = δ_k (β_k g_kᵀ d_{k−1} − g_kᵀ d_k).   (28)

(26) and (28) yield

∥g_k∥² / ∥d_k∥ = δ_k (β_k g_kᵀ d_{k−1} − g_kᵀ d_k) / ∥d_k∥
             ≤ δ_k [2 |β_k| |g_{k−1}ᵀ d_{k−1}| + |g_kᵀ d_k|] / ∥d_k∥
             = δ_k [|g_kᵀ d_k| / ∥d_k∥ + 2 |g_{k−1}ᵀ d_{k−1}| ∥β_k d_{k−1}∥ / (∥d_{k−1}∥ ∥d_k∥)]
             = δ_k [t_k + 2 t_{k−1} ∥d_k + g_k/δ_k∥ / ∥d_k∥]
             ≤ δ_k [t_k + 2 t_{k−1} (1 + (1/δ_k) ∥g_k∥² / (∥g_k∥ ∥d_k∥))]
             ≤ δ_k [t_k + 2 t_{k−1} + 2 t_{k−1} ∥g_k∥² / (ε δ_k ∥d_k∥)],   (29)

i.e.,

∥g_k∥² / ∥d_k∥ ≤ δ_k [t_k + 2 t_{k−1} + 2 (ε/4) ∥g_k∥² / (ε δ_k ∥d_k∥)]
             ≤ δ_k t_k + 2 δ_k t_{k−1} + ∥g_k∥² / (2 ∥d_k∥),   (30)

i.e.,

∥g_k∥² / ∥d_k∥ ≤ 4 δ_k (t_k + t_{k−1}).   (31)

Then

∥g_k∥⁴ / ∥d_k∥² ≤ 32 δ_k² (t_k² + t_{k−1}²)   (32)

holds for all sufficiently large k. Now inequality (25) follows from (32).
Theorem 3.4. Suppose that the Assumptions hold and the sequence {x_k} is generated by Algorithm 2.1. Then we have lim inf_{k→∞} ∥g_k∥ = 0.
Proof: If lim inf_{k→∞} ∥g_k∥ ≠ 0, there exists a constant ε > 0 such that ∥g_k∥ ≥ ε for all k, and by Theorem 3.3 we have

Σ_{g_k ≠ 0} ∥g_k∥⁴ / ∥d_k∥² < ∞.   (33)

We consider the following two cases.

First, if v_{k−1} ≤ 0, i.e., y*_{k−1} = y_{k−1}, then by (4), (8), (18) and Theorem 3.1 we obtain

g_kᵀ d_k = g_kᵀ [−(1/δ_k) g_k + (∥g_k∥² / (δ_k d_{k−1}ᵀ y_{k−1})) d_{k−1}]
        = −(1/δ_k) ∥g_k∥² [1 − μ_{k−1} g_{k−1}ᵀ d_{k−1} / ((μ_{k−1} − 1) g_{k−1}ᵀ d_{k−1})]
        = −[1 / (δ_k (1 − μ_{k−1}))] ∥g_k∥².   (34)

(4), (8), (34) and Theorem 3.1 yield

∥d_k∥² = ∥−(1/δ_k) g_k + (∥g_k∥² / (δ_k d_{k−1}ᵀ y_{k−1})) d_{k−1}∥²
       = ∥g_k∥²/δ_k² + ∥g_k∥⁴ ∥d_{k−1}∥² / (δ_k² (d_{k−1}ᵀ y_{k−1})²) − (2 ∥g_k∥²/δ_k²) g_kᵀ d_{k−1} / (d_{k−1}ᵀ y_{k−1})
       = ∥g_k∥²/δ_k² + ∥g_k∥⁴ ∥d_{k−1}∥² / (δ_k² (d_{k−1}ᵀ y_{k−1})²) + [2 μ_{k−1} / (1 − μ_{k−1})] ∥g_k∥²/δ_k²
       = ∥d_{k−1}∥² ∥g_k∥⁴ / (δ_k² (μ_{k−1} − 1)² (g_{k−1}ᵀ d_{k−1})²) + [(1 + μ_{k−1}) / (1 − μ_{k−1})] ∥g_k∥²/δ_k²
       = [δ_{k−1}² (1 − μ_{k−2})² / (δ_k² (1 − μ_{k−1})²)] (∥d_{k−1}∥² / ∥g_{k−1}∥⁴) ∥g_k∥⁴ + (1/δ_k²) [(1 + μ_{k−1}) / (1 − μ_{k−1})] ∥g_k∥².   (35)

The above relation can be rewritten as

∥d_k∥² / ∥g_k∥⁴ = [δ_{k−1}² (1 − μ_{k−2})² / (δ_k² (1 − μ_{k−1})²)] ∥d_{k−1}∥² / ∥g_{k−1}∥⁴ + (1/δ_k²) [(1 + μ_{k−1}) / (1 − μ_{k−1})] (1/∥g_k∥²).   (36)

Hence we get

∥d_k∥² / ∥g_k∥⁴ ≤ [δ_{k−1}² (1 − μ_{k−2})² / (δ_k² (1 − μ_{k−1})²)] ∥d_{k−1}∥² / ∥g_{k−1}∥⁴ + (1/δ_k²) [(1 − μ_{k−1}²) / (1 − μ_{k−1})²] (1/ε²)
             ≤ ··· ≤ (δ_2/δ_k)² [(1 − μ_1)/(1 − μ_{k−1})]² ∥d_2∥²/∥g_2∥⁴ + (1/δ_k²) {[(1 − μ_{k−1}²) + ··· + (1 − μ_2²)] / (1 − μ_{k−1})²} (1/ε²).   (37)

According to Corollary 3.1, the quantities

[(1 − μ_1)/(1 − μ_{k−1})]²,  (1 − μ_{k−1}²)/(1 − μ_{k−1})²,  ···,  (1 − μ_2²)/(1 − μ_{k−1})²   (38)

have a common upper bound. Denoting this common bound by M and setting c = M/ε² and b = M ∥d_2∥²/∥g_2∥⁴, from (37) we obtain

∥g_k∥⁴ / ∥d_k∥² ≥ δ_k² / [(k − 2) c + δ_2² b] ≥ L² / {[(k − 2) c + δ_2² b] λ_min²},   (39)

which indicates

Σ_{g_k ≠ 0} ∥g_k∥⁴ / ∥d_k∥² = ∞.   (40)

Obviously, this contradicts (33).
Second, if v_{k−1} > 0, i.e., y*_{k−1} = y_{k−1} + v_{k−1} s_{k−1}, then by (4), (8), (18) and Theorem 3.1 we get

g_kᵀ d_k = g_kᵀ [−(1/δ_k) g_k + (∥g_k∥² / (δ_k (d_{k−1}ᵀ y_{k−1} + v_{k−1} α_{k−1} ∥d_{k−1}∥²))) d_{k−1}]
        = −(∥g_k∥²/δ_k) [1 − g_kᵀ d_{k−1} / (d_{k−1}ᵀ y_{k−1} + v_{k−1} α_{k−1} ∥d_{k−1}∥²)]
        = −(∥g_k∥²/δ_k) [1 + δ_{k−1} μ_{k−1} ∥d_{k−1}∥²_{Θ_{k−1}} / ((1 − μ_{k−1}) δ_{k−1} ∥d_{k−1}∥²_{Θ_{k−1}} + v_{k−1} ∥d_{k−1}∥²)]
        ≤ −(1/δ_k) ∥g_k∥².   (41)

Furthermore, we have

−g_kᵀ d_k ≥ ∥g_k∥²/δ_k  ⟹  (g_kᵀ d_k)² ≥ ∥g_k∥⁴/δ_k².   (42)

(4), (8) and Theorem 3.1 yield

∥d_k∥² = ∥−(1/δ_k) g_k + (∥g_k∥² / (δ_k d_{k−1}ᵀ (y_{k−1} + v_{k−1} s_{k−1}))) d_{k−1}∥²
       = ∥g_k∥²/δ_k² − (2 ∥g_k∥²/δ_k²) g_kᵀ d_{k−1} / (d_{k−1}ᵀ (y_{k−1} + v_{k−1} s_{k−1}))
         + ∥g_k∥⁴ ∥d_{k−1}∥² / (δ_k² (d_{k−1}ᵀ y_{k−1} + v_{k−1} d_{k−1}ᵀ s_{k−1})²).   (43)

The above relation can be rewritten as

∥d_k∥² / ∥g_k∥⁴ = 1/(δ_k² ∥g_k∥²) − 2 μ_{k−1} g_{k−1}ᵀ d_{k−1} / {δ_k² ∥g_k∥² [(μ_{k−1} − 1) g_{k−1}ᵀ d_{k−1} + α_{k−1} v_{k−1} ∥d_{k−1}∥²]}
             + ∥d_{k−1}∥² / {δ_k² [(μ_{k−1} − 1) g_{k−1}ᵀ d_{k−1} + α_{k−1} v_{k−1} ∥d_{k−1}∥²]²}
             ≤ 1/(δ_k² ∥g_k∥²) + ∥d_{k−1}∥² / {δ_k² [(μ_{k−1} − 1) g_{k−1}ᵀ d_{k−1} + α_{k−1} v_{k−1} ∥d_{k−1}∥²]²}
             ≤ (1/δ_k²) ∥d_{k−1}∥² / [(1 − μ_{k−1})² (g_{k−1}ᵀ d_{k−1})²] + (1/δ_k²) (1/∥g_k∥²).   (44)

From (42) and (44) we have

∥d_k∥² / ∥g_k∥⁴ ≤ (δ_{k−1}²/δ_k²) [1/(1 − μ_{k−1})²] ∥d_{k−1}∥²/∥g_{k−1}∥⁴ + (1/δ_k²) (1/ε²)
             ≤ (δ_{k−2}²/δ_k²) [1/((1 − μ_{k−1})² (1 − μ_{k−2})²)] ∥d_{k−2}∥²/∥g_{k−2}∥⁴ + (1/δ_k²) [1 + 1/(1 − μ_{k−1})²] (1/ε²)
             ≤ ··· ≤ (δ_2²/δ_k²) [1/∏_{i=1}^{k−2} (1 − μ_{k−i})²] ∥d_2∥²/∥g_2∥⁴
               + (1/δ_k²) [1 + Σ_{j=1}^{k−3} 1/∏_{i=1}^{j} (1 − μ_{k−i})²] (1/ε²).   (45)

According to Corollary 3.1, the quantities

1/∏_{i=1}^{k−2} (1 − μ_{k−i})²,  1/(1 − μ_{k−1})²,  1/((1 − μ_{k−1})² (1 − μ_{k−2})²),  ···,  1/∏_{i=1}^{k−3} (1 − μ_{k−i})²

have a common upper bound. Denoting this common bound by M, from (45) we obtain

∥d_k∥² / ∥g_k∥⁴ ≤ (1/δ_k²) {M δ_2² ∥d_2∥²/∥g_2∥⁴ + [1 + (k − 3) M] (1/ε²)}.   (46)

Denote Ω(k) = M δ_2² ∥d_2∥²/∥g_2∥⁴ + [1 + (k − 3) M] (1/ε²). Then the above relation can be rewritten as

∥d_k∥² / ∥g_k∥⁴ ≤ Ω(k)/δ_k² ≤ λ_min² Ω(k)/L²,   (47)

i.e.,

∥g_k∥⁴ / ∥d_k∥² ≥ L² / (λ_min² Ω(k)).   (48)

From (48), since Ω(k) grows only linearly in k, we have

Σ_{g_k ≠ 0} ∥g_k∥⁴ / ∥d_k∥² = ∞.   (49)

Obviously, this contradicts (33). Thus Theorem 3.4 is valid. This completes the proof.
IV. NUMERICAL EXPERIMENTS
In this section, numerical experiments are presented to show the performance of Algorithm 2.1 for salt-and-pepper noise removal. The experiments were run in Matlab 7.0, and all test images are 512 × 512 gray-level images.
According to the numerical comparison in [4], the Polak-Ribiere conjugate gradient method (PRCG) is the most efficient method there. Here, we test the SCG method and compare it with the PRCG method. The computational results are given in Table I. We report the number of iterations (NI), the CPU-time required for the whole denoising process, and the PSNR of the restored images.
It should be stressed that we are mainly concerned with the speed of solving the minimization of (2). We choose φ_α(u) = √(α + u²) with α = 100 and set the parameters a = 3 and b = 6 in (11) throughout the tests.

To assess the restoration performance quantitatively, we use
the PSNR (peak signal-to-noise ratio; see details in [1]) defined as

PSNR = 10 log₁₀ { 255² / [(1/(MN)) Σ_{i,j} (u^r_{i,j} − u*_{i,j})²] },   (50)

where u^r_{i,j} and u*_{i,j} denote the pixel values of the restored image and the original image, respectively. The stopping criteria of both methods are

∥g_k∥ ≤ 10⁻⁶ (1 + |f(u_k)|)   (51)
Fig. 1. Number of iterations of PRCG/SCG methods for ten test images in Table I.
Fig. 2. CPU-time of PRCG/SCG methods for ten test images in Table I.
and

|f(u_k) − f(u_{k−1})| / |f(u_k)| ≤ 10⁻⁶.   (52)
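The quality measure (50) and the stopping tests (51)-(52) are straightforward to code; the sketch below (our own, assuming 8-bit images stored as numpy arrays) mirrors the formulas above:

```python
import numpy as np

def psnr(restored, original):
    """Peak signal-to-noise ratio, formula (50), for 8-bit images."""
    diff = restored.astype(float) - original.astype(float)
    mse = np.mean(diff ** 2)             # (1/MN) * sum of squared errors
    return 10.0 * np.log10(255.0 ** 2 / mse)

def converged(g_norm, f_curr, f_prev, tol=1e-6):
    """Stopping criteria (51) and (52) combined."""
    return (g_norm <= tol * (1.0 + abs(f_curr)) and
            abs(f_curr - f_prev) / abs(f_curr) <= tol)
```

For example, an all-ones restoration of an all-zeros image has mse = 1 and PSNR = 10 log₁₀(255²) ≈ 48.13 dB.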
Table I indicates that the proposed SCG method is roughly three to five times faster than the well-known PRCG method on all of the test images, while the PSNR values attained by the two methods are very close. Figs. 1 and 2 show that the SCG method is less sensitive to the different problems. Fig. 3 shows some of the original test images, the corresponding noisy images with noise level r = 70%, and the restoration results by the PRCG and SCG methods, respectively. These results show that the proposed spectral conjugate gradient method (SCG) is feasible and can restore corrupted images well in an efficient manner.
V. CONCLUSION
In this paper, we present a spectral conjugate gradient method (SCG) to minimize the smooth regularization functional for salt-and-pepper noise removal. Its global convergence can be established under suitable conditions. Numerical experiments illustrate that the SCG method can significantly reduce the CPU-time of image restoration while attaining essentially the same restored image quality.
Our motivation comes from [14]-[16]. In this paper we simply let Θ_k ≡ I (the identity matrix), which already brings good results. One could therefore try new forms of Θ_k updated by quasi-Newton methods, such as the classical BFGS and DFP update formulas.
TABLE I
PERFORMANCE OF SALT-AND-PEPPER DENOISING VIA PRCG AND SCG

PRCG:
Test Images   Noise Level   PSNR (dB)   CPU-time   NI
Boat          70%           27.938      46.3281    194
Bridge        70%           24.998      60.672     255
Cameraman     70%           30.755      76.2344    228
Goldhill      70%           29.792      48.1094    144
Head DTI      70%           32.892      47.8281    196
Lena          70%           31.167      42.2813    177
Pepper        70%           30.3889     78.2188    234
Texture       70%           19.84       104.9214   306
Brain MRI     70%           35.1812     41.9688    123
Brain MRI     90%           30.247      59.3125    257

SCG:
Test Images   Noise Level   PSNR (dB)   CPU-time   NI
Boat          70%           27.978      11.6533    45
Bridge        70%           24.853      14.461     55
Cameraman     70%           30.799      12.012     48
Goldhill      70%           29.857      9.5005     38
Head DTI      70%           32.712      12.199     48
Lena          70%           31.207      12.4021    49
Pepper        70%           30.417      12.729     50
Texture       70%           19.852      19.0165    56
Brain MRI     70%           35.315      9.297      37
Brain MRI     90%           30.222      12.6985    50
REFERENCES
[1] A. Bovik, Handbook of Image and Video Processing. Academic, New York, 2000.
[2] R.H. Chan, C.W. Ho and M. Nikolova, "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization," IEEE Trans. Image Process., vol. 14, pp. 1479-1485, 2005.
[3] H. Hwang and R.A. Haddad, "Adaptive median filters: New algorithms and results," IEEE Trans. Image Process., vol. 4, pp. 499-502, 1995.
[4] J.F. Cai, R.H. Chan and C.D. Fiore, "Minimization of a detail-preserving regularization functional for impulse noise removal," J. Math. Imaging Vision, vol. 27, pp. 79-91, 2007.
[5] T. Lukić, J. Lindblad and N. Sladoje, "Regularized image denoising based on spectral gradient optimization," Inverse Problems, vol. 27, 2011.
[6] M. Black and A. Rangarajan, "On the unification of line processes, outlier rejection, and robust statistics with applications to early vision," International Journal of Computer Vision, vol. 19, pp. 57-91, 1996.
[7] C. Bouman and K. Sauer, "On discontinuity-adaptive smoothness priors in computer vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 576-586, 1995.
[8] P. Charbonnier, L. Blanc-Feraud, G. Aubert and M. Barlaud, "Deterministic edge-preserving regularization in computed imaging," IEEE Transactions on Image Processing, vol. 6, pp. 298-311, 1997.
[9] P.J. Green, "Bayesian reconstructions from emission tomography data using a modified EM algorithm," IEEE Transactions on Medical Imaging, MI-9, pp. 84-93, 1990.
Fig. 3. The first line shows the original images: Head DTI/Brain MRI/Lena. The second line shows the noisy images with noise level r = 70%: Head DTI/Brain MRI/Lena. The third line shows the restored images via PRCG (the PSNR values are 32.892 dB, 35.1812 dB and 31.167 dB, respectively). The fourth line shows the restored images via SCG (the PSNR values are 32.712 dB, 35.315 dB and 31.207 dB, respectively).
[10] J.F. Cai, R.H. Chan and B. Morini, "Minimization of an edge-preserving regularization function by conjugate gradient type methods," in Image Processing Based on Partial Differential Equations, Mathematics and Visualization, Springer, Berlin, Heidelberg, pp. 109-122, 2007.
[11] E.G. Birgin and J.M. Martinez, "A spectral conjugate gradient method for unconstrained optimization," Appl. Math. Optim., vol. 43, pp. 117-128, 2001.
[12] N. Andrei, "Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," Optim. Methods Softw., vol. 22, pp. 561-571, 2007.
[13] G.H. Yu, L.T. Guan and W.F. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optim. Methods Softw., vol. 23, pp. 275-293, 2008.
[14] G.H. Yu, J.H. Huang and Y. Zhou, "A descent spectral conjugate gradient method for impulse noise removal," Applied Mathematics Letters, vol. 23, pp. 555-560, 2010.
[15] G.Y. Li, C.M. Tang and Z.X. Wei, "New conjugate condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, pp. 523-529, 2007.
[16] J. Sun and J.P. Zhang, "Global convergence of conjugate gradient methods without line search," Annals of Operations Research, vol. 103, pp. 161-173, 2001.