
Salt-and-pepper Noise Removal by a Spectral Conjugate Gradient Method

Wei Xue, Dandan Cui, and Jinhong Huang

Abstract—Denoising is an important problem in signal processing. This paper proposes an efficient two-phase method for salt-and-pepper noise removal. In the first phase, an adaptive median filter is used to detect the contaminated pixels. In the second phase, the candidate pixels are restored by minimizing a regularization functional. To this end, a spectral conjugate gradient method is considered, whose global convergence can be established under suitable conditions. Experimental results show that the proposed approach is efficient and practical.

I. INTRODUCTION

During acquisition or transmission, digital images are often corrupted by impulse noise [1], which is independent of the image content. An important type of impulse noise is salt-and-pepper noise. When an image is corrupted by this kind of noise, only a portion of its pixels is changed, and the noisy pixels appear as white and black dots sprinkled over the image. Recently, a two-phase method was proposed for removing salt-and-pepper impulse noise [2]. First, an adaptive median filter [3] is used to identify the pixels that are likely to be contaminated. In the second phase, the noisy pixels are restored by minimizing an edge-preserving functional. This scheme restores the noisy pixels one by one, which is computationally inefficient. To improve the two-phase method, an efficient spectral conjugate gradient method (SCG) is introduced in this paper.
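For concreteness, the following minimal sketch shows how salt-and-pepper corruption at noise level r is typically simulated. It is our illustration, not code from the paper (which detects the noisy pixels with the adaptive median filter of [3]); all names are our assumptions.

```python
import numpy as np

def add_salt_and_pepper(image, r, rng=None):
    """Corrupt a fraction r of the pixels with salt (255) or pepper (0) noise."""
    rng = np.random.default_rng(rng)
    noisy = image.copy()
    mask = rng.random(image.shape) < r    # pixels to corrupt
    salt = rng.random(image.shape) < 0.5  # half salt, half pepper
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy
```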

Let $I$ be the given image, let $\mathcal{A} = \{1, 2, \dots, M\} \times \{1, 2, \dots, N\}$ be the index set of $I$, and let $\mathcal{N} \subset \mathcal{A}$ denote the set of indices of the noise pixels detected in the first phase. The second phase restores the noise pixels by minimizing the following functional:

$$F_\alpha(u) = \sum_{(i,j)\in\mathcal{N}} |u_{i,j} - y_{i,j}| + \frac{\mu}{2} \sum_{(i,j)\in\mathcal{N}} \left( 2\, T^0_{i,j} + T^1_{i,j} \right), \qquad (1)$$

where πœ‡ > 0 is a parameter and 𝑒 = [𝑒𝑖,𝑗 ](𝑖,𝑗)βˆˆπ’©is a column vector of length 𝑙 ordered lexicographically,𝑙 is the number of elements of 𝒩 and 𝑦𝑖,𝑗 denotesthe observed pixel value of the image at position (𝑖, 𝑗).Both 𝑇 0

𝑖,𝑗 =βˆ‘

(π‘š,𝑛)βˆˆπ’±π‘–,π‘—βˆ–π’© πœ‘π›Ό(𝑒𝑖,𝑗 βˆ’ π‘¦π‘š,𝑛) and 𝑇 1𝑖,𝑗 =βˆ‘

(π‘š,𝑛)βˆˆπ’±π‘–,𝑗

βˆ©π’© πœ‘π›Ό(𝑒𝑖,𝑗 βˆ’ π‘’π‘š,𝑛) are regularization terms,where 𝒱𝑖,𝑗 denotes the set of the four closest neighbors ofthe pixel at position (𝑖, 𝑗) ∈ π’œ. An explanation of the extra

Wei Xue, Dandan Cui and Jinhong Huang are with the School of Mathematics and Computer Sciences, GanNan Normal University, Ganzhou, Jiangxi Province, People's Republic of China (email: [email protected], [email protected], [email protected]).

This work was partly supported by the Postgraduate Student Innovation Foundation of Gannan Normal University (No. YCX10B006) and the National Natural Science Foundation of China (No. 11001060).

An explanation of the extra factor "2" in the second summation in (1) is given in [4]. Here $\varphi_\alpha$ is an edge-preserving functional with a regularization parameter $\alpha$; typical examples are $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ and $\varphi_\alpha(u) = |u|^\alpha$. See [5]-[9].

The two-phase method can restore large patches of noisy pixels because the pertinent prior information is contained in the regularization term. In addition, since the adaptive median filter detects almost all of the noisy pixels with very high accuracy, and the noisy pixel values are independent of the pixel values before corruption, the data-fitting term in the objective functional of the edge-preserving regularization is no longer necessary. Furthermore, the objective functional should be kept simple in order to reduce the computational complexity of the algorithm. We can therefore drop the nonsmooth data-fitting term in the second phase, where only the noisy pixels are restored in the minimization. Many optimization methods can then be extended to minimize the following smooth edge-preserving regularization functional [4], [10]:

$$F_\alpha(u) = \sum_{(i,j)\in\mathcal{N}} \left( 2\, T^0_{i,j} + T^1_{i,j} \right). \qquad (2)$$
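To make the objective concrete, the following sketch (our illustration, not code from the paper) evaluates (2) and its gradient with respect to the noisy pixels for the potential $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ used in the experiments of Section 4; the array names and the boolean-mask representation of $\mathcal{N}$ (`noise_mask`) are our assumptions.

```python
import numpy as np

def F_and_grad(u, y, noise_mask, alpha=100.0):
    """Evaluate the smooth functional (2) and its gradient.

    u          : current image estimate (M x N), noisy pixels filled in
    y          : observed noisy image (M x N)
    noise_mask : boolean M x N array, True on the detected noise set N
    Returns (F, grad); grad is nonzero only on noisy pixels.
    """
    phi = lambda t: np.sqrt(alpha + t ** 2)        # edge-preserving potential
    dphi = lambda t: t / np.sqrt(alpha + t ** 2)   # its derivative
    F, grad = 0.0, np.zeros_like(u, dtype=float)
    M, N = u.shape
    for i, j in zip(*np.nonzero(noise_mask)):
        for m, n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if not (0 <= m < M and 0 <= n < N):
                continue                           # neighbor outside the image
            if noise_mask[m, n]:                   # T^1 term (noisy neighbor)
                d = u[i, j] - u[m, n]
                F += phi(d)                        # each pair is counted twice in (2)
                grad[i, j] += dphi(d)
                grad[m, n] -= dphi(d)              # symmetric contribution
            else:                                  # T^0 term, with the factor 2
                d = u[i, j] - y[m, n]
                F += 2.0 * phi(d)
                grad[i, j] += 2.0 * dphi(d)
    return F, grad
```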

Now we are in a position to present some properties of the smooth edge-preserving regularization functional (2), which have been discussed in [10].

Proposition 1.1. If $\varphi_\alpha(u)$ is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex, or coercive, then the functional $F_\alpha(u)$ is second-order Lipschitz continuous, continuously differentiable, convex, strictly convex, or coercive, respectively.

Proposition 1.2. If $\varphi_\alpha(u)$ is even, continuous, and strictly increasing with respect to $|u|$, then a global minimum of $F_\alpha(u)$ exists, and any global minimizer $u^*$ lies in the dynamic range, i.e., $u^*_{i,j} \in [d_{\min}, d_{\max}]$ for all $(i,j) \in \mathcal{N}$.
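As a check (our addition, not in the paper), the potential $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ with $\alpha > 0$ satisfies the hypotheses of both propositions: it is even, coercive, continuously differentiable, and strictly increasing in $|u|$, and it is strictly convex because

$$\varphi_\alpha''(u) = \frac{\alpha}{(\alpha + u^2)^{3/2}} > 0,$$

so by Propositions 1.1 and 1.2 the functional $F_\alpha$ in (2) is smooth and strictly convex and attains its minimum in the dynamic range.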

The outline of the paper is as follows. We present the spectral conjugate gradient algorithm in the next section. In Section 3, the global convergence of the proposed algorithm is studied. Section 4 reports numerical results illustrating that our method is practical and promising. Finally, Section 5 gives a brief conclusion.

II. ALGORITHM

Conjugate gradient methods are quite useful in large-scale unconstrained optimization. Consider a general unconstrained problem

$$\min \{ f(x) \mid x \in \Re^n \}, \qquad (3)$$

where $f : \Re^n \to \Re$ is smooth and its gradient is available.


In order to accelerate the conjugate gradient method, spectral conjugate gradient methods were proposed by Birgin and Martinez [11] and further studied by Andrei [12] and Yu et al. [13], [14]. In this paper we consider a spectral conjugate gradient method of the following form:

π‘‘π‘˜ =

{ βˆ’π‘”π‘˜ π‘“π‘œπ‘Ÿ k= 1,βˆ’ 1π›Ώπ‘˜π‘”π‘˜ + π›½π‘˜π‘‘π‘˜βˆ’1 π‘“π‘œπ‘Ÿ kβ‰₯ 2,

(4)

π‘₯π‘˜+1 = π‘₯π‘˜ + π›Όπ‘˜π‘‘π‘˜, (5)

with

π›Όπ‘˜ = βˆ’ π‘”π‘‡π‘˜ π‘‘π‘˜π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

, (6)

π›Ώπ‘˜ =π‘¦π‘‡π‘˜βˆ’1π‘ π‘˜βˆ’1

βˆ₯π‘ π‘˜βˆ’1βˆ₯2 , (7)

π›½π‘˜ =βˆ₯π‘”π‘˜βˆ₯2

π›Ώπ‘˜π‘‘π‘‡π‘˜βˆ’1π‘¦βˆ—π‘˜βˆ’1

, (8)

π‘¦π‘˜βˆ’1 = π‘”π‘˜ βˆ’ π‘”π‘˜βˆ’1, π‘ π‘˜βˆ’1 = π‘₯π‘˜ βˆ’ π‘₯π‘˜βˆ’1, (9)

π‘¦βˆ—π‘˜βˆ’1 = π‘¦π‘˜βˆ’1 +Ξ¦π‘˜βˆ’1, Ξ¦π‘˜βˆ’1 = π‘šπ‘Žπ‘₯(π‘£π‘˜βˆ’1, 0)π‘ π‘˜βˆ’1, (10)

π‘£π‘˜βˆ’1 =π‘Ž(π‘”π‘˜ + π‘”π‘˜βˆ’1)

𝑇 π‘ π‘˜βˆ’1 βˆ’ 𝑏(π‘“π‘˜ βˆ’ π‘“π‘˜βˆ’1)

βˆ₯π‘ π‘˜βˆ’1βˆ₯2 , (11)

where π›Όπ‘˜ is a steplength, π›½π‘˜ is a scalar, a and b are twopositive parameters, π‘”π‘˜ denotes the gradient of 𝑓(π‘₯π‘˜) and π‘“π‘˜denotes 𝑓(π‘₯π‘˜). The form of π‘¦βˆ—π‘˜βˆ’1 has appeared in [15] andit possesses some interesting properties.

The SCG algorithm can be written as follows.

Algorithm 2.1 (SCG method)

Step 1. (Initial step) Choose an initial point $x_0 \in R^n$, set $\alpha_0 = \sqrt{99}/16$ and $d_0 = -g_0$, and let $k := 1$. Compute $x_k = x_{k-1} + \alpha_{k-1} d_{k-1}$.

Step 2. (Termination test) If $\|g_k\| = 0$, stop.

Step 3. (Generating direction) Compute $\delta_k$ and generate $\beta_k$ by formulas (7), (8), (9), (10) and (11). Set $d_k = -g_k/\delta_k + \beta_k d_{k-1}$.

Step 4. (Iteration) Compute $\alpha_k$ via (6) and set $x_{k+1} = x_k + \alpha_k d_k$. Then let $k := k+1$ and go to Step 2.

We choose $\Gamma_k \equiv I$ (the identity matrix) in this algorithm.

III. GLOBAL CONVERGENCE

In this section, we study the global convergence of Algorithm 2.1. We make the following assumptions.

Assumption 3.1. The level set

$$\Omega = \{ x \mid f(x) \le f(x_0) \}$$

is contained in a bounded convex set $D$.

Assumption 3.2. The function $f$ in (3) is continuously differentiable on $D$, and there exists a constant $L > 0$ such that

$$\|g_{k+1} - g_k\| \le L \|x_{k+1} - x_k\|$$

for any $x_k, x_{k+1} \in D$.

Assumption 3.3. The function $f$ in (3) is strongly convex, i.e., there exists a constant $\gamma > 0$ such that

$$\gamma \|x_{k+1} - x_k\|^2 \le (g_{k+1} - g_k)^T (x_{k+1} - x_k)$$

for any $x_k, x_{k+1} \in D$.

Let $\{\Gamma_k\}$ be a sequence of positive definite matrices.

Assume that there exist $\rho_{\min} > 0$ and $\rho_{\max} > 0$ such that for all $p \in R^n$ there holds

$$\rho_{\min}\, p^T p \le p^T \Gamma_k p \le \rho_{\max}\, p^T p. \qquad (12)$$

This condition is satisfied, for example, if $\Gamma_k = \Gamma$ and $\Gamma$ is positive definite. Let the steplength $\alpha_k$ be computed by (6), where

βˆ₯π‘‘π‘˜βˆ₯Ξ“π‘˜:=

βˆšπ‘‘π‘‡π‘˜ Ξ“π‘˜π‘‘π‘˜, π›Ώπ‘˜ >

𝐿

πœŒπ‘šπ‘–π‘›. (13)

Note that the specification of $\delta_k$ ensures $L/(\delta_k \rho_{\min}) < 1$.

Theorem 3.1: Suppose that $x_k$ is given by (4), (5) and (6). Then

$$g_{k+1}^T d_k = \tau_k\, g_k^T d_k \qquad (14)$$

holds for all $k$, where $\tau_k = 1 - \psi_k/\delta_k$ and

$$\psi_k = \begin{cases} 0 & \text{if } \alpha_k = 0, \\ \dfrac{(g_{k+1} - g_k)^T (x_{k+1} - x_k)}{\|x_{k+1} - x_k\|^2} & \text{if } \alpha_k \ne 0. \end{cases} \qquad (15)$$

Proof: The case $\alpha_k = 0$ implies $\tau_k = 1$ and $g_{k+1} = g_k$, so (14) is valid. We now prove the case $\alpha_k \ne 0$. From (5) and (6), we have

π‘”π‘‡π‘˜+1π‘‘π‘˜ = π‘”π‘‡π‘˜ π‘‘π‘˜ + (π‘”π‘˜+1 βˆ’ π‘”π‘˜)𝑇 π‘‘π‘˜

= π‘”π‘‡π‘˜ π‘‘π‘˜ + π›Όβˆ’1π‘˜ (π‘”π‘˜+1 βˆ’ π‘”π‘˜)

𝑇 (π‘₯π‘˜+1 βˆ’ π‘₯π‘˜)= π‘”π‘‡π‘˜ π‘‘π‘˜ + π›Όβˆ’1

π‘˜ πœ“π‘˜βˆ₯π‘₯π‘˜+1 βˆ’ π‘₯π‘˜βˆ₯2= π‘”π‘‡π‘˜ π‘‘π‘˜ βˆ’ π‘”π‘‡π‘˜ π‘‘π‘˜

π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2πœ“π‘˜βˆ₯π‘‘π‘˜βˆ₯2= (1βˆ’ πœ“π‘˜

π›Ώπ‘˜)π‘”π‘‡π‘˜ π‘‘π‘˜.

(16)Corollary 3.1: If Assumption 3.2 is valid and π›Όπ‘˜ βˆ•= 0,

then there will hold

πœπ‘˜ β‰₯ 1βˆ’ 𝐿

π›Ώπ‘˜πœŒπ‘šπ‘–π‘›(17)

for all k; andπœπ‘˜ ≀ 1βˆ’ 𝛾

π›Ώπ‘˜(18)

will hold under Assumption 3.3. From (17) and (18), weeasily know

0 < πœπ‘˜ ≀ 1 +𝐿

π›Ώπ‘˜πœŒπ‘šπ‘–π‘›. (19)

Theorem 3.2: Suppose that Assumption 3.2 holds and $x_k$ is given by (4), (5) and (6). Then

$$\sum_{d_k \ne 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty. \qquad (20)$$

Proof: By the mean-value theorem we obtain

$$f(x_{k+1}) - f(x_k) = g^T (x_{k+1} - x_k), \qquad (21)$$


where 𝑔=βˆ‡π‘“(π‘₯) for some π‘₯ ∈ [π‘₯π‘˜, π‘₯π‘˜+1]. Assumption 3.1,Cauchy-Schwartz inequality, (5) and (6) yield

𝑔𝑇 (π‘₯π‘˜+1 βˆ’ π‘₯π‘˜) = π‘”π‘‡π‘˜ (π‘₯π‘˜+1 βˆ’ π‘₯π‘˜)+ (𝑔 βˆ’ π‘”π‘˜)

𝑇 (π‘₯π‘˜+1 βˆ’ π‘₯π‘˜)≀ π‘”π‘‡π‘˜ (π‘₯π‘˜+1 βˆ’ π‘₯π‘˜)+ βˆ₯𝑔 βˆ’ π‘”π‘˜βˆ₯βˆ₯π‘₯π‘˜+1 βˆ’ π‘₯π‘˜βˆ₯≀ π›Όπ‘˜π‘”

π‘‡π‘˜ π‘‘π‘˜ + 𝐿𝛼2

π‘˜βˆ₯π‘‘π‘˜βˆ₯2= π›Όπ‘˜π‘”

π‘‡π‘˜ π‘‘π‘˜(1βˆ’ 𝐿βˆ₯π‘‘π‘˜βˆ₯2

π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

)

≀ π›Όπ‘˜π‘”π‘‡π‘˜ π‘‘π‘˜(1βˆ’ 𝐿

π›Ώπ‘˜πœŒπ‘šπ‘–π‘›)

= βˆ’ (π‘”π‘‡π‘˜ π‘‘π‘˜)2

π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

(1βˆ’ πΏπ›Ώπ‘˜πœŒπ‘šπ‘–π‘›

),

(22)

i.e.,

$$f(x_{k+1}) - f(x_k) \le -\frac{(g_k^T d_k)^2}{\delta_k \|d_k\|^2_{\Gamma_k}} \left( 1 - \frac{L}{\delta_k \rho_{\min}} \right), \qquad (23)$$

which implies $f(x_{k+1}) - f(x_k) < 0$. It follows from Assumptions 3.1 and 3.2 that $\lim_{k\to\infty} f(x_k)$ exists. Then we obtain

(π‘”π‘‡π‘˜ π‘‘π‘˜)2

βˆ₯π‘‘π‘˜βˆ₯2 ≀ πœŒπ‘šπ‘Žπ‘₯(π‘”π‘‡π‘˜ π‘‘π‘˜)

2

βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

≀ π›Ώπ‘˜πœŒπ‘šπ‘Žπ‘₯

1βˆ’πΏ/π›Ώπ‘˜πœŒπ‘šπ‘–π‘›[𝑓(π‘₯π‘˜)βˆ’ 𝑓(π‘₯π‘˜+1)].

(24)

Hence the inequality (20) is valid.

Theorem 3.3: Suppose that Assumptions 3.1-3.3 hold and $x_k$ is given by (4), (5) and (6). Then $\liminf_{k\to\infty} \|g_k\| \ne 0$ implies

$$\sum_{d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \qquad (25)$$

Proof: If $\liminf_{k\to\infty} \|g_k\| \ne 0$, there exists a constant $\theta > 0$ such that $\|g_k\| \ge \theta$ for all $k$. Let $\lambda_k = |g_k^T d_k| / \|d_k\|$; then by Theorem 3.2, $\lambda_k \le \theta/4$ holds for all large $k$. From Theorem 3.1 and (19) we have

$$|g_k^T d_{k-1}| = |\tau_{k-1}|\, |g_{k-1}^T d_{k-1}| \le \left( 1 + \frac{L}{\delta_{k-1} \rho_{\min}} \right) |g_{k-1}^T d_{k-1}| < 2\, |g_{k-1}^T d_{k-1}|. \qquad (26)$$

Considering (4), we have

$$g_k = \delta_k \beta_k d_{k-1} - \delta_k d_k = \delta_k (\beta_k d_{k-1} - d_k). \qquad (27)$$

Multiplying both sides of (27) by $g_k$, we see that

$$\|g_k\|^2 = \delta_k (\beta_k g_k^T d_{k-1} - g_k^T d_k). \qquad (28)$$

(26) and (28) yield

$$\begin{aligned} \frac{\|g_k\|^2}{\|d_k\|} &= \frac{\delta_k (\beta_k g_k^T d_{k-1} - g_k^T d_k)}{\|d_k\|} \le \frac{\delta_k \left[ 2 |\beta_k| |g_{k-1}^T d_{k-1}| + |g_k^T d_k| \right]}{\|d_k\|} \\ &= \delta_k \left[ \frac{|g_k^T d_k|}{\|d_k\|} + \frac{2 |g_{k-1}^T d_{k-1}|}{\|d_{k-1}\|} \cdot \frac{\|\beta_k d_{k-1}\|}{\|d_k\|} \right] = \delta_k \left[ \lambda_k + \frac{2 \lambda_{k-1} \|d_k + g_k/\delta_k\|}{\|d_k\|} \right] \\ &\le \delta_k \left[ \lambda_k + 2 \lambda_{k-1} \left( 1 + \frac{1}{\delta_k} \cdot \frac{\|g_k\|^2}{\|d_k\| \|g_k\|} \right) \right] \le \delta_k \left( \lambda_k + 2 \lambda_{k-1} + \frac{2 \lambda_{k-1} \|g_k\|^2}{\theta \delta_k \|d_k\|} \right), \end{aligned} \qquad (29)$$

i.e.,βˆ₯π‘”π‘˜βˆ₯2

βˆ₯π‘‘π‘˜βˆ₯ ≀ π›Ώπ‘˜(πœ†π‘˜ + 2πœ†π‘˜βˆ’1 + 2( πœƒ4 )βˆ₯π‘”π‘˜βˆ₯2

πœƒπ›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯ )

≀ π›Ώπ‘˜πœ†π‘˜ + 2π›Ώπ‘˜πœ†π‘˜βˆ’1 +βˆ₯π‘”π‘˜βˆ₯2

2βˆ₯π‘‘π‘˜βˆ₯ ,(30)

i.e.,βˆ₯π‘”π‘˜βˆ₯2βˆ₯π‘‘π‘˜βˆ₯ ≀ 4π›Ώπ‘˜(πœ†π‘˜ + πœ†π‘˜βˆ’1), (31)

Thenβˆ₯π‘”π‘˜βˆ₯4βˆ₯π‘‘π‘˜βˆ₯2 ≀ 32𝛿2π‘˜(πœ†

2π‘˜ + πœ†2

π‘˜βˆ’1) (32)

holds or all sufficient large k. Now inequality (25) followsfrom (32).

Theorem 3.4: Suppose that Assumptions 3.1-3.3 hold and the sequence $\{x_k\}$ is generated by Algorithm 2.1. Then $\liminf_{k\to\infty} \|g_k\| = 0$.

Proof: If $\liminf_{k\to\infty} \|g_k\| \ne 0$, there exists a constant $\theta > 0$ such that $\|g_k\| \ge \theta$ for all $k$, and by Theorem 3.3 we have

$$\sum_{d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} < \infty. \qquad (33)$$

We consider the following two cases.

First, if $v_{k-1} < 0$, i.e., $y^*_{k-1} = y_{k-1}$, then by (4), (8), (18) and Theorem 3.1 we obtain

π‘”π‘‡π‘˜ π‘‘π‘˜ = π‘”π‘‡π‘˜ (βˆ’ 1π›Ώπ‘˜π‘”π‘˜ +

βˆ₯π‘”π‘˜βˆ₯2

π›Ώπ‘˜π‘‘π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1π‘‘π‘˜βˆ’1)

= βˆ’ 1π›Ώπ‘˜βˆ₯π‘”π‘˜βˆ₯2(1βˆ’ πœπ‘˜βˆ’1𝑔

π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1

(πœπ‘˜βˆ’1βˆ’1)π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1)

= βˆ’ 1π›Ώπ‘˜(1βˆ’πœπ‘˜βˆ’1)

βˆ₯π‘”π‘˜βˆ₯2.(34)

(4), (8), (34) and Theorem 3.1 yield

$$\begin{aligned} \|d_k\|^2 &= \left\| -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T y_{k-1}}\, d_{k-1} \right\|^2 \\ &= \frac{\|g_k\|^2}{\delta_k^2} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 (d_{k-1}^T y_{k-1})^2} - \frac{2 \|g_k\|^2}{\delta_k^2} \cdot \frac{g_k^T d_{k-1}}{d_{k-1}^T y_{k-1}} \\ &= \frac{\|g_k\|^2}{\delta_k^2} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 (d_{k-1}^T y_{k-1})^2} + \frac{2 \tau_{k-1}}{1 - \tau_{k-1}} \cdot \frac{\|g_k\|^2}{\delta_k^2} \\ &= \frac{\|d_{k-1}\|^2 \|g_k\|^4}{\delta_k^2 (\tau_{k-1} - 1)^2 (g_{k-1}^T d_{k-1})^2} + \frac{1 + \tau_{k-1}}{1 - \tau_{k-1}} \cdot \frac{\|g_k\|^2}{\delta_k^2} \\ &= \frac{\delta_{k-1}^2 (1 - \tau_{k-2})^2}{\delta_k^2 (1 - \tau_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4}\, \|g_k\|^4 + \frac{1}{\delta_k^2} \cdot \frac{1 + \tau_{k-1}}{1 - \tau_{k-1}}\, \|g_k\|^2. \end{aligned} \qquad (35)$$

The above relation can be rewritten as

$$\frac{\|d_k\|^2}{\|g_k\|^4} = \frac{\delta_{k-1}^2 (1 - \tau_{k-2})^2}{\delta_k^2 (1 - \tau_{k-1})^2} \cdot \frac{\|d_{k-1}\|^2}{\|g_{k-1}\|^4} + \frac{1}{\delta_k^2} \cdot \frac{1 + \tau_{k-1}}{1 - \tau_{k-1}} \cdot \frac{1}{\|g_k\|^2}. \qquad (36)$$

Hence we get

βˆ₯π‘‘π‘˜βˆ₯2

βˆ₯π‘”π‘˜βˆ₯4 ≀ 𝛿2π‘˜βˆ’1(1βˆ’πœπ‘˜βˆ’2)2

𝛿2π‘˜(1βˆ’πœπ‘˜βˆ’1)2βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

βˆ₯π‘”π‘˜βˆ’1βˆ₯4 + 1𝛿2π‘˜

1+πœπ‘˜βˆ’1

1βˆ’πœπ‘˜βˆ’1

1πœƒ2

≀ 𝛿2π‘˜βˆ’2(1βˆ’πœπ‘˜βˆ’3)2

𝛿2π‘˜βˆ’1(1βˆ’πœπ‘˜βˆ’1)2βˆ₯π‘‘π‘˜βˆ’2βˆ₯2

βˆ₯π‘”π‘˜βˆ’2βˆ₯4

+ 1𝛿2π‘˜βˆ’1

[1βˆ’πœ2

π‘˜βˆ’1

(1βˆ’πœπ‘˜βˆ’1)2+

1βˆ’πœ2π‘˜βˆ’2

(1βˆ’πœπ‘˜βˆ’1)2] 1πœƒ2

≀ β‹… β‹… ⋅≀ ( 𝛿2π›Ώπ‘˜ )

2( 1βˆ’πœ11βˆ’πœπ‘˜βˆ’1

)2 βˆ₯𝑑2βˆ₯2

βˆ₯𝑔2βˆ₯4

+ 1𝛿2π‘˜[

1βˆ’πœ2π‘˜βˆ’1

(1βˆ’πœπ‘˜βˆ’1)2+ β‹… β‹… β‹…+ 1βˆ’πœ2

2

(1βˆ’πœπ‘˜βˆ’1)2] 1πœƒ2 .

(37)

According to Corollary 3.1, the quantities

$$\left( \frac{1 - \tau_1}{1 - \tau_{k-1}} \right)^2, \quad \frac{1 - \tau_{k-1}^2}{(1 - \tau_{k-1})^2}, \quad \cdots, \quad \frac{1 - \tau_2^2}{(1 - \tau_{k-1})^2} \qquad (38)$$


have a common upper bound. Denote this common bound by $\varpi$, and set $c = \varpi/\theta^2$ and $d = \varpi \|d_2\|^2 / \|g_2\|^4$. From (37) we obtain

$$\frac{\|g_k\|^4}{\|d_k\|^2} \ge \frac{\delta_k^2}{(k-2) c + \delta_2^2 d} \ge \frac{L^2}{\left[ (k-2) c + L^2 d \right] \rho_{\min}^2}, \qquad (39)$$

which indicates

$$\sum_{d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} = \infty. \qquad (40)$$

This contradicts (33), so the theorem holds in this case.

Second, if $v_{k-1} > 0$, i.e., $y^*_{k-1} = y_{k-1} + v_{k-1} s_{k-1}$, then by (4), (8), (18) and Theorem 3.1 we get

π‘”π‘‡π‘˜ π‘‘π‘˜ = π‘”π‘‡π‘˜ (βˆ’ 1π›Ώπ‘˜π‘”π‘˜ +

βˆ₯π‘”π‘˜βˆ₯2π‘‘π‘˜βˆ’1

π›Ώπ‘˜π‘‘π‘‡π‘˜βˆ’1π‘¦π‘˜βˆ’1+π‘£π‘˜βˆ’1π›Όπ‘˜π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ’1βˆ₯2 )

= βˆ’βˆ₯π‘”π‘˜βˆ₯2

π›Ώπ‘˜(1βˆ’ π‘”π‘‡π‘˜ π‘‘π‘˜βˆ’1

π‘‘π‘‡π‘˜βˆ’1π‘¦π‘˜βˆ’1βˆ’ π‘£π‘˜βˆ’1π‘”π‘‡π‘˜

π‘‘π‘˜

π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

βˆ₯π‘‘π‘˜βˆ’1βˆ₯2)

= βˆ’βˆ₯π‘”π‘˜βˆ₯2

π›Ώπ‘˜(1βˆ’ π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2

Ξ“π‘˜π‘”π‘‡π‘˜ π‘‘π‘˜βˆ’1

π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜π‘‘π‘‡π‘˜βˆ’1π‘¦π‘˜βˆ’1βˆ’π‘£π‘˜βˆ’1βˆ₯π‘‘π‘˜βˆ’1βˆ₯2π‘”π‘‡π‘˜ π‘‘π‘˜

)

= βˆ’βˆ₯π‘”π‘˜βˆ₯2

π›Ώπ‘˜[1 +

π›Ώπ‘˜πœπ‘˜βˆ’1βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

(1βˆ’πœπ‘˜βˆ’1)π›Ώπ‘˜βˆ₯π‘‘π‘˜βˆ₯2Ξ“π‘˜

+π‘£π‘˜βˆ’1πœπ‘˜βˆ’1βˆ₯π‘‘π‘˜βˆ’1βˆ₯2 ]

≀ βˆ’ 1π›Ώπ‘˜βˆ₯π‘”π‘˜βˆ₯2.

(41)Furthermore, we have

βˆ’π‘”π‘‡π‘˜ π‘‘π‘˜ β‰₯ βˆ₯π‘”π‘˜βˆ₯2/π›Ώπ‘˜ =β‡’ (π‘”π‘‡π‘˜ π‘‘π‘˜)2 β‰₯ βˆ₯π‘”π‘˜βˆ₯4/𝛿2π‘˜. (42)

(4), (8) and Theorem 3.1 yield

$$\begin{aligned} \|d_k\|^2 &= \left\| -\frac{1}{\delta_k} g_k + \frac{\|g_k\|^2}{\delta_k\, d_{k-1}^T (y_{k-1} + v_{k-1} s_{k-1})}\, d_{k-1} \right\|^2 \\ &= \frac{\|g_k\|^2}{\delta_k^2} - \frac{2 \|g_k\|^2}{\delta_k^2} \cdot \frac{g_k^T d_{k-1}}{d_{k-1}^T (y_{k-1} + v_{k-1} s_{k-1})} + \frac{\|g_k\|^4 \|d_{k-1}\|^2}{\delta_k^2 \left( d_{k-1}^T y_{k-1} + v_{k-1}\, d_{k-1}^T s_{k-1} \right)^2}. \end{aligned} \qquad (43)$$

The above relation can be rewritten as

βˆ₯π‘‘π‘˜βˆ₯2

βˆ₯π‘”π‘˜βˆ₯4 = 1𝛿2π‘˜βˆ₯π‘”π‘˜βˆ₯2

βˆ’ 2πœπ‘˜βˆ’1π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1

(πœπ‘˜βˆ’1βˆ’1)𝛿2π‘˜βˆ₯π‘”π‘˜βˆ₯2π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1+π›Όπ‘˜βˆ’1π‘£π‘˜βˆ’1𝛿2π‘˜βˆ₯π‘”π‘˜βˆ₯2βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

+ βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

𝛿2π‘˜[(πœπ‘˜βˆ’1βˆ’1)π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1+π›Όπ‘˜βˆ’1π‘£π‘˜βˆ’1βˆ₯π‘‘π‘˜βˆ’1βˆ₯2]2

≀ 1𝛿2π‘˜βˆ₯π‘”π‘˜βˆ₯2 + βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

𝛿2π‘˜[(πœπ‘˜βˆ’1βˆ’1)π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1+π›Όπ‘˜βˆ’1π‘£π‘˜βˆ’1βˆ₯π‘‘π‘˜βˆ’1βˆ₯2]2

≀ 1𝛿2π‘˜

βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

(1βˆ’πœπ‘˜βˆ’1)2(π‘”π‘‡π‘˜βˆ’1π‘‘π‘˜βˆ’1)2+ 1

𝛿2π‘˜

1βˆ₯π‘”π‘˜βˆ₯2 .

(44)From (42) and (44) we have

βˆ₯π‘‘π‘˜βˆ₯2

βˆ₯π‘”π‘˜βˆ₯4 ≀ 𝛿2π‘˜βˆ’1

𝛿2π‘˜

βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

(1βˆ’πœπ‘˜βˆ’1)2βˆ₯π‘”π‘˜βˆ₯4 + 1𝛿2π‘˜

1βˆ₯π‘”π‘˜βˆ₯2

≀ 𝛿2π‘˜βˆ’1

𝛿2π‘˜

1(1βˆ’πœπ‘˜βˆ’1)2

βˆ₯π‘‘π‘˜βˆ’1βˆ₯2

βˆ₯π‘”π‘˜βˆ’1βˆ₯4 + 1𝛿2π‘˜

1πœƒ2

≀ 𝛿2π‘˜βˆ’2

𝛿2π‘˜

1(1βˆ’πœπ‘˜βˆ’1)2(1βˆ’πœπ‘˜βˆ’2)2

βˆ₯π‘‘π‘˜βˆ’2βˆ₯2

βˆ₯π‘”π‘˜βˆ’2βˆ₯4

+ 1𝛿2π‘˜[1 + 1

(1βˆ’πœπ‘˜βˆ’1)2] 1πœƒ2

≀ 𝛿2π‘˜βˆ’3

𝛿2π‘˜

1(1βˆ’πœπ‘˜βˆ’1)2(1βˆ’πœπ‘˜βˆ’2)2(1βˆ’πœπ‘˜βˆ’3)2

βˆ₯π‘‘π‘˜βˆ’3βˆ₯2

βˆ₯π‘”π‘˜βˆ’3βˆ₯4

+ 1𝛿2π‘˜[1 + 1

(1βˆ’πœπ‘˜βˆ’1)2+ 1

(1βˆ’πœπ‘˜βˆ’1)2(1βˆ’πœπ‘˜βˆ’2)2] 1πœƒ2

≀ β‹… β‹… ⋅≀ 𝛿22

𝛿2π‘˜

1βˆπ‘˜βˆ’2

𝑖=1 (1βˆ’πœπ‘˜βˆ’π‘–)2βˆ₯𝑑2βˆ₯2

βˆ₯𝑔2βˆ₯4

+ 1𝛿2π‘˜[1 +

βˆ‘π‘˜βˆ’3𝑗=2

1βˆπ‘—

𝑖=1(1βˆ’πœπ‘˜βˆ’π‘–)2] 1πœƒ2 .

(45)

According to Corollary 3.1, the quantities

$$\frac{1}{\prod_{i=1}^{k-2} (1 - \tau_{k-i})^2}, \quad \frac{1}{(1 - \tau_{k-1})^2}, \quad \frac{1}{(1 - \tau_{k-1})^2 (1 - \tau_{k-2})^2}, \quad \cdots, \quad \frac{1}{\prod_{i=1}^{k-3} (1 - \tau_{k-i})^2}$$

have a common upper bound. Denoting this common bound by $\omega$, from (45) we obtain

$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{1}{\delta_k^2} \left\{ \omega \delta_2^2 \frac{\|d_2\|^2}{\|g_2\|^4} + \left[ 1 + (k - 4) \omega \right] \frac{1}{\theta^2} \right\}. \qquad (46)$$

We denote $\omega \delta_2^2 \|d_2\|^2 / \|g_2\|^4 + [1 + (k-4)\omega] / \theta^2$ by $\Omega(k)$. Then the above relation can be rewritten as

$$\frac{\|d_k\|^2}{\|g_k\|^4} \le \frac{\Omega(k)}{\delta_k^2} \le \frac{\rho_{\min}^2\, \Omega(k)}{L^2}, \qquad (47)$$

i.e.,

$$\frac{\|g_k\|^4}{\|d_k\|^2} \ge \frac{L^2}{\rho_{\min}^2\, \Omega(k)}. \qquad (48)$$

From (48) we have

$$\sum_{d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} = \infty. \qquad (49)$$

Again, this contradicts (33). Thus Theorem 3.4 is valid, which completes the proof.

IV. NUMERICAL EXPERIMENTS

In this section, numerical experiments are presented to show the performance of Algorithm 2.1 for salt-and-pepper noise removal. The experiments were carried out in Matlab 7.0, and all test images are 512Γ—512 gray-level images.

According to the numerical comparison in [4], the Polak-Ribiere conjugate gradient method (PRCG) is the most efficient method there. Here, we test the SCG method and compare it with the PRCG method. The computational results are given in Table I. We report the number of iterations (NI), the CPU time required for the whole denoising process, and the PSNR of the restored images.

It should be stressed that we are mainly concerned with the speed of solving the minimization of (2). We choose $\varphi_\alpha(u) = \sqrt{\alpha + u^2}$ with $\alpha = 100$, and we set the parameters $a = 3$ and $b = 6$ in (11) throughout the tests.

To assess the restoration performance quantitatively, we use the PSNR (peak signal-to-noise ratio; see details in [1]) defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN} \sum_{i,j} \left( u^r_{i,j} - u^*_{i,j} \right)^2}, \qquad (50)$$

where π‘’π‘Ÿπ‘–,𝑗 and π‘’βˆ—π‘–,𝑗 denote the pixel values of the restored

image and the original image respectively.The stopping cri-terion of both methods are

βˆ₯π‘”π‘˜βˆ₯ ≀ 10βˆ’6(1 + βˆ£π‘“(π‘’π‘˜)∣) (51)


Fig. 1. Number of iterations of the PRCG and SCG methods for the ten test images in Table I.

Fig. 2. CPU time of the PRCG and SCG methods for the ten test images in Table I.

andβˆ£π‘“(π‘’π‘˜)βˆ’ 𝑓(π‘’π‘˜βˆ’1)∣

βˆ£π‘“(π‘’π‘˜)∣ ≀ 10βˆ’6. (52)

Table I indicates that the proposed SCG method is roughly three to five times faster than the well-known PRCG method on all of the test images. We also find that the PSNR values attained by the SCG and PRCG methods are very similar. Figs. 1 and 2 show that the SCG method is less sensitive to the particular test problem. Fig. 3 shows some of the original test images, the corresponding noisy images with noise level r = 70%, and the restoration results obtained by the PRCG and SCG methods, respectively. These results show that the proposed spectral conjugate gradient method (SCG) is feasible and can restore corrupted images well in an efficient manner.

Fig. 3. The first row shows the original images: Head DTI/Brain MRI/Lena. The second row shows the noisy images with noise level r = 70%: Head DTI/Brain MRI/Lena. The third row shows the images restored via PRCG (the PSNR values are 32.892 dB, 35.1812 dB and 31.167 dB, respectively). The fourth row shows the images restored via SCG (the PSNR values are 32.712 dB, 35.315 dB and 31.207 dB, respectively).

V. CONCLUSION

In this paper, we present a spectral conjugate gradient method (SCG) to minimize the smooth regularization functional for salt-and-pepper noise removal. Its global convergence can be established under suitable conditions. Numerical results illustrate that the SCG method can significantly reduce the CPU time of the denoising process while attaining the same restored image quality.

Our motivation comes from [14]-[16]. In this paper we simply let $\Gamma_k \equiv I$ (the identity matrix), which already yields good results. In future work, we may try other forms of $\Gamma_k$, for example matrices updated by quasi-Newton methods such as the classical BFGS and DFP update formulas.

TABLE I
PERFORMANCE OF SALT-AND-PEPPER DENOISING VIA PRCG AND SCG

Test Images | Noise Level | PRCG PSNR (dB) | PRCG CPU-time | PRCG NI | SCG PSNR (dB) | SCG CPU-time | SCG NI
Boat        | 70%         | 27.938         | 46.3281       | 194     | 27.978        | 11.6533      | 45
Bridge      | 70%         | 24.998         | 60.672        | 255     | 24.853        | 14.461       | 55
Cameraman   | 70%         | 30.755         | 76.2344       | 228     | 30.799        | 12.012       | 48
Goldhill    | 70%         | 29.792         | 48.1094       | 144     | 29.857        | 9.5005       | 38
Head DTI    | 70%         | 32.892         | 47.8281       | 196     | 32.712        | 12.199       | 48
Lena        | 70%         | 31.167         | 42.2813       | 177     | 31.207        | 12.4021      | 49
Pepper      | 70%         | 30.3889        | 78.2188       | 234     | 30.417        | 12.729       | 50
Texture     | 70%         | 19.84          | 104.9214      | 306     | 19.852        | 19.0165      | 56
Brain MRI   | 70%         | 35.1812        | 41.9688       | 123     | 35.315        | 9.297        | 37
Brain MRI   | 90%         | 30.247         | 59.3125       | 257     | 30.222        | 12.6985      | 50

REFERENCES

[1] A. Bovik, Handbook of Image and Video Processing. New York: Academic, 2000.

[2] R.H. Chan, C.W. Ho and M. Nikolova, "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization," IEEE Trans. Image Process., vol. 14, pp. 1479-1485, 2005.

[3] H. Hwang and R.A. Haddad, "Adaptive median filters: New algorithms and results," IEEE Trans. Image Process., vol. 4, pp. 499-502, 1995.

[4] J.F. Cai, R.H. Chan and C.D. Fiore, "Minimization of a detail-preserving regularization function for impulse noise removal," J. Math. Imaging Vision, vol. 27, pp. 79-91, 2007.

[5] T. Lukić, J. Lindblad and N. Sladoje, "Regularized image denoising based on spectral gradient optimization," Inverse Problems, vol. 27, 2011.

[6] M. Black and A. Rangarajan, "On the unification of line processes, outlier rejection, and robust statistics with applications to early vision," International Journal of Computer Vision, vol. 19, pp. 57-91, 1996.

[7] C. Bouman and K. Sauer, "On discontinuity-adaptive smoothness priors in computer vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, pp. 576-586, 1995.

[8] P. Charbonnier, L. Blanc-Feraud, G. Aubert and M. Barlaud, "Deterministic edge-preserving regularization in computed imaging," IEEE Transactions on Image Processing, vol. 6, pp. 298-311, 1997.

[9] P.J. Green, "Bayesian reconstructions from emission tomography data using a modified EM algorithm," IEEE Transactions on Medical Imaging, vol. MI-9, pp. 84-93, 1990.


[10] J.F. Cai, R.H. Chan and B. Morini, "Minimization of an edge-preserving regularization function by conjugate gradient type methods," in: Image Processing Based on Partial Differential Equations, Mathematics and Visualization, Springer, Berlin, Heidelberg, pp. 109-122, 2007.

[11] E.G. Birgin and J.M. Martinez, "A spectral conjugate gradient method for unconstrained optimization," Appl. Math. Optim., vol. 43, pp. 117-128, 2001.

[12] N. Andrei, "Scaled memoryless BFGS preconditioned conjugate gradient algorithm for unconstrained optimization," Optim. Methods Softw., vol. 22, pp. 561-571, 2007.

[13] G.H. Yu, L.T. Guan and W.F. Chen, "Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization," Optim. Methods Softw., vol. 23, pp. 275-293, 2008.

[14] G.H. Yu, J.H. Huang and Y. Zhou, "A descent spectral conjugate gradient method for impulse noise removal," Applied Mathematics Letters, vol. 23, pp. 555-560, 2010.

[15] G.Y. Li, C.M. Tang and Z.X. Wei, "New conjugate condition and related new conjugate gradient methods for unconstrained optimization," Journal of Computational and Applied Mathematics, vol. 202, pp. 523-529, 2007.

[16] J. Sun and J.P. Zhang, "Global convergence of conjugate gradient methods without line search," Annals of Operations Research, vol. 103, pp. 161-173, 2001.


