This article was downloaded by: [Colorado College] on 10 October 2014, at 12:46.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

International Journal of Computer Mathematics. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gcom20

To cite this article: Lipu Zhang & Yinghong Xu (2011) A new infeasible interior-point algorithm with full step for linear optimization based on a simple function, International Journal of Computer Mathematics, 88:15, 3163-3185, DOI: 10.1080/00207160.2011.597503

To link to this article: http://dx.doi.org/10.1080/00207160.2011.597503

A new infeasible interior-point algorithm with full step for linear optimization based on a simple function




International Journal of Computer Mathematics
Vol. 88, No. 15, October 2011, 3163–3185

A new infeasible interior-point algorithm with full step for linear optimization based on a simple function

Lipu Zhanga,b* and Yinghong Xuc

a: Department of Mathematics, Zhejiang A & F University, Zhejiang 311300, China; b: Department of Mathematics, Shanghai University, Shanghai 200444, China; c: Department of Mathematics, Zhejiang Sci-Tech University, Zhejiang 310018, China

(Received 22 December 2010; revised version received 28 March 2011; accepted 9 June 2011)

In this paper, we design and analyse an infeasible interior-point algorithm for linear optimization based on a simple function. The infeasible algorithm uses two types of search direction: the feasibility search direction and the centrality search direction. Both directions are determined by the simple function. The algorithm takes full steps, so no line-search procedure is needed. Although the proposed function is simple, as will be shown, the induced infeasible algorithm enjoys the best-known iteration complexity for infeasible interior-point algorithms.

Keywords: linear optimization; infeasible interior-point algorithm; full step; complexity analysis

2000 AMS Subject Classifications: 90C05; 90C25; 90C51

1. Introduction

Interior-point methods (IPMs) for solving linear optimization (LO) problems were initiated by Karmarkar [7]. They not only have polynomial complexity but are also highly efficient in practice. For a survey, we refer to recent books on the subject [20,23]. One may distinguish between feasible IPMs and infeasible IPMs (IIPMs). Feasible IPMs start with a strictly feasible interior point and maintain feasibility during the solution process. Unfortunately, it is not at all trivial to find an initial feasible interior point. One strategy to overcome this problem is to use the homogeneous embedding model, as introduced first by Ye et al. [26] for LO and further developed in [20,24]. IIPMs start with an arbitrary positive point, and feasibility is reached as optimality is approached. The first IIPMs were proposed by Lustig [9] and Tanabe [22]. Global convergence was shown by Kojima et al. [8], whereas Zhang [27] proved an O(n²L) iteration bound for IIPMs under certain conditions. Mizuno [13] introduced a primal–dual IIPM and proved global convergence of the algorithm. A detailed discussion and analysis of IIPMs can be found in the book by Wright [23].

*Corresponding author. Email: [email protected]

ISSN 0020-7160 print/ISSN 1029-0265 online
© 2011 Taylor & Francis
DOI: 10.1080/00207160.2011.597503
http://www.informaworld.com


Recently, Roos [19] designed the first primal–dual IIPM with full-Newton steps for LO problems. He also proved that the complexity of his algorithm coincides with the best-known result for IIPMs. Later on, Mansouri [10], Mansouri and Roos [11], Gu [5], Gu et al. [6] and Zhongyi et al. [28] modified the search direction of the full-Newton step IIPM and obtained complexity results similar to those given in [19]. It should be noted that all the above-mentioned algorithms adopt some kind of Newton step as the search direction.

Recently, a new class of primal–dual IPMs was introduced. These methods do not use the classical Newton direction. Instead, they use a direction that can be characterized as a steepest descent direction (in a scaled space) for some kind of barrier function [1–4,14–17]. Any such barrier function is determined by a simple univariate function, called its kernel function. The literature on using the kernel function technique to analyse IIPMs is scarce [21]. It should be noted that all the aforementioned kernel function-based methods use a damped step size.

A kernel function is a univariate strictly convex function which is defined for all positive real t and which is minimal at t = 1, where the minimal value equals 0. To use a kernel function in the design of an algorithm, many restrictions have been imposed, for example, self-regularity, exponential convexity, etc. These restrictions make the expression of the kernel function very complicated. The interested reader may refer to [1–4,14–18] for further details.

In this paper, we investigate the following kernel function:

ψs(t) = (1 − t)².  (1)

Due to its simple expression, we call ψs(t) the simple kernel function. Applying the simple kernel function to the co-ordinates, we construct a separable simple finite barrier function. As an application, based on the simple finite barrier function, we present an IIPM for LO. The algorithm takes full steps, so no line-search procedure is needed. During the algorithm, the simple finite barrier function is used both to determine the search directions and to measure the proximity of the iterates to the central path. To be more precise, the initial iterate is (very) close to the central path of a perturbed pair. After a full step (the feasibility step), one obtains a strictly feasible iterate for the next perturbed pair which is close to its central path. If necessary, by performing a few more full steps (centring steps) for the new perturbed pair, we obtain strictly feasible iterates (very) close to the central path of the new perturbed pair. The algorithm stops when the norms of the residuals and the duality gap are less than the accuracy parameter ε. We derive the iteration complexity of the algorithm; the complexity result is the best known for IIPMs for LO.
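As a concrete illustration, the simple kernel function (1) is trivial to implement. The following Python sketch is ours (not part of the paper); it evaluates ψs, the separable barrier built from it in Section 3, and the negative gradient 2(e − v) that later serves as the scaled right-hand side.

```python
import numpy as np

def psi_s(t):
    """Simple kernel function psi_s(t) = (1 - t)^2 of Equation (1)."""
    return (1.0 - t) ** 2

def barrier(v):
    """Separable barrier: sum of psi_s over the co-ordinates of v."""
    return np.sum(psi_s(v))

def neg_grad(v):
    """Negative gradient of the barrier, 2(e - v)."""
    return 2.0 * (np.ones_like(v) - v)

v = np.array([1.0, 0.5, 1.5])
print(barrier(v))   # kernel values 0, 0.25, 0.25 sum to 0.5
print(neg_grad(v))  # [ 0.  1. -1.]
```

The barrier vanishes exactly when every co-ordinate equals 1, which is the algebraic form of "the iterate lies on the central path".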

The structure of the paper is as follows. In Section 2, we introduce the LO problem and its perturbed problems. Moreover, we present the two types of full step, the centring step and the feasibility step, which are important in full-step IIPMs. In Section 3, we give new search directions for the full-step IIPM; these directions can be interpreted as a kind of steepest descent direction of a separable function, which is constructed by applying the simple function to the co-ordinates. The generic algorithm is also presented in this section. Furthermore, we investigate some useful properties of the simple kernel function; these properties will be used as tools in the analysis of the proposed IIPM. Section 4 is about the centring steps: starting from a strictly feasible iterate, after performing a few centring steps, one can obtain iterates which are very close to the central path. In some neighbourhood of the central path, the centring step has a local quadratic convergence property with respect to the proximity function. In Section 5, we analyse the feasibility step used in the algorithm, including a feasibility condition and the effect of the full step on the proximity function after the parameter update. After a feasibility step, one obtains a strictly feasible iterate for the perturbed LO problem. In Section 6, we derive the complexity of the algorithm; the analysis shows that the algorithm admits the best-known iteration bound for IIPMs. In Section 7, we report some numerical tests for the algorithm. Finally, we end the paper with some conclusions and remarks in Section 8.


Some notation used throughout the paper is as follows. The 2-norm and the infinity norm are denoted by ‖ · ‖ and ‖ · ‖∞, respectively. If x, s ∈ R^n, then xs denotes the component-wise (or Hadamard) product of the vectors x and s. min(v) and max(v) denote the minimal and maximal components of the vector v, respectively.

2. Preliminaries

In this paper, we consider the LO problem in the standard form

(P)  min{c^T x : Ax = b, x ≥ 0},

where A ∈ R^{m×n}, rank(A) = m, b ∈ R^m, c ∈ R^n, and its dual problem

(D)  max{b^T y : A^T y + s = c, s ≥ 0}.

Without loss of generality [20], we assume that (P) and (D) satisfy the interior-point condition (IPC), i.e. there exist x^0, y^0 and s^0 such that

Ax^0 = b,  x^0 > 0,  A^T y^0 + s^0 = c,  s^0 > 0.

2.1 The perturbed problems

We start by choosing arbitrary x^0 > 0 and y^0, s^0 > 0 such that x^0 s^0 = μ^0 e for some (positive) number μ^0. Denote the initial residual vectors r_b^0 and r_c^0, respectively, as

r_b^0 = b − Ax^0

and

r_c^0 = c − A^T y^0 − s^0.

For any ν with 0 < ν ≤ 1, we consider the perturbed problem (Pν), defined by

(P_ν)  min{(c − ν r_c^0)^T x : Ax = b − ν r_b^0, x ≥ 0},

and its dual problem (D_ν), which is given by

(D_ν)  max{(b − ν r_b^0)^T y : A^T y + s = c − ν r_c^0, s ≥ 0}.

Note that if ν = 1, then x = x^0 yields a strictly feasible solution of (P_ν) and (y, s) = (y^0, s^0) yields a strictly feasible solution of (D_ν), which means that both perturbed problems satisfy the well-known IPC.
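The construction above is easy to code. The following Python sketch is ours, with made-up toy data; it forms the initial residuals and the perturbed right-hand sides, and checks that for ν = 1 the starting point is strictly feasible for (P_ν) and (D_ν).

```python
import numpy as np

def perturbed_data(A, b, c, x0, y0, s0, nu):
    """Data of (P_nu)/(D_nu): shift b and c by nu times the initial residuals."""
    rb0 = b - A @ x0            # initial primal residual r_b^0
    rc0 = c - A.T @ y0 - s0     # initial dual residual r_c^0
    return b - nu * rb0, c - nu * rc0

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0, 3.0])
xi = 1.0
x0 = s0 = xi * np.ones(3); y0 = np.zeros(1)
b_nu, c_nu = perturbed_data(A, b, c, x0, y0, s0, 1.0)
# for nu = 1, x0 and (y0, s0) are feasible for the perturbed pair
print(np.allclose(A @ x0, b_nu), np.allclose(A.T @ y0 + s0, c_nu))
```

As ν decreases to 0 the perturbed data return to the original (b, c), which is how feasibility is reached as optimality is approached.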

Lemma 2.1 [25, Theorem 5.13] The original problems (P) and (D) are feasible if and only if, for each ν satisfying 0 < ν ≤ 1, the perturbed problems (P_ν) and (D_ν) satisfy the IPC.

2.2 The central path of the perturbed problems

Assuming that problems (P) and (D) are feasible, it follows from Lemma 2.1 that the perturbed problem pair (P_ν) and (D_ν) satisfies the IPC for each ν ∈ (0, 1], which guarantees that the


following system,

b − Ax = ν r_b^0,  x ≥ 0,
c − A^T y − s = ν r_c^0,  s ≥ 0,
xs = μe,  (2)

has a unique solution for every μ > 0. We denote this unique solution in the sequel by (x(μ, ν), y(μ, ν), s(μ, ν)); as a consequence, x(μ, ν) is the μ-centre of (P_ν) and (y(μ, ν), s(μ, ν)) is the μ-centre of (D_ν). In particular, (x(μ^0, 1), s(μ^0, 1)) = (x^0, s^0). In what follows, the parameters μ and ν always satisfy the relation μ = νμ^0.

2.3 Two types of full step

In what follows, we associate with any triple (x, s, μ), with x > 0, s > 0 and μ > 0, the vector

v = √(xs/μ).

Note that if (x, s) is feasible, then it coincides with the μ-centre (x(μ, ν), s(μ, ν)) if and only if v = e.
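Computationally, the vector v is a one-liner. This small Python check (ours, with toy numbers) shows that v = e exactly when xs = μe:

```python
import numpy as np

def variance_vector(x, s, mu):
    """v = sqrt(xs/mu); v = e holds exactly at the mu-centre, where xs = mu e."""
    return np.sqrt(x * s / mu)

x = np.array([2.0, 0.5]); s = np.array([0.5, 2.0]); mu = 1.0
print(variance_vector(x, s, mu))  # [1. 1.] since xs = mu e here
```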

2.3.1 Centring step

In this step, we are given a positive feasible pair (x, s) and some μ > 0. Our aim is to define search directions (Δ^c x, Δ^c s) that move towards the μ-centre (x(μ, ν), s(μ, ν)). In fact, we want the new iterates x + Δ^c x and s + Δ^c s to satisfy system (2) with respect to μ and to be positive. After substitution, this yields the following conditions on (Δ^c x, Δ^c s):

A(x + Δ^c x) = b − ν r_b^0,  x + Δ^c x > 0,
A^T(y + Δ^c y) + (s + Δ^c s) = c − ν r_c^0,  s + Δ^c s > 0,
(x + Δ^c x)(s + Δ^c s) = μe.

If we neglect for the moment the inequality constraints, then, since (x, s) is feasible for (P_ν) and (D_ν), this system can be rewritten as follows:

A Δ^c x = 0,
A^T Δ^c y + Δ^c s = 0,
x Δ^c s + s Δ^c x = μe − xs.  (3)

Because A has full row rank, the above system uniquely defines the search direction (Δ^c x, Δ^c y, Δ^c s) for any x > 0 and s > 0; this is the so-called classical Newton direction. Since this direction targets the μ-centre (x(μ, ν), s(μ, ν)), we call the resulting step the centring step.
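System (3) can be solved by eliminating Δ^c s and Δ^c x and solving a small normal-equations system for Δ^c y. The Python sketch below is ours (a standard elimination, not code from the paper):

```python
import numpy as np

def centring_direction(A, x, s, mu):
    """Solve system (3): A dx = 0, A^T dy + ds = 0, x ds + s dx = mu e - xs.

    Eliminating ds = -A^T dy and dx = (r - x*ds)/s with r = mu e - xs
    leaves the normal equations (A diag(x/s) A^T) dy = -A (r/s)."""
    r = mu * np.ones_like(x) - x * s
    M = (A * (x / s)) @ A.T          # A diag(x/s) A^T
    dy = np.linalg.solve(M, -A @ (r / s))
    ds = -A.T @ dy
    dx = (r - x * ds) / s
    return dx, dy, ds

# sanity check on random data with full row rank
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
x, s, mu = rng.uniform(0.5, 2.0, 5), rng.uniform(0.5, 2.0, 5), 0.8
dx, dy, ds = centring_direction(A, x, s, mu)
print(np.allclose(A @ dx, 0), np.allclose(A.T @ dy + ds, 0))
```

The same elimination pattern reappears for the feasibility direction, which differs from (3) only in its right-hand sides.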

Introducing the notation

Ā := AV^{−1}X,  where V := diag(v), X := diag(x),  (4)

we define the scaled search directions d^c_x and d^c_s according to

d^c_x := vΔ^c x / x  and  d^c_s := vΔ^c s / s.  (5)


Using this notation, system (3) can be rewritten as

Ā d^c_x = 0,
(1/μ) Ā^T Δ^c y + d^c_s = 0,
d^c_x + d^c_s = v^{−1} − v,  (6)

where the third equation is called the scaled centring equation. Note that d^c_x and d^c_s are orthogonal vectors, since d^c_x belongs to the null space and d^c_s to the row space of the matrix Ā. Hence, d^c_x = d^c_s = 0 holds if and only if v^{−1} − v = 0, which is equivalent to v = e. We conclude that d^c_x = d^c_s = 0 holds if and only if the pair (x, s) coincides with the μ-centre (x(μ, ν), s(μ, ν)).

2.3.2 Feasibility step

As usual, we want to decrease the duality gap and the feasibility residuals at the same rate, i.e. we reduce μ to μ+ = (1 − θ)μ and ν to ν+ = (1 − θ)ν with θ ∈ (0, 1).

From the definition of the perturbed problems (P_ν) and (D_ν), it is clear that the feasibility equations for these problems are

Ax = b − ν r_b^0,  x ≥ 0,  (7)
A^T y + s = c − ν r_c^0,  s ≥ 0,  (8)

and those of (P_ν+) and (D_ν+) are

Ax = b − ν+ r_b^0,  x ≥ 0,
A^T y + s = c − ν+ r_c^0,  s ≥ 0.

So, assuming that (x, y, s) satisfies (7) and (8), to get iterates that are feasible for (P_ν+) and (D_ν+), we need search directions (Δ^f x, Δ^f y, Δ^f s) such that

A(x + Δ^f x) = b − ν+ r_b^0,
A^T(y + Δ^f y) + (s + Δ^f s) = c − ν+ r_c^0.

Since x is feasible for (7) and (y, s) is feasible for (8), it follows that Δ^f x and (Δ^f y, Δ^f s) should satisfy

A Δ^f x = (b − Ax) − ν+ r_b^0 = ν r_b^0 − ν+ r_b^0 = θν r_b^0

and

A^T Δ^f y + Δ^f s = (c − A^T y − s) − ν+ r_c^0 = ν r_c^0 − ν+ r_c^0 = θν r_c^0.

Thus, the feasibility search direction (Δ^f x, Δ^f y, Δ^f s) can be defined by the following system:

A Δ^f x = θν r_b^0,
A^T Δ^f y + Δ^f s = θν r_c^0,
s Δ^f x + x Δ^f s = μe − xs.  (9)
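Since system (9) differs from (3) only in its right-hand sides, the same block elimination applies. A Python sketch (ours, the same normal-equations approach as for the centring system, on random toy data):

```python
import numpy as np

def feasibility_direction(A, x, s, mu, theta, nu, rb0, rc0):
    """Solve system (9): A dx = theta*nu*rb0, A^T dy + ds = theta*nu*rc0,
    s dx + x ds = mu e - xs, by block elimination as for system (3)."""
    r = mu * np.ones_like(x) - x * s
    gb, gc = theta * nu * rb0, theta * nu * rc0
    M = (A * (x / s)) @ A.T                     # A diag(x/s) A^T
    dy = np.linalg.solve(M, gb - A @ ((r - x * gc) / s))
    ds = gc - A.T @ dy
    dx = (r - x * ds) / s
    return dx, dy, ds

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))
x, s = rng.uniform(0.5, 2.0, 5), rng.uniform(0.5, 2.0, 5)
rb0, rc0 = rng.standard_normal(2), rng.standard_normal(5)
dx, dy, ds = feasibility_direction(A, x, s, 0.8, 0.1, 0.5, rb0, rc0)
print(np.allclose(A @ dx, 0.1 * 0.5 * rb0),
      np.allclose(A.T @ dy + ds, 0.1 * 0.5 * rc0))
```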

Since the matrix A has full row rank, system (9) uniquely defines (Δ^f x, Δ^f y, Δ^f s) for any x > 0 and s > 0. Denoting

d^f_x := vΔ^f x / x,  d^f_s := vΔ^f s / s,  (10)


where v = √(xs/μ), one may easily check that system (9), which defines the search directions (Δ^f x, Δ^f y, Δ^f s), can be expressed as follows:

Ā d^f_x = θν r_b^0,
(1/μ) Ā^T Δ^f y + d^f_s = θν v s^{−1} r_c^0,
d^f_x + d^f_s = v^{−1} − v,  (11)

where V, X and Ā are defined in Equation (4).

3. The new search direction and the algorithm

Letting ψs(t) be defined as in Equation (1), one has

ψ′s(t) = −2(1 − t)  and  ψ′′s(t) = 2.  (12)

It is easily verified that ψs(1) = ψ′s(1) = 0 and that ψs(t) is strictly convex. Applying the simple function to the co-ordinates, we construct a separable simple barrier function

Ψs(v) = Σ_{i=1}^n ψs(v_i) = Σ_{i=1}^n (1 − v_i)².  (13)

It is easily verified that Ψs(v) is strictly convex. Moreover, since ∇Ψs(e) = 0, it follows that Ψs(v) attains its minimal value at v = e. Thus, Ψs(v) is non-negative everywhere and vanishes if and only if v = e, i.e. if and only if x = x(μ, ν) and s = s(μ, ν) for (P_ν) and (D_ν). The μ-centres of (P_ν) and (D_ν) can therefore be characterized as the minimizers of Ψs(v).

3.1 Two new directions

For the centring steps, we replace the right-hand side of the scaled centring equation in Equation (6) by −∇Ψs(v) and obtain

Ā d^c_x = 0,
(1/μ) Ā^T Δ^c y + d^c_s = 0,
d^c_x + d^c_s = −∇Ψs(v).  (14)

The search directions d^c_x and d^c_s are obtained by solving Equation (14), and Δ^c x and Δ^c s are then computed via Equation (5). Similarly, we replace the right-hand side of the scaled centring equation in Equation (11) by −∇Ψs(v) and obtain

Ā d^f_x = θν r_b^0,
(1/μ) Ā^T Δ^f y + d^f_s = θν v s^{−1} r_c^0,
d^f_x + d^f_s = −∇Ψs(v).  (15)

The search directions d^f_x and d^f_s are obtained by solving Equation (15), and Δ^f x and Δ^f s are then computed via Equation (10). Thus, we obtain new search directions for the centring steps and the feasibility step, respectively.


3.2 Proximity function

To measure the quality of any approximation (x, s) of (x(μ, ν), s(μ, ν)), we introduce a proximity function Ψs(x, s; μ) that vanishes if (x, s) = (x(μ, ν), s(μ, ν)) and is positive otherwise. Since Ψs(v) is strictly convex and minimal at v = e, we have

Ψs(v) = 0 ⇔ v = e.

It is clear that the barrier function Ψs(v) is a natural candidate for measuring the closeness to the pair of μ-centres. Accordingly, we set

Ψs(x, s; μ) = Ψs(v).

3.3 Description of the full-step IIPM

As usual for IIPMs, we assume that the initial iterates are

x^0 = s^0 = ξe,  y^0 = 0  and  μ^0 = ξ²,  (16)

where μ^0 is the initial parameter and ξ > 0 is such that

‖x* + s*‖∞ ≤ ξ  (17)

for some optimal solution (x*, y*, s*). Initially, x^0 = s^0 = ξe and μ^0 = ξ², whence v^0 = e and Ψs(v^0) = 0. In the sequel, it is assumed that at the start of each iteration, Ψs(v) is smaller than or equal to a (small) threshold value τ > 0. This is certainly true at the start of the first iteration.

Now, we describe one (main) iteration of our algorithm. Suppose that for some μ ∈ (0, μ^0), we have (x, s) satisfying the feasibility conditions of (P_ν) and (D_ν) for ν = μ/μ^0, with x^T s ≤ nμ and Ψs(x, s; μ) ≤ τ. We reduce μ to μ+ = (1 − θ)μ, with θ ∈ (0, 1), and find new iterates (x+, s+) that are feasible for (P_ν+) and (D_ν+), with ν+ = μ+/μ^0, and such that (x+)^T s+ ≤ nμ+ and Ψs(x+, s+; μ+) ≤ τ. Note that ν+ = (1 − θ)ν.

To be more precise, this is achieved as follows. Each main iteration consists of a feasibility step and a few centring steps. The feasibility step serves to generate strictly feasible iterates for (P_ν+) and (D_ν+) that are close to the μ+-centre (x(μ+, ν+), s(μ+, ν+)). After completing the feasibility step, if necessary, the algorithm performs a few centring steps for (P_ν+) and (D_ν+) to obtain strictly feasible iterates close enough to the μ+-centre (x(μ+, ν+), s(μ+, ν+)), i.e. such that the condition Ψs(x, s; μ) ≤ τ is restored. The algorithm stops when the norms of the residuals and the duality gap are less than the accuracy parameter ε.

3.4 The algorithm

In the case of an infeasible method, we call the triple (x, y, s) an ε-solution of (P) and (D) if the 2-norms of the residual vectors b − Ax and c − A^T y − s do not exceed ε, and also x^T s ≤ ε. (Δ^f x, Δ^f s) denotes the feasibility step, while (Δ^c x, Δ^c s) denotes the centring step.

Algorithm 1. Full-step IIPM for LO based on the simple function

Input:
a threshold parameter τ > 0;
an accuracy parameter ε > 0;


a fixed barrier update parameter θ, 0 < θ < 1;
a bound parameter ξ > 0.

begin
x := ξe; s := ξe; y := 0; μ := ξ²; ν := 1;
while max{x^T s, ‖b − Ax‖, ‖c − A^T y − s‖} ≥ ε do
begin
feasibility step: solve (15) and set
(x, s) := (x, s) + (Δ^f x, Δ^f s);
μ- and ν-update:
μ := (1 − θ)μ, ν := (1 − θ)ν;
centring steps:
while Ψs(x, s; μ) > τ do
solve (14) and set
(x, s) := (x, s) + (Δ^c x, Δ^c s);
endwhile
end
end
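To make the listing concrete, here is a compact Python sketch of the same loop (our illustration, not the authors' code) on a tiny made-up LP. In the original variables, the scaled equation d_x + d_s = −∇Ψs(v) = 2(e − v) of systems (14) and (15) becomes s Δx + x Δs = 2(√(μxs) − xs), which is what the solver below uses; the values of θ and τ are ad hoc choices for this example, not the theoretically required ones.

```python
import numpy as np

def barrier(v):                      # Psi_s(v), Equation (13)
    return np.sum((1.0 - v) ** 2)

def solve_direction(A, x, s, gb, gc, gm):
    """Block elimination for A dx = gb, A^T dy + ds = gc, s dx + x ds = gm."""
    M = (A * (x / s)) @ A.T
    dy = np.linalg.solve(M, gb - A @ ((gm - x * gc) / s))
    ds = gc - A.T @ dy
    dx = (gm - x * ds) / s
    return dx, dy, ds

def full_step_iipm(A, b, c, xi, theta=0.1, tau=0.25, eps=1e-6, max_iter=1000):
    m, n = A.shape
    x = xi * np.ones(n); s = xi * np.ones(n); y = np.zeros(m)
    mu, nu = xi ** 2, 1.0
    rb0, rc0 = b - A @ x, c - A.T @ y - s      # initial residuals
    for _ in range(max_iter):
        if max(x @ s, np.linalg.norm(b - A @ x),
               np.linalg.norm(c - A.T @ y - s)) < eps:
            break
        # feasibility step, system (15): scaled RHS 2(e - v)
        gm = 2.0 * (np.sqrt(mu * x * s) - x * s)
        dx, dy, ds = solve_direction(A, x, s, theta * nu * rb0,
                                     theta * nu * rc0, gm)
        x, y, s = x + dx, y + dy, s + ds
        mu *= 1.0 - theta; nu *= 1.0 - theta   # mu- and nu-update
        while barrier(np.sqrt(x * s / mu)) > tau:   # centring steps, system (14)
            gm = 2.0 * (np.sqrt(mu * x * s) - x * s)
            dx, dy, ds = solve_direction(A, x, s, np.zeros(m), np.zeros(n), gm)
            x, y, s = x + dx, y + dy, s + ds
    return x, y, s

# toy LP: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0; optimum x* = (1, 0)
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, y, s = full_step_iipm(A, b, c, xi=1.0)
print(np.round(x, 4))   # x approaches the optimum (1, 0)
```

Note how infeasibility shrinks at exactly the rate of ν: the iterate is always feasible for the current perturbed pair, never for (P) itself until the limit.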

The most important issue in Algorithm 1 is how to choose the parameters that control the algorithm, i.e. the threshold parameter τ and the barrier update parameter θ, so as to minimize the iteration complexity.

3.5 Some more properties of the simple function

It is straightforward that

ψs(t) = (1/4) ψ′s(t)²  and  −∇Ψs(v) = 2(e − v).  (18)

The following lemma provides a lower bound for the components of v and an upper bound for the 2-norm of v in terms of Ψs(v).

Lemma 3.1 One has

(i) min(v) ≥ 1 − √Ψs(v),
(ii) ‖v‖ ≤ √n + √Ψs(v).

Proof For (i), using Equations (13) and (18), one has

Ψs(v) = Σ_{i=1}^n ψs(v_i) = (1/4) Σ_{i=1}^n ψ′s(v_i)² = (1/4) ‖∇Ψs(v)‖².  (19)

Thus,

2√Ψs(v) = ‖∇Ψs(v)‖ ≥ |ψ′s(min(v))| ≥ −ψ′s(min(v)).

Using Equation (12), one has

ψ′s(min(v)) = −2(1 − min(v)),

thus

√Ψs(v) ≥ 1 − min(v).


This completes (i). For (ii), using Equation (13), one has

Ψs(v) = Σ_{i=1}^n (1 − v_i)² = ‖v‖² − 2e^T v + n ≥ (‖v‖ − ‖e‖)²,

where the inequality follows from the Cauchy–Schwarz inequality e^T v ≤ ‖e‖ ‖v‖ together with ‖e‖² = n. This implies

‖v‖ ≤ ‖e‖ + √Ψs(v) = √n + √Ψs(v),

which completes the proof of (ii). □
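Both bounds of Lemma 3.1 are easy to sanity-check numerically. The following Python snippet (ours) verifies them on random positive vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.uniform(0.05, 3.0, size=5)      # arbitrary positive vector v
    psi = np.sum((1.0 - v) ** 2)            # Psi_s(v), Equation (13)
    assert v.min() >= 1.0 - np.sqrt(psi) - 1e-12                    # (i)
    assert np.linalg.norm(v) <= np.sqrt(5) + np.sqrt(psi) + 1e-12   # (ii)
print("Lemma 3.1 bounds hold on 1000 random samples")
```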

4. Analysis of the centring step

The aims of this section are to find a condition for strict feasibility and to derive a local quadratic convergence property for the centring step. For these purposes, some tools are needed.

4.1 Some useful tools

It follows from Equation (13), system (14) and Equations (18) and (19) that

Ψs(v) = (1/4) ‖d^c_x + d^c_s‖².  (20)

We denote

p_v = d^c_x + d^c_s  and  q_v = d^c_x − d^c_s.  (21)

Thus, one has

Ψs(v) = (1/4) ‖p_v‖².  (22)

Lemma 4.1 Let p_v and q_v be defined as in Equation (21). Then

‖p_v‖ = ‖q_v‖.

Proof Using the orthogonality of d^c_x and d^c_s and Equation (21), we have

‖p_v‖² = e^T (d^c_x + d^c_s)²
= ‖d^c_x‖² + 2(d^c_x)^T d^c_s + ‖d^c_s‖²
= ‖d^c_x‖² − 2(d^c_x)^T d^c_s + ‖d^c_s‖²
= e^T (d^c_x − d^c_s)²
= ‖q_v‖²,

which completes the proof. □


4.2 The feasibility

We denote the result of a full centring step given by system (14) at (x, y, s) by (x+, y+, s+), i.e.

x+ := x + Δ^c x,  y+ := y + Δ^c y,  s+ := s + Δ^c s.

Using Equation (5), one has

x+ = x + Δ^c x = x(e + d^c_x/v) = (x/v)(v + d^c_x)

and

s+ = s + Δ^c s = s(e + d^c_s/v) = (s/v)(v + d^c_s).

We associate with the triple (x+, s+, μ), where x+ > 0, s+ > 0 and μ > 0, the vector

v+ = √(x+s+/μ).  (23)

Recall that, by the third equation of system (14), we have d^c_x + d^c_s = −∇Ψs(v). It follows from Equation (18) that

(v+)² = x+s+/μ
= (v + d^c_x)(v + d^c_s)
= v² + v(d^c_x + d^c_s) + d^c_x d^c_s
= e − (e − v)² + d^c_x d^c_s.  (24)

Thus, the condition for strict feasibility of the centring step follows.

Lemma 4.2 The new iterates (x+, y+, s+) are strictly feasible if

Ψs(v) < 1.

Proof We introduce a step length α ∈ [0, 1] and define

x(α) = x + αΔ^c x,  y(α) = y + αΔ^c y,  s(α) = s + αΔ^c s.

We then have x(0) = x, x(1) = x+ and similarly s(0) = s, s(1) = s+. Hence,

x(0)s(0) = xs > 0.

We write

x(α)s(α) = (x + αΔ^c x)(s + αΔ^c s) = xs + α(xΔ^c s + sΔ^c x) + α² Δ^c x Δ^c s.

Thus,

x(α)s(α)/μ = v² + αv(d^c_x + d^c_s) + α² d^c_x d^c_s.


Using Equation (18), i.e. v(d^c_x + d^c_s) = 2(v − v²), we obtain

x(α)s(α)/μ = (1 − α)v² + α(e − (v − e)²) + α² d^c_x d^c_s.

Therefore, the components of x(α)s(α) will be positive if

(1 − α)v² + α(e − (v − e)²) + α² d^c_x d^c_s > 0.  (25)

Since α ∈ [0, 1], we have (1 − α)v² ≥ 0. Thus, inequality (25) holds if

e − (v − e)² + α d^c_x d^c_s > 0.  (26)

Using Equation (21), and by Equation (12) and system (14), one has

p_v = 2(e − v)  and  p_v² − q_v² = 4 d^c_x d^c_s.  (27)

Thus, inequality (26) can be written as

e − p_v²/4 + α(p_v² − q_v²)/4 > 0,

which means

(1 − α) p_v²/4 + α q_v²/4 < e.

Thus, one concludes that the components of x(α)s(α) will be positive if

max((1 − α) p_v²/4 + α q_v²/4) < 1.  (28)

Since

‖(1 − α) p_v²/4 + α q_v²/4‖ ≤ (1 − α) ‖p_v²‖/4 + α ‖q_v²‖/4 ≤ (1 − α) ‖p_v‖²/4 + α ‖q_v‖²/4 = ‖p_v‖²/4,

where the last equality follows from Lemma 4.1, and using Equation (22), we have

‖(1 − α) p_v²/4 + α q_v²/4‖ ≤ Ψs(v).

Assuming that Ψs(v) < 1, one has

‖(1 − α) p_v²/4 + α q_v²/4‖ < 1.

Since

max((1 − α) p_v²/4 + α q_v²/4) ≤ ‖(1 − α) p_v²/4 + α q_v²/4‖,

we conclude that

max((1 − α) p_v²/4 + α q_v²/4) < 1

certainly holds. Hence, the components of x(α)s(α) are positive for α ∈ [0, 1]. Since x(0) and s(0) are positive and since x(α) and s(α) depend continuously on α, it follows that x(1) and s(1) are also positive. This completes the proof. □


4.3 Duality gap

In general, the new iterates x+ and s+ do not coincide with the μ-centres. However, we have the surprising property that the duality gap is not larger than its value at the μ-centres, where the duality gap equals nμ.

Lemma 4.3 The duality gap after a full centring step satisfies

(x+)^T s+ ≤ μn.

Proof It follows from Equations (24) and (27) that

(v+)² = e − (v − e)² + d^c_x d^c_s = e − p_v²/4 + (p_v² − q_v²)/4 = e − q_v²/4.  (29)

Using Equation (23) gives

(x+)^T s+ = μ e^T (v+)² = μ e^T (e − q_v²/4) = μ (n − ‖q_v‖²/4) ≤ μn,

which completes the proof. □
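Equation (29) makes this easy to check numerically. In the Python snippet below (ours; the vectors are synthetic stand-ins for the scaled directions, constructed only to satisfy p_v = 2(e − v) from Equation (27) and ‖q_v‖ = ‖p_v‖ from Lemma 4.1), the resulting gap μ e^T (v+)² never exceeds μn:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu = 6, 0.5
v = 1.0 + rng.uniform(-0.2, 0.2, size=n)   # an iterate close to the mu-centre
p_v = 2.0 * (1.0 - v)                      # p_v = 2(e - v), Equation (27)
q_v = rng.standard_normal(n)
q_v *= np.linalg.norm(p_v) / np.linalg.norm(q_v)   # enforce ||q_v|| = ||p_v||
v_plus_sq = 1.0 - q_v ** 2 / 4.0           # (v+)^2 = e - q_v^2/4, Equation (29)
gap = mu * v_plus_sq.sum()                 # (x+)^T s+ = mu e^T (v+)^2
print(gap <= mu * n)                       # Lemma 4.3 -> True
```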

4.4 Quadratic convergence

To quantify the effect of a full centring step on the proximity function, the following lemma is needed.

Lemma 4.4 Let Ψs(v) < 1. Then, one has

min(v+) ≥ √(1 − Ψs(v)).

Proof Using Equation (29), one has

min((v+)²) = min(e − q_v²/4) ≥ 1 − max(q_v²)/4.

Since, by Equation (22) and Lemma 4.1, one has

Ψs(v) = ‖q_v‖²/4 ≥ max(q_v²)/4,

the inequality

min((v+)²) ≥ 1 − Ψs(v)

certainly holds. This completes the proof. □

Theorem 4.5 If Ψs(v) < 1 and v+ is defined as in Equation (23), then we have

Ψs(v+) < Ψs(v)².


Proof Using Equation (29), we have

Σ_{i=1}^n (1 − v+_i)² = Σ_{i=1}^n (1 − (v+_i)²)²/(1 + v+_i)²
≤ ‖e − (v+)²‖²/(1 + min(v+))²
≤ (‖q_v²‖/4)²/(1 + √(1 − Ψs(v)))²
≤ (‖q_v‖²/4)²/(1 + √(1 − Ψs(v)))².

Thus, by Equation (22) and Lemma 4.1, we have

Ψs(v+) = Σ_{i=1}^n (1 − v+_i)² ≤ Ψs(v)²/(1 + √(1 − Ψs(v)))² < Ψs(v)²,

and the result follows. □

Remark 4.6 Theorem 4.5 implies that after a full centring step, the proximity to the μ-centre is smaller than the square of the proximity before the step. In other words, the full step is quadratically convergent. Moreover, the theorem defines a neighbourhood of the μ-centre where the quadratic convergence occurs, namely

Ψs(v) < 1.

This result is extremely important. It implies that when the present iterate is close to the μ-centre, only a small number of full steps brings us very close to the μ-centre.
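The quadratic decrease is visible numerically. The Python sketch below is ours; it performs full centring steps of system (14) on a small made-up feasible pair with Ψs(v) < 1. In the original variables the scaled equation d^c_x + d^c_s = 2(e − v) reads s Δ^c x + x Δ^c s = 2(√(μxs) − xs).

```python
import numpy as np

def barrier(v):                               # Psi_s(v), Equation (13)
    return np.sum((1.0 - v) ** 2)

def centring_step(A, x, y, s, mu):
    """One full centring step: A dx = 0, A^T dy + ds = 0,
    s dx + x ds = 2(sqrt(mu*x*s) - x*s)."""
    r = 2.0 * (np.sqrt(mu * x * s) - x * s)
    M = (A * (x / s)) @ A.T
    dy = np.linalg.solve(M, -A @ (r / s))
    ds = -A.T @ dy
    dx = (r - x * ds) / s
    return x + dx, y + dy, s + ds

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0]); c = np.array([1.0, 2.0, 3.0])
x = np.array([1.2, 1.0, 0.8]); y = np.array([0.0]); s = c - A.T @ y  # feasible pair
mu = 1.0
for _ in range(4):
    print(barrier(np.sqrt(x * s / mu)))       # drops (at least) quadratically
    x, y, s = centring_step(A, x, y, s, mu)
```

Since A d^c_x = 0 and A^T d^c_y + d^c_s = 0, primal and dual feasibility are preserved exactly at every step; only the proximity changes.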

Remark 4.7 In our analysis of the IIPM, after the feasibility step, we force the new iterate to lie in the neighbourhood where the quadratic convergence occurs. Thus, we can perform several centring steps to obtain an iterate very close to the μ-centre.

5. Analysis of the feasibility step

We denote the result of the feasibility step given by system (15) at (x, y, s) by (x^f, y^f, s^f), i.e.

x^f := x + Δ^f x,  y^f := y + Δ^f y,  s^f := s + Δ^f s.

Using Equation (10), we may also write

x^f = x + Δ^f x = x + x d^f_x/v = (x/v)(v + d^f_x),   (30)

s^f = s + Δ^f s = s + s d^f_s/v = (s/v)(v + d^f_s).   (31)



5.1 The feasibility

To guarantee that the new iterates after the feasibility step are strictly feasible, the following lemma is needed.

Lemma 5.1 The new iterates (x^f, y^f, s^f) are strictly feasible if

‖d^f_x d^f_s‖_∞ < 1 − Φ_s(v).

Proof For the proof, we introduce a step length α ∈ [0, 1] and define

x^f(α) = x + αΔ^f x,  y^f(α) = y + αΔ^f y,  s^f(α) = s + αΔ^f s.

We then have x^f(0) = x, x^f(1) = x^f and s^f(0) = s, s^f(1) = s^f. Hence, we have

x^f(0)s^f(0) = xs > 0.

Since

x^f(α)s^f(α)/μ = (v + α d^f_x)(v + α d^f_s) = v^2 + α v(d^f_x + d^f_s) + α^2 d^f_x d^f_s,

and since system (15) together with Equation (18) gives v(d^f_x + d^f_s) = 2(v − v^2), we obtain

x^f(α)s^f(α)/μ = (1 − α)v^2 + α(e − (v − e)^2) + α^2 d^f_x d^f_s.

Therefore, the components of x^f(α)s^f(α) will be positive if

(1 − α)v^2 + α(e − (v − e)^2) + α^2 d^f_x d^f_s > 0.   (32)

Since α ∈ [0, 1], we have (1 − α)v^2 ≥ 0. Thus, inequality (32) holds whenever

e − (v − e)^2 + α d^f_x d^f_s > 0,

and for this it suffices that

1 − max((v − e)^2) − ‖d^f_x d^f_s‖_∞ > 0.   (33)

Recall that

max((v − e)^2) ≤ ∑_{i=1}^n (1 − v_i)^2 = Φ_s(v).

Hence, for inequality (33) it suffices that

1 − Φ_s(v) − ‖d^f_x d^f_s‖_∞ > 0.

Now, suppose ‖d^f_x d^f_s‖_∞ < 1 − Φ_s(v); then the components of x^f(α)s^f(α) are certainly positive for α ∈ [0, 1]. Since x^f(0) and s^f(0) are positive and since x^f(α) and s^f(α) depend continuously on α, it follows that x^f(1) and s^f(1) are also positive. This completes the proof. □
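Lemma 5.1 gives a directly checkable sufficient condition. A minimal sketch (with hypothetical scaled directions d^f_x, d^f_s and a hypothetical variance vector v as inputs):

```python
def strictly_feasible_after_step(v, dfx, dfs):
    """Sufficient condition of Lemma 5.1 for strict feasibility of the
    iterates after a full feasibility step:
        ||dfx * dfs||_inf < 1 - Phi_s(v),
    where Phi_s(v) = sum_i (1 - v_i)^2 and the product is componentwise."""
    phi = sum((1.0 - vi) ** 2 for vi in v)
    inf_norm = max(abs(a * b) for a, b in zip(dfx, dfs))
    return inf_norm < 1.0 - phi

# hypothetical data: v close to e, small componentwise products -> feasible
ok = strictly_feasible_after_step([0.9, 1.1, 1.0],
                                  [0.2, -0.1, 0.3],
                                  [0.1, 0.2, -0.2])
assert ok
```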



In the sequel, we denote

ω(v) := (1/2)√(‖d^f_x‖^2 + ‖d^f_s‖^2).

It is trivial that ‖d^f_x‖ ≤ 2ω(v) and ‖d^f_s‖ ≤ 2ω(v); moreover,

‖d^f_x d^f_s‖ ≤ (1/2)(‖d^f_x‖^2 + ‖d^f_s‖^2) = 2ω(v)^2,   (34)

‖d^f_x d^f_s‖_∞ ≤ (1/2)(‖d^f_x‖^2 + ‖d^f_s‖^2) = 2ω(v)^2.   (35)

In what follows, we denote

v^f = √(x^f s^f/μ)  and  Φ_s(x^f, s^f; μ^+) = Φ_s(v^f/√(1 − θ)).

Using Equations (30) and (31), we have

(v^f)^2 = (v + d^f_x)(v + d^f_s) = v^2 + v(d^f_x + d^f_s) + d^f_x d^f_s = e − (v − e)^2 + d^f_x d^f_s.   (36)

Lemma 5.2 If ω(v) < √(2(1 − Φ_s(v)))/2, then the iterates (x^f, y^f, s^f) are strictly feasible. Moreover, one has

min(v^f) ≥ √(1 − Φ_s(v) − 2ω(v)^2).

Proof Let ω(v) < √(2(1 − Φ_s(v)))/2. Then, Equation (35) implies that ‖d^f_x d^f_s‖_∞ < 1 − Φ_s(v), so the first statement follows from Lemma 5.1.

By Equation (36), we have

min(v^f)^2 = min(e − (v − e)^2 + d^f_x d^f_s) ≥ 1 − max((v − e)^2) − ‖d^f_x d^f_s‖_∞.

Since max((v − e)^2) ≤ Φ_s(v) and by Equation (35), the second statement easily follows. □

Assuming ω(v) < √(2(1 − Φ_s(v)))/2, which guarantees the strict feasibility of the iterates (x^f, y^f, s^f), we proceed to derive an upper bound for Φ_s(x^f, s^f; μ^+).

5.2 An upper bound for the proximity function after a feasibility step

Theorem 5.3 If the new iterates (x^f, y^f, s^f) are strictly feasible, then one has

Φ_s(x^f, s^f; μ^+) ≤ (1/(1 − θ)) · (θ√n + Φ_s(v) + 2ω(v)^2)^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2.



Proof Using Equations (34) and (36) and Lemma 5.2, and applying the triangle and Cauchy–Schwarz inequalities, we have

∑_{i=1}^n (√(1 − θ) − v^f_i)^2 = ∑_{i=1}^n (1 − θ − (v^f_i)^2)^2 / (√(1 − θ) + v^f_i)^2
  ≤ [∑_{i=1}^n (−θ + 1 − (v^f_i)^2)^2] / (√(1 − θ) + min(v^f))^2
  ≤ (√(∑_{i=1}^n θ^2) + √(∑_{i=1}^n (1 − (v^f_i)^2)^2))^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2
  = (θ√n + √(∑_{i=1}^n ((v_i − 1)^2 − d^f_{x_i} d^f_{s_i})^2))^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2
  ≤ (θ√n + √(∑_{i=1}^n (v_i − 1)^4) + √(∑_{i=1}^n (d^f_{x_i} d^f_{s_i})^2))^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2
  ≤ (θ√n + ∑_{i=1}^n (v_i − 1)^2 + ‖d^f_x d^f_s‖)^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2
  ≤ (θ√n + Φ_s(v) + 2ω(v)^2)^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2.

Since

Φ_s(x^f, s^f; μ^+) = Φ_s(v^f/√(1 − θ)) = ∑_{i=1}^n (1 − v^f_i/√(1 − θ))^2 = (1/(1 − θ)) ∑_{i=1}^n (√(1 − θ) − v^f_i)^2,

one concludes that

Φ_s(x^f, s^f; μ^+) ≤ (1/(1 − θ)) · (θ√n + Φ_s(v) + 2ω(v)^2)^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2.

This completes the proof. □
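The bound of Theorem 5.3 is easy to evaluate numerically. A small helper (a sketch; the function name is ours, and the input values are hypothetical but within the admissible range):

```python
import math

def proximity_bound_after_feasibility_step(n, theta, phi, omega_sq):
    """Upper bound of Theorem 5.3 for Phi_s(x^f, s^f; mu^+), given the
    dimension n, barrier-update parameter theta, proximity phi = Phi_s(v)
    and omega_sq = omega(v)^2.  Requires 1 - phi - 2*omega_sq >= 0."""
    num = (theta * math.sqrt(n) + phi + 2.0 * omega_sq) ** 2
    den = (math.sqrt(1.0 - theta)
           + math.sqrt(1.0 - phi - 2.0 * omega_sq)) ** 2
    return num / ((1.0 - theta) * den)

# hypothetical worst-case values (cf. the parameter choices of Section 5.3)
b = proximity_bound_after_feasibility_step(n=1, theta=1.0 / 6.0,
                                           phi=1.0 / 16.0, omega_sq=0.3670)
assert b < 1.0  # the new iterate stays in the quadratic-convergence region
```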

We proceed by considering the value ω(v) in more detail.

5.2.1 An upper bound for ω(v)

Denoting the null space of the matrix A by L, we have

L := {ξ ∈ R^n : Aξ = 0}.



Obviously, the affine space {ξ ∈ R^n : Aξ = θν r_b^0} equals d^f_x + L. Note that, due to a well-known result from linear algebra, the row space of A equals the orthogonal complement L^⊥ of L. Therefore, d^f_s ∈ θν v s^{−1} r_c^0 + L^⊥. Also note that L ∩ L^⊥ = {0}, and as a consequence the affine spaces d^f_x + L and d^f_s + L^⊥ meet in a unique point. This point is denoted by q.

Lemma 5.4 Let q be the (unique) point in the intersection of the affine spaces d^f_x + L and d^f_s + L^⊥. Then,

2ω(v) ≤ √(‖q‖^2 + (‖q‖ + 2√(Φ_s(v)))^2).

Proof To simplify the notation, we denote r = −∇Φ_s(v). Since L^⊥ + L = R^n, there exist q_1, r_1 ∈ L and q_2, r_2 ∈ L^⊥ such that

q = q_1 + q_2,  r = r_1 + r_2.

On the other hand, since d^f_x − q ∈ L and d^f_s − q ∈ L^⊥, there must exist l_1 ∈ L and l_2 ∈ L^⊥ such that

d^f_x = q + l_1,  d^f_s = q + l_2.

From the third equation of system (15), it follows that r = 2q + l_1 + l_2, which implies

(2q_1 + l_1) + (2q_2 + l_2) = r_1 + r_2.

Since the decomposition L^⊥ + L = R^n is unique, we conclude that

l_1 = r_1 − 2q_1,  l_2 = r_2 − 2q_2.

Hence, we obtain

d^f_x = q + r_1 − 2q_1 = (r_1 − q_1) + q_2,

d^f_s = q + r_2 − 2q_2 = (r_2 − q_2) + q_1.

Since the spaces L and L^⊥ are orthogonal, we conclude that

4ω(v)^2 = ‖d^f_x‖^2 + ‖d^f_s‖^2 = ‖r_1 − q_1‖^2 + ‖q_2‖^2 + ‖q_1‖^2 + ‖r_2 − q_2‖^2 = ‖q − r‖^2 + ‖q‖^2.

Assuming q ≠ 0, since ‖r‖ = ‖∇Φ_s(v)‖, the right-hand side is maximal if r = −(‖∇Φ_s(v)‖/‖q‖)q; thus we obtain

4ω(v)^2 ≤ ‖(1 + ‖∇Φ_s(v)‖/‖q‖)q‖^2 + ‖q‖^2 = (‖q‖ + ‖∇Φ_s(v)‖)^2 + ‖q‖^2 = ‖q‖^2 + (‖q‖ + 2√(Φ_s(v)))^2,

where the last equality follows from Equation (19). This implies the inequality of the lemma if q ≠ 0. Since the inequality in the lemma holds with equality if q = 0, this completes the proof. □
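The orthogonal decomposition used in the proof can be made concrete with numpy: any vector splits uniquely into its projections onto L = null(A) and L^⊥ = row(A). A minimal sketch (the matrix A and the vector r are hypothetical; A is assumed to have full row rank):

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])           # hypothetical full-row-rank matrix
P_row = A.T @ np.linalg.inv(A @ A.T) @ A   # orthogonal projector onto row(A) = L_perp
P_null = np.eye(A.shape[1]) - P_row        # orthogonal projector onto null(A) = L

r = np.array([0.3, 0.1, -0.2])             # hypothetical vector, e.g. r = -grad Phi_s(v)
r1, r2 = P_null @ r, P_row @ r             # unique split r = r1 + r2, r1 in L, r2 in L_perp

assert np.allclose(r1 + r2, r)             # the split reconstructs r
assert np.allclose(A @ r1, 0.0)            # r1 lies in the null space of A
assert np.isclose(r1 @ r2, 0.0)            # the two parts are orthogonal
```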



5.2.2 An upper bound for ‖q‖

We proceed to derive an upper bound for ‖q‖. Before doing this, we choose the initial point in the usual way, as defined in Equation (16).

Recall from Lemma 5.4 that q is the (unique) solution of the system

Aq = θν r_b^0,
A^T ξ + q = θν v s^{−1} r_c^0,

which is the same as the condition of Lemma 2.6 of Mansouri et al. [12].

Lemma 5.5 [12, Lemma 2.6] Let (x^0, y^0, s^0) be an initial point as defined in Equations (16) and (17). Then we have

‖q‖ ≤ (θ/(ξ min(v)))(‖x‖_1 + ‖s‖_1).

The next lemma is a part of the proof of Lemma 4.3 in [6]. We also give a simple proof here.

Lemma 5.6 Let x and (y, s) be feasible for the perturbed problems (P_ν) and (D_ν), respectively, and let (x^0, y^0, s^0) be as defined in Equation (16). Then, for any primal–dual optimal solution (x^*, y^*, s^*), we have

ν(x^T s^0 + s^T x^0) = s^T x + ν^2 (s^0)^T x^0 + ν(1 − ν)((s^0)^T x^* + (x^0)^T s^*) − (1 − ν)(s^T x^* + x^T s^*).

Proof Let (x^*, y^*, s^*) be an optimal solution satisfying Equation (17). Then, from the first two equations of system (8), i.e. the feasibility conditions of the perturbed problems (P_ν) and (D_ν), it is easily seen that

A[x − νx^0 − (1 − ν)x^*] = 0,
A^T[y − νy^0 − (1 − ν)y^*] + [s − νs^0 − (1 − ν)s^*] = 0.

This implies that x − νx^0 − (1 − ν)x^* and s − νs^0 − (1 − ν)s^* belong to the null space and the row space of A, respectively. Thus,

[x − νx^0 − (1 − ν)x^*]^T [s − νs^0 − (1 − ν)s^*] = 0.

Expanding this equality and using the fact that (x^*)^T s^* = 0, the result easily follows. □

We now give an upper bound for ‖x‖_1 + ‖s‖_1 in terms of Φ_s(v) as follows.

Lemma 5.7 Let x and (y, s) be feasible for the perturbed problems (P_ν) and (D_ν), respectively, and let x^0 = s^0 = ξe, where ξ > 0 is a constant such that ‖x^* + s^*‖_∞ ≤ ξ for some primal–dual optimal solution (x^*, y^*, s^*). Then, we have

‖x‖_1 + ‖s‖_1 ≤ ((√n + √(Φ_s(v)))^2 + n)ξ.



Proof Since x, s, x^* and s^* are non-negative, Lemma 5.6 implies that

x^T s^0 + s^T x^0 ≤ s^T x/ν + ν(s^0)^T x^0 + (1 − ν)((s^0)^T x^* + (x^0)^T s^*).   (37)

Since x^0 = s^0 = ξe and ‖x^* + s^*‖_∞ ≤ ξ, we have

(x^0)^T s^* + (s^0)^T x^* = ξ e^T(x^* + s^*) ≤ ξ e^T(‖x^* + s^*‖_∞ e) = ξ‖x^* + s^*‖_∞(e^T e) ≤ nξ^2.

Also, using (x^0)^T s^0 = nξ^2 in Equation (37), we get

x^T s^0 + s^T x^0 ≤ s^T x/ν + nξ^2 = μ(e^T v^2)/ν + nξ^2 = ξ^2(e^T v^2) + nξ^2,

where for the last equality we use ν = μ/μ^0 and μ^0 = ξ^2. By Lemma 3.1, we obtain

x^T s^0 + s^T x^0 ≤ ((√n + √(Φ_s(v)))^2 + n)ξ^2.

Since x^0 = s^0 = ξe, we have

x^T s^0 + s^T x^0 = ξ(e^T x + e^T s) = ξ(‖x‖_1 + ‖s‖_1).

Hence, it follows that

‖x‖_1 + ‖s‖_1 ≤ ((√n + √(Φ_s(v)))^2 + n)ξ,

which proves the lemma. □

Lemma 5.8 We have

‖q‖ ≤ ((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))).

Proof Combining Lemmas 3.1, 5.5 and 5.7, the result easily follows. □

5.3 Analysis of the parameters

5.3.1 Fixing the parameters

From the analysis of Lemma 5.2, we know that if ω(v) < √(2(1 − Φ_s(v)))/2, then after the feasibility step the iterates (x^f, y^f, s^f) are strictly feasible. Using Lemmas 5.4 and 5.8, we have

ω(v) ≤ (1/2)√( (((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))))^2 + (((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))) + 2√(Φ_s(v)))^2 ),

which means that for strict feasibility it suffices that

(1/2)√( (((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))))^2 + (((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))) + 2√(Φ_s(v)))^2 ) < √(2(1 − Φ_s(v)))/2.

The left-hand side of this inequality is monotonically increasing with respect to Φ_s(v), while the right-hand side is monotonically decreasing with respect to Φ_s(v). Given a threshold τ, for



Φ_s(v) ≤ τ it therefore suffices that

(1/2)√( (((√n + √τ)^2 + n)θ / (1 − √τ))^2 + (((√n + √τ)^2 + n)θ / (1 − √τ) + 2√τ)^2 ) < √(2(1 − τ))/2.   (38)

At this stage, we set

τ = 1/16,  θ = 1/(6n).   (39)

An upper bound for the left-hand side of inequality (38) is 0.6058, while a lower bound for the right-hand side of inequality (38) is 0.6847, which means that inequality (38) certainly holds. In this case, we conclude that the iterate (x^f, y^f, s^f) is strictly feasible.

5.3.2 An upper bound for the proximity function

In Section 5.3.1, we have fixed θ = 1/(6n) and τ = 1/16. To complete the algorithm, we need to calculate the number of centring steps needed to restore the condition Φ_s(v) ≤ τ. Remember that the first centring step starts from the new iterate (x^f, y^f, s^f); thus, an upper bound for Φ_s(x^f, s^f; μ^+) is needed.

We first specify an upper bound for ‖q‖. Using Lemma 5.8 and Equation (39), we have

‖q‖ ≤ ((√n + √(Φ_s(v)))^2 + n)θ / (1 − √(Φ_s(v))) ≤ 0.5694.

Thus, an upper bound for ω(v)^2 can be obtained by Lemma 5.4 as follows:

ω(v)^2 ≤ (1/4)(‖q‖^2 + (‖q‖ + 2√(Φ_s(v)))^2) ≤ 0.3670.   (40)

Then, after one feasibility step, we can derive an upper bound for Φ_s(x^f, s^f; μ^+). Using Theorem 5.3 and Equations (39) and (40), we have

Φ_s(x^f, s^f; μ^+) ≤ (1/(1 − θ)) · (θ√n + Φ_s(v) + 2ω(v)^2)^2 / (√(1 − θ) + √(1 − Φ_s(v) − 2ω(v)^2))^2 ≤ 0.5983.   (41)
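The numerical constants of Section 5.3 (0.5694, 0.6058, 0.6847, 0.3670, 0.5983) can be reproduced directly. The sketch below evaluates the bounds at n = 1, which is the worst case since each bound decreases in n (variable names are ours):

```python
import math

tau, n = 1.0 / 16.0, 1             # threshold and worst-case dimension, Equation (39)
theta = 1.0 / (6.0 * n)

# Lemma 5.8: bound on ||q|| when Phi_s(v) <= tau
q_bar = ((math.sqrt(n) + math.sqrt(tau)) ** 2 + n) * theta / (1.0 - math.sqrt(tau))

# Lemma 5.4: bound on omega(v); strict-feasibility test, inequality (38)
lhs = 0.5 * math.sqrt(q_bar ** 2 + (q_bar + 2.0 * math.sqrt(tau)) ** 2)
rhs = math.sqrt(2.0 * (1.0 - tau)) / 2.0
assert lhs < rhs                   # 0.6058 < 0.6847: the feasibility step is safe

# Equation (40): bound on omega(v)^2, then Theorem 5.3 / Equation (41)
omega_sq = 0.25 * (q_bar ** 2 + (q_bar + 2.0 * math.sqrt(tau)) ** 2)
num = (theta * math.sqrt(n) + tau + 2.0 * omega_sq) ** 2
den = (1.0 - theta) * (math.sqrt(1.0 - theta)
                       + math.sqrt(1.0 - tau - 2.0 * omega_sq)) ** 2
phi_after = num / den
assert phi_after <= 0.599          # matches the 0.5983 bound up to rounding
```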

6. Complexity analysis

6.1 The number of centring steps

After the feasibility step, we have derived an upper bound for Φ_s(x^f, s^f; μ^+), i.e.

Φ_s(x^f, s^f; μ^+) ≤ 0.5983.

We perform the centring steps in order to get iterates (x^+, y^+, s^+) that satisfy (x^+)^T s^+ ≤ nμ^+ and

Φ_s(x^+, s^+; μ^+) = Φ_s(v^+/√(1 − θ)) ≤ 1/16,

where v^+ = √((x^+ s^+)/μ). By the quadratic convergence property of the centring step (see Theorem 4.5 and Remarks 4.6 and 4.7), one concludes that at most five centring steps then suffice to get iterates (x^+, s^+) that satisfy Φ_s(x^+, s^+; μ^+) ≤ τ again. So each main iteration consists



of at most six so-called inner iterations, in each of which we need to compute a search direction (for either a feasibility step or a centring step).

6.2 Iteration complexity

In the previous sections, we have found that, in the worst case, if at the start of an iteration the iterates satisfy Φ_s(x, s; μ) ≤ τ with τ and θ as defined in Equation (39), then after the feasibility step and the μ-update the iterates satisfy Φ_s(x^f, s^f; μ^+) ≤ 0.5983. The algorithm then performs at most five centring steps to obtain the new iterate (x^+, y^+, s^+) satisfying Φ_s(x^+, s^+; μ^+) ≤ τ.

In each iteration, both the duality gap and the norms of the residual vectors are reduced by a factor of 1 − θ. Hence, using (x^0)^T s^0 = nξ^2, the total number of main iterations is bounded above by

(1/θ) log(max{nξ^2, ‖r_b^0‖, ‖r_c^0‖}/ε).

Since

θ = 1/(6n),

the total number of inner iterations is bounded above by

36n log(max{nξ^2, ‖r_b^0‖, ‖r_c^0‖}/ε).

Thus, we may state without further proof the main result of the paper.

Theorem 6.1 If (P) and (D) have optimal solutions x^* and (y^*, s^*) such that ‖x^* + s^*‖_∞ ≤ ξ, then after at most

36n log(max{nξ^2, ‖r_b^0‖, ‖r_c^0‖}/ε)

iterations, the algorithm finds an ε-solution of (P) and (D).
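The bound of Theorem 6.1 is straightforward to evaluate. A sketch (the values chosen for ξ, the initial residual norms and ε are illustrative only):

```python
import math

def inner_iteration_bound(n, xi, rb0_norm, rc0_norm, eps):
    """Total number of inner iterations from Theorem 6.1:
        36 n log( max{ n xi^2, ||r_b^0||, ||r_c^0|| } / eps )."""
    return 36.0 * n * math.log(max(n * xi * xi, rb0_norm, rc0_norm) / eps)

# illustrative: n = 20, xi = 1, unit initial residuals, eps = 1e-4
bound = inner_iteration_bound(20, 1.0, 1.0, 1.0, 1e-4)
assert bound > 0  # a finite worst-case budget of inner iterations
```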

7. Numerical experiments

Now, we deal with numerical tests using Algorithm 1. The numerical results are summarized in Table 1 for the different test problems, where Iter and Max-Φ_s(x, s; μ) denote the number of main iterations and the maximum value of the proximity function Φ_s(x, s; μ) during the run of the algorithm, respectively.

We consider the LO problem min{c^T x : Ax = b, x ≥ 0}, whose data are given as follows.

Example 7.1 The following simple example comes from Part II, Chapter 6, Section 6.7 in [20]:

A = ( 1  −1  0
      0   0  1 ),   c = (1, 1, 1)^T,   b = (1, 1)^T.
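The optimal value of Example 7.1 can be verified with a primal–dual certificate; the candidate pair (x, y) below is our own computation, not stated in the text:

```python
import numpy as np

# data of Example 7.1
A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0, 1.0])

# candidate primal-dual optimal pair (our computation)
x = np.array([1.0, 0.0, 1.0])
y = np.array([1.0, 1.0])
s = c - A.T @ y                                    # dual slack vector

assert np.allclose(A @ x, b) and (x >= 0).all()    # primal feasibility
assert (s >= -1e-12).all()                         # dual feasibility
assert np.isclose(x @ s, 0.0)                      # complementary slackness
assert np.isclose(c @ x, b @ y)                    # zero duality gap
```

Both objective values equal 2, certifying optimality of the pair.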



Table 1. Numerical results for Algorithm 1.

Problems    θ      Iter   Max-Φ_s(x, s; μ)   Duality gap
Problem 1   1/6n   182    0.00244898         0.00009632
            1/4n   120    0.00567107         0.00009538
            1/2n   58     0.02479330         0.00009124
            1/n    26     0.15109861         0.00007522
            3/2n   15     0.50259418         0.00007625
Problem 2   1/6n   3117   0.00017434         0.00008947
            1/3n   1556   0.00070027         0.00008928
            1/n    514    0.00640923         0.00009149
            3/n    167    0.06209893         0.00009570
            6/n    81     0.28659305         0.00007619
            8/n    59     0.55702304         0.00007556

Example 7.2 Here A ∈ R^{20×40} is a sparse band matrix, each row containing a unit entry in its diagonal position together with a few fixed off-diagonal entries, and

c = (1, 1, . . . , 1)^T ∈ R^{40},   b = (1, 1, . . . , 1)^T ∈ R^{20}.

The initial points are given as

x^0 = s^0 = ξe,  y^0 = 0  and  μ^0 = ξ^2,

where ξ = 1. We use ε = 10^{−4} as the accuracy parameter. Moreover, we adjust θ to make it larger than or equal to θ = 1/(6n), while keeping the iterates strictly feasible, i.e. while inequality (41) holds.

Table 1 displays the numerical results of Algorithm 1 for the above two problems. The computational results show that an increase in θ induces a reduction in the number of iterations, provided that (x^f, y^f, s^f) remains in the quadratic convergence neighbourhood with respect to the μ^+-centre of (P_{ν^+}) and (D_{ν^+}).

Our analysis of the parameters θ and τ in Section 5.3 makes it clear that the default value θ = 1/(6n) guarantees that Φ_s(x^f, s^f; μ^+) ≤ 0.5983 will hold. In practice, however, the new proximity may be much smaller than 0.5983.

This opens a way to speed up the algorithm without degrading the iteration bound: if we take θ larger than the default value 1/(6n), thus enforcing a deeper update of the barrier parameter in such a way that Φ_s(x^f, s^f; μ^+) ≤ 0.5983 still holds, then the analysis in the proof of Theorem 6.1 remains valid while the number of iterations decreases.

8. Conclusions

The full-step IIPM based on the simple function admits the best-known iteration bound, but from a practical perspective a severe shortcoming is its worst-case-oriented nature: it always performs according to its worst-case theoretical complexity bound.

An obvious way to reduce the number of iterations is to make larger updates of the barrier parameter while keeping the iterates in the region of quadratic convergence. For this reason, our



further research may focus on numerical tests of a few algorithmic improvements and on exploiting the method's performance on real-world problems.

Acknowledgements

The research is supported by the National Natural Science Foundation of China (Nos. 11071221 and 11071158) and the Key Disciplines of Shanghai Municipality Discipline Project (No. S30104).

References

[1] Y. Bai, M. El Ghami, and C. Roos, A new efficient large-update primal–dual interior-point method based on a finite barrier, SIAM J. Optim. 13 (2002), pp. 766–782.

[2] Y. Bai, C. Roos, and M. El Ghami, A primal–dual interior-point method for linear optimization based on a new proximity function, Optim. Methods Softw. 17 (2002), pp. 985–1008.

[3] Y. Bai, M. El Ghami, and C. Roos, A comparative study of kernel functions for primal–dual interior-point algorithms in linear optimization, SIAM J. Optim. 15 (2004), pp. 101–128.

[4] M. El Ghami, New primal–dual interior-point methods based on kernel functions, Ph.D. thesis, Delft University of Technology, 2005.

[5] G. Gu, Full-step interior-point methods for symmetric optimization, Ph.D. thesis, Delft University of Technology, 2009.

[6] G. Gu, H. Mansouri, M. Zangiabadi, Y. Bai, and C. Roos, Improved full-Newton step O(nL) infeasible interior-point method for linear optimization, J. Optim. Theory Appl. 145 (2010), pp. 271–288.

[7] N. Karmarkar, A new polynomial-time algorithm for linear programming, Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, New York, NY, USA, 1984, pp. 302–311.

[8] M. Kojima, N. Megiddo, and S. Mizuno, A primal–dual infeasible interior-point algorithm for linear programming, Math. Program. 61 (1993), pp. 263–280.

[9] I. Lustig, Feasibility issues in a primal–dual interior-point method for linear programming, Math. Program. 49 (1990), pp. 145–162.

[10] H. Mansouri, Full-Newton step interior-point methods for conic optimization, Ph.D. thesis, Delft University of Technology, 2008.

[11] H. Mansouri and C. Roos, A new full-Newton step O(n) infeasible interior-point algorithm for semidefinite optimization, Numer. Algorithms 52 (2009), pp. 225–255.

[12] H. Mansouri, M. Zangiabadi, Y. Bai, and C. Roos, An infeasible interior-point algorithm with full-Newton step for linear optimization, 2008. Available at http://www.optimization-online.org/DB_HTML/2008/07/2052.html

[13] S. Mizuno, Polynomiality of infeasible interior-point algorithms for linear programming, Math. Program. 67 (1994), pp. 109–119.

[14] J. Peng, C. Roos, and T. Terlaky, New complexity analysis of the primal–dual Newton method for linear optimization, Ann. Oper. Res. 99 (2000), pp. 23–39.

[15] J. Peng, C. Roos, and T. Terlaky, A new and efficient large-update interior-point method for linear optimization, Vychisl. Tekhnol. 6 (2001), pp. 61–80.

[16] J. Peng, C. Roos, and T. Terlaky, Self-regular functions and new search directions for linear and semidefinite optimization, Math. Program. 93 (2002), pp. 129–171.

[17] J. Peng, C. Roos, and T. Terlaky, Self-Regularity: A New Paradigm for Primal–Dual Interior-Point Algorithms, Princeton University Press, Princeton, NJ, 2002.

[18] J. Peng, T. Terlaky, and Y. Zhao, A predictor–corrector algorithm for linear optimization based on a specific self-regular proximity function, SIAM J. Optim. 15 (2005), pp. 1105–1127.

[19] C. Roos, A full-Newton step O(n) infeasible interior-point algorithm for linear optimization, SIAM J. Optim. 16 (2006), pp. 1110–1136.

[20] C. Roos, T. Terlaky, and J. Vial, Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley & Sons Ltd., Chichester, UK, 1997.

[21] M. Salahi, T. Terlaky, and G. Zhang, The complexity of self-regular proximity based infeasible IPMs, Comput. Optim. Appl. 33 (2006), pp. 157–185.

[22] K. Tanabe, Centered Newton method for mathematical programming, Syst. Model. Optim. 113 (1988), pp. 197–206.

[23] S. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997.

[24] F. Wu, S. Wu, and Y. Ye, On quadratic convergence of the O(√nL)-iteration homogeneous and self-dual linear programming algorithm, Ann. Oper. Res. 87 (1999), pp. 393–406.

[25] Y. Ye, Interior Point Algorithms: Theory and Analysis, John Wiley and Sons, New York, 1997.

[26] Y. Ye, M. Todd, and S. Mizuno, An O(√nL)-iteration homogeneous and self-dual linear programming algorithm, Math. Oper. Res. 19 (1994), pp. 53–67.

[27] Y. Zhang, On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem, SIAM J. Optim. 4 (1994), pp. 208–227.

[28] L. Zhongyi, W. Sun, and T. Fangbao, A full-Newton step infeasible interior-point algorithm for linear programming based on a kernel function, Appl. Math. Optim. 60 (2009), pp. 237–251.
