

Computers & Operations Research 40 (2013) 1752–1757


An alternating direction method for second-order conic programming

Xuewen Mu a,*, Yaling Zhang b

a Department of Mathematics, Xidian University, Xi'an, 710071, China
b Department of Computer Science, Xi'an Science and Technology University, Xi'an, 710054, China

Article info

Available online 18 January 2013

Keywords:

Second-order cone programming

Dual augmented Lagrangian method

Alternating direction method

Primal-dual interior point method

0305-0548/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.cor.2013.01.010

The short running title is ADM for SOCP.
* Corresponding author. Tel./fax: +86 29 88202860.
E-mail addresses: [email protected] (X. Mu), [email protected] (Y. Zhang).

Abstract

An alternating direction dual augmented Lagrangian method for second-order cone programming (SOCP) problems is proposed. At each iteration, the algorithm first minimizes the dual augmented Lagrangian function with respect to the dual variables, then with respect to the dual slack variables while keeping the other variables fixed, and finally updates the Lagrange multipliers. A convergence result is given. Numerical results demonstrate that our method is fast and efficient, especially for large-scale second-order cone programming.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Second-order cone programming (SOCP) is the problem of minimizing a linear function over the intersection of an affine set and the product of second-order cones. SOCP is a nonlinear convex programming problem; linear and convex quadratic programs are special cases.

We consider the primal SOCP problem

(P)  min { c^T x : Ax = b, x ∈ K },

where A = (A_1, A_2, ..., A_N) ∈ R^{m×n}, c = (c_1^T, c_2^T, ..., c_N^T)^T ∈ R^n, A_i ∈ R^{m×n_i}, c_i ∈ R^{n_i}, b ∈ R^m, and x = (x_1^T, x_2^T, ..., x_N^T)^T ∈ R^n with x_i ∈ R^{n_i} is the variable. Here K = K_1 × K_2 × ⋯ × K_N and Σ_{i=1}^N n_i = n. In addition, x_i ∈ K_i, where K_i is the standard second-order cone of dimension n_i, defined as

K_i = { x_i = (x_{i1}^T, x_{i0})^T ∈ R^{n_i−1} × R : ||x_{i1}||_2 ≤ x_{i0} },

and ||·||_2 is the standard Euclidean norm, i.e., ||u||_2 = (u^T u)^{1/2} for u ∈ R^n. Without loss of generality, we assume that N = 1 and K_1 = K.

The dual problem of the primal SOCP is [1,2]

(DP)  max { b^T y : A^T y + z = c, z ∈ K },

where y ∈ R^m is the vector variable.

There are many engineering applications of SOCP,

such as FIR filter design, antenna array weight design, and truss design [2–5]. There have been many methods proposed for solving SOCP


and the second-order cone complementarity problem. These include interior-point methods [6–10], smoothing Newton methods [11,12], the merit function method [13] and the semismooth Newton method [14], where the last three kinds of methods are all based on an SOCP complementarity function or a merit function. The primal-dual interior point method is a second-order method, so at each iteration a system of equations must be solved.

The alternating direction method (ADM) has been an effective first-order approach for solving large optimization problems with vector variables. It probably first arose from partial differential equations (PDEs) [15,16]. In these methods, the variables are partitioned into several blocks, and the function is minimized with respect to each block while fixing all other blocks at each inner iteration. The idea has been developed for many optimization problems, such as variational inequality problems [17–19], linear programming [20], semidefinite programming (SDP) [31,38,27], nonlinear convex optimization [22,25,26,28], and nonsmooth ℓ1 minimization arising from compressive sensing [29,30]. In [31], an alternating direction method for SDP is presented by reformulating the complementarity condition as a projection equation. In [38], based on a single inexact metric projection onto the positive semidefinite cone at each iteration, the author proposes a modified alternating direction method for convex quadratically constrained quadratic semidefinite programs. In [32], an alternating direction dual augmented Lagrangian method for solving semidefinite programming problems in standard form is presented, and it is extended to SDPs with inequality constraints and positivity constraints by separately minimizing the dual augmented Lagrangian function.

These new research results show that the alternating direction method is very efficient for some optimization problems. To date, however, there has been no alternating direction method for second-order cone programming. Although second-order cone


programming is a special case of semidefinite programming, the projection onto the second-order cone is different from the projection onto the semidefinite cone. Based on new results about Euclidean Jordan algebras [35–37], we can obtain a simple projection onto the second-order cone, which costs little computation time. Based on these projection results, we can extend the method in [32] to SOCP.

Our algorithm applies the alternating direction method within a dual augmented Lagrangian framework. At each iteration, the algorithm minimizes the augmented Lagrangian function for the dual SOCP problem sequentially: first with respect to the dual variables corresponding to the linear constraints, and then with respect to the dual slack variables, in each minimization keeping the other variables fixed, after which it updates the primal variables. Numerical experiments on, for example, random second-order cone programming problems show that the performance of our method can be significantly better than that of the primal-dual interior point method, especially for large-scale problems.

The paper is organized as follows. In Section 2 we introduce the Jordan projection onto the second-order cone. In Section 3 we give the alternating direction augmented Lagrangian method for second-order cone programming. In Section 4 we analyze the convergence of the method. We introduce the test problems and report our numerical results in Section 5, and make some concluding remarks in Section 6.

2. Jordan projection on the second-order cone

We make the following assumption throughout our presentation.

Assumption 1. The matrix A has full row rank and the feasible set of the primal SOCP is nonempty; that is, there exists a vector x ∈ K satisfying Ax = b.

Based on the duality theorem, solving the primal and dual SOCP is equivalent to solving the following system [1]:

Ax = b, x ∈ K,
A^T y + z = c, z ∈ K,        (1)
x^T z = 0.

System (1) is the optimality condition for the SOCP.

For the purpose of studying the metric projection operator over the second-order cone, we need some knowledge of Euclidean Jordan algebras, which can be found in the standard references [35–37].

Let z = (z_1^T, z_0)^T ∈ R^{n−1} × R. The spectral decomposition of z is as follows [35,36]:

z = λ_1(z) c_1(z) + λ_2(z) c_2(z),        (2)

where λ_i(z) = z_0 + (−1)^i ||z_1||_2 for i = 1, 2, and

c_i(z) = (1/2) ( (−1)^i z_1^T / ||z_1||_2 , 1 )^T   if z_1 ≠ 0,
c_i(z) = (1/2) ( (−1)^i w^T , 1 )^T                 if z_1 = 0,

where w is any vector in R^{n−1} satisfying ||w||_2 = 1.

For any z ∈ R^n, let P_{K+}(z) be the projection of z onto the second-order cone K. For any s ∈ R, let s_+ := max(0, s) and s_− := min(0, s). Then we have [35,36]

P_{K+}(z) = (λ_1(z))_+ c_1(z) + (λ_2(z))_+ c_2(z).        (3)

Obviously, P_{K+}(z) ∈ K. We also define

P_{K−}(z) = (λ_1(z))_− c_1(z) + (λ_2(z))_− c_2(z).
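The spectral decomposition (2) and the projections (3) translate directly into a few lines of code. The following is an illustrative NumPy sketch (the paper's own experiments were run in MATLAB, so this is an assumed reimplementation, not the authors' code); the scalar component z_0 of the cone is stored in the last position, following the partition above.

```python
import numpy as np

def soc_spectral(z):
    """Spectral decomposition (2): eigenvalues lam[i] and eigenvectors c[i]."""
    z1, z0 = z[:-1], z[-1]
    r = np.linalg.norm(z1)
    # Any unit vector w works when z1 = 0; since then lam1 = lam2, the choice
    # cancels in the sums below, so zeros are harmless here.
    w = z1 / r if r > 0 else np.zeros_like(z1)
    lam = np.array([z0 - r, z0 + r])
    c = [0.5 * np.concatenate([-w, [1.0]]),
         0.5 * np.concatenate([w, [1.0]])]
    return lam, c

def proj_soc_plus(z):
    """P_{K+}(z) from (3): keep the nonnegative parts of the eigenvalues."""
    lam, c = soc_spectral(z)
    return max(lam[0], 0.0) * c[0] + max(lam[1], 0.0) * c[1]

def proj_soc_minus(z):
    """P_{K-}(z): keep the nonpositive parts of the eigenvalues."""
    lam, c = soc_spectral(z)
    return min(lam[0], 0.0) * c[0] + min(lam[1], 0.0) * c[1]
```

Lemma 1 below can then be checked numerically: for any z, the pieces z_1 = P_{K+}(z) and z_2 = −P_{K−}(z) satisfy z = z_1 − z_2 and z_1^T z_2 = 0.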

We next give the following important result.

Lemma 1. Assume z ∈ R^n. Then

−P_{K−}(z) ∈ K,        (4)

z_1 = P_{K+}(z), z_2 = −P_{K−}(z)  ⇔  z = z_1 − z_2, z_1, z_2 ∈ K and z_1^T z_2 = 0.        (5)

Proof. Based on (2), we have that z ∈ K if and only if λ_i(z) ≥ 0 for i = 1, 2. Because

−P_{K−}(z) = −(λ_1(z))_− c_1(z) − (λ_2(z))_− c_2(z)

and −(λ_i(z))_− ≥ 0, i = 1, 2, we obtain −P_{K−}(z) ∈ K.

If z_1 = P_{K+}(z) and z_2 = −P_{K−}(z), it is easy to verify that [35,36]

z = z_1 − z_2 and z_1^T z_2 = 0.

Conversely, if z = z_1 − z_2 with z_1, z_2 ∈ K and z_1^T z_2 = 0, then by Theorem 3.2.5 in [34] we obtain

z_1 = P_{K+}(z), z_2 = −P_{K−}(z). □

3. An alternating direction augmented Lagrangian method for SOCP

In this section, we give an alternating direction augmented Lagrangian method for SOCP.

The augmented Lagrangian function for the dual SOCP is defined as

L_λ(x, y, z) = −b^T y + x^T (A^T y + z − c) + (1/(2λ)) ||A^T y + z − c||_2^2,        (6)

where x ∈ R^n and λ > 0. Given an initial point x^0, at the k-th iteration the augmented Lagrangian method solves

min_{y ∈ R^m, z ∈ R^n} L_λ(x^k, y, z),  s.t. z ∈ K,        (7)

for y^{k+1} and z^{k+1}, and then updates the primal variable x^{k+1} by

x^{k+1} = x^k + (1/λ)(A^T y^{k+1} + z^{k+1} − c).        (8)

Since solving problem (7) requires jointly minimizing L_λ(x^k, y, z) with respect to y and z, it can be very time consuming. However, the alternating direction augmented Lagrangian method does not solve problem (7) exactly. Instead, the augmented Lagrangian function is minimized with respect to y and z one after the other. Specifically, we replace (7) and (8) by the following:

y^{k+1} = argmin_{y ∈ R^m} L_λ(x^k, y, z^k),        (9a)

z^{k+1} = argmin_{z ∈ R^n} L_λ(x^k, y^{k+1}, z),  s.t. z ∈ K,        (9b)

x^{k+1} = x^k + (1/λ)(A^T y^{k+1} + z^{k+1} − c).        (9c)

The first-order optimality condition for (9a) is

∇_y L_λ(x^k, y^{k+1}, z^k) = A x^k − b + (1/λ) A (A^T y^{k+1} + z^k − c) = 0.


Since AA^T is invertible by Assumption 1, we obtain y^{k+1} = y(z^k, x^k), where

y(z, x) = −(AA^T)^{−1} ( λ(Ax − b) + A(z − c) ).        (10)

For problem (9b), it is easily verified that it is equivalent to

min_{z ∈ R^n} ||z − v^{k+1}||_2^2,  z ∈ K,        (11)

where v^{k+1} = v(z^k, x^k) and the function v(z, x) is defined as

v(z, x) = c − A^T y(z, x) − λx.        (12)

Hence, we obtain the solution z^{k+1} = v_1^{k+1} = P_{K+}(v^{k+1}). It follows from the updating equation (9c) that

x^{k+1} = x^k + (1/λ)(A^T y^{k+1} + z^{k+1} − c) = (1/λ)(z^{k+1} − v^{k+1}) = (1/λ) v_2^{k+1},

where v_2^{k+1} = −P_{K−}(v^{k+1}).

From the above observations, we state the alternating direction augmented Lagrangian method as follows.

The Alternating Direction Augmented Lagrangian Method
Given x^0 ∈ K, z^0 ∈ K, and λ > 0. For k = 0, 1, 2, ...:
  Step 1. Compute y^{k+1} according to (10).
  Step 2. Compute v^{k+1} and its projection, and set z^{k+1} = P_{K+}(v^{k+1}).
  Step 3. Compute x^{k+1} = (1/λ)(z^{k+1} − v^{k+1}).

If AA^T = I, step (10) is very inexpensive. If AA^T ≠ I, we can compute AA^T and its inverse (or its Cholesky factorization) once. If computing the Cholesky factorization of AA^T is very expensive, the iterative methods in [22,23] can be used to solve the system of linear equations corresponding to (10).
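As an illustration, Steps 1–3 above can be sketched in a few lines of NumPy for a single second-order cone (an assumed reimplementation under the conventions of this paper, not the authors' MATLAB code; the scalar cone component is stored last, and `proj_soc` is the projection P_{K+} of (3) in closed form):

```python
import numpy as np

def proj_soc(v):
    """P_{K+}(v) onto K = {(x1, x0): ||x1||_2 <= x0}, via (2)-(3)."""
    v1, v0 = v[:-1], v[-1]
    r = np.linalg.norm(v1)
    if r <= v0:                      # v already lies in K
        return v.copy()
    if r <= -v0:                     # v lies in the polar cone: projects to 0
        return np.zeros_like(v)
    t = 0.5 * (r + v0)               # boundary case: P(v) = (lambda_2/2)(v1/||v1||, 1)
    return np.concatenate([t * v1 / r, [t]])

def adm_socp(A, b, c, lam=1.0, iters=5000):
    """Alternating direction augmented Lagrangian method, Steps 1-3."""
    m, n = A.shape
    AAT_inv = np.linalg.inv(A @ A.T)   # factor once; trivial if AA^T = I
    x, z = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        y = -AAT_inv @ (lam * (A @ x - b) + A @ (z - c))   # Step 1, eq. (10)
        v = c - A.T @ y - lam * x                          # eq. (12)
        z = proj_soc(v)                                    # Step 2
        x = (z - v) / lam                                  # Step 3
    return x, y, z
```

On a small randomly generated feasible instance, the iterates drive the primal residual ||Ax − b||, the dual residual ||A^T y + z − c|| and the duality gap toward zero, in line with Theorem 1 below.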

4. The convergence result

Convergence results for alternating direction methods have been obtained for variational inequalities in [18,17], and they can also be applied to our method. However, here we extend the fixed point argument of [32] to our method.

For a vector v ∈ R^n, denote by P(v) the pair (P_{K+}(v), −P_{K−}(v)). Hence, each iteration of the alternating direction augmented Lagrangian method can be expressed as

y^{k+1} = y(z^k, x^k)  and  (z^{k+1}, λx^{k+1}) = P(v^{k+1}) = P(v(z^k, x^k)).        (13)

The next lemma shows that a fixed point of Eq. (13) is an optimal solution of (P) and (DP).

Lemma 2. Suppose Assumption 1 holds. If (x, y, z) satisfies the conditions

y = y(z, x)  and  (z, λx) = P(v(z, x)),        (14)

then (x, y, z) is a primal and dual optimal solution of (P) and (DP).

Proof. Since (x, y, z) satisfies the conditions (14), we have z = P_{K+}(v) and λx = −P_{K−}(v). By Lemma 1, we obtain z^T x = 0. Since z − λx = v(z, x), it follows from v(z, x) = c − A^T y − λx that z = c − A^T y. From y = y(z, x), we obtain

(AA^T) y = λ(b − Ax) + A(c − z) = λ(b − Ax) + (AA^T) y,

which implies that Ax = b. Hence (x, y, z) is a primal and dual optimal solution of (P) and (DP). □

We now show that the operator P(v) is nonexpansive.

Lemma 3. For any v, v̄ ∈ R^n,

||P(v) − P(v̄)||_2 ≤ ||v − v̄||_2,        (15)

with equality holding if and only if v_1^T v̄_2 = 0 and v̄_1^T v_2 = 0.

Proof. Here v_1 = P_{K+}(v), v_2 = −P_{K−}(v), v̄_1 = P_{K+}(v̄), v̄_2 = −P_{K−}(v̄), so we have v_1^T v_2 = 0 and v̄_1^T v̄_2 = 0. Since v_1, v_2, v̄_1, v̄_2 ∈ K, we have v_1^T v̄_2 ≥ 0 and v̄_1^T v_2 ≥ 0. We obtain the following result:

||v − v̄||_2^2 = (v − v̄)^T (v − v̄)
             = ||P_{K+}(v) + P_{K−}(v) − P_{K+}(v̄) − P_{K−}(v̄)||_2^2
             = (v_1 − v̄_1)^T (v_1 − v̄_1) + (v_2 − v̄_2)^T (v_2 − v̄_2) + 2 (v_1^T v̄_2 + v̄_1^T v_2)
             = ||P(v) − P(v̄)||_2^2 + 2 (v_1^T v̄_2 + v̄_1^T v_2)
             ≥ ||P(v) − P(v̄)||_2^2,

which proves (15). □

The following lemma shows that the operator v(z, x) is nonexpansive.

Lemma 4. For any z, x, z̄, x̄ ∈ K,

||v(z, x) − v(z̄, x̄)||_2 ≤ ||(z − z̄, λ(x − x̄))||_2,        (16)

with equality holding if and only if v − v̄ = z − z̄ − λ(x − x̄).

Proof. From the definition of y(z, x) in (10), we have

y(z, x) − y(z̄, x̄) = −(AA^T)^{−1} ( λA(x − x̄) + A(z − z̄) ),

which together with (12) gives

v(z, x) − v(z̄, x̄) = (c − A^T y(z, x) − λx) − (c − A^T y(z̄, x̄) − λx̄)
                   = A^T ( y(z̄, x̄) − y(z, x) ) − λ(x − x̄)
                   = −λ(I − M)(x − x̄) + M(z − z̄),        (17)

where M = A^T (AA^T)^{−1} A. Since M is an orthogonal projection matrix whose spectral radius is 1, we obtain from (17) that

||v(z, x) − v(z̄, x̄)||_2^2 = ||λ(I − M)(x − x̄) − M(z − z̄)||_2^2
                           ≤ ||λ(x − x̄)||_2^2 + ||z − z̄||_2^2
                           = ||(z − z̄, λ(x − x̄))||_2^2,        (18)

which proves (16). □

If the equality in (16) holds, it also holds in (18); that is,

||λ(I − M)(x − x̄)||_2^2 + ||M(z − z̄)||_2^2 = ||λ(x − x̄)||_2^2 + ||z − z̄||_2^2.

This implies that M(x − x̄) = 0 and (I − M)(z − z̄) = 0. Using these relations in (17), we obtain

v − v̄ = z − z̄ − λ(x − x̄),

which proves the result.

Lemma 5. Let (x*, y*, z*), where y* = y(z*, x*), be a primal and dual optimal solution of (P) and (DP). Suppose Assumption 1 holds. If

||P(v(z, x)) − P(v(z*, x*))||_2 = ||(z − z*, λ(x − x*))||_2,        (19)

then (z, λx) is a fixed point, that is, (z, λx) = P(v(z, x)), and hence (x, y, z), where y = y(z, x), is a primal and dual optimal solution of (P) and (DP).

Proof. From Lemma 2, we have (z*, λx*) = P(v(z*, x*)). From Lemmas 3 and 4, we have

v(z, x) − v(z*, x*) = z − z* − λ(x − x*).

Since (z*, λx*) is a fixed point, v(z*, x*) = z* − λx*, and hence v(z, x) = z − λx. Since z, x ∈ K and z^T x = 0, we obtain from Lemma 1 that (z, λx) = P(v(z, x)). □

Theorem 1. The sequence {(x^k, y^k, z^k)} generated by the alternating direction augmented Lagrangian method from any starting point (x^0, y^0, z^0) converges to a primal and dual optimal solution (x*, y*, z*) of (P) and (DP).


Table 1
Comparative results for small scale problems.

n     ADM              SeDuMi
      CPU      Number  CPU      Number
10    0.0199   61      0.1357   11
20    0.0263   52      0.1700   12
30    0.0346   64      0.1794   10
40    0.0388   55      0.1950   11
50    0.0474   57      0.2340   12
60    0.0575   56      0.2418   11
70    0.0695   62      0.2543   11
80    0.0736   57      0.2761   11
90    0.0878   58      0.2792   13

Table 2
Comparative results for medium scale problems.

n     ADM              SeDuMi
      CPU      Number  CPU       Number
100   0.0926   55      0.2886    12
200   0.1404   58      0.4618    13
300   0.2808   53      0.9095    11
400   0.5304   60      1.5397    13
500   0.6240   40      2.2979    11
600   0.8736   33      4.1481    12
700   1.7472   46      5.8032    13
800   2.9796   59      7.6737    13
900   4.0560   53      10.5347   12


Proof. Since both P(·) and v(·,·) are nonexpansive, P(v(·,·)) is also nonexpansive. Therefore, {(z^k, λx^k)} lies in a compact set and must have a limit point, say z̄ = lim_{j→∞} z^{k_j} and x̄ = lim_{j→∞} x^{k_j}. Also, for a primal and dual optimal solution (x*, y*, z*),

||(z^{k+1}, λx^{k+1}) − (z*, λx*)||_2 = ||P(v(z^k, x^k)) − P(v(z*, x*))||_2
                                      ≤ ||v(z^k, x^k) − v(z*, x*)||_2
                                      ≤ ||(z^k, λx^k) − (z*, λx*)||_2,

which means that the sequence {||(z^k, λx^k) − (z*, λx*)||_2} is monotonically nonincreasing. Therefore,

lim_{k→∞} ||(z^k, λx^k) − (z*, λx*)||_2 = ||(z̄, λx̄) − (z*, λx*)||_2,        (20)

where (z̄, λx̄) can be any limit point of {(z^k, λx^k)}. By the continuity of P(v(·,·)), the image of (z̄, λx̄),

P(v(z̄, x̄)) = lim_{j→∞} P(v(z^{k_j}, x^{k_j})) = lim_{j→∞} (z^{k_j+1}, λx^{k_j+1}),

is also a limit point of {(z^k, λx^k)}. Therefore, we have

||P(v(z̄, x̄)) − P(v(z*, x*))||_2 = ||(z̄, λx̄) − (z*, λx*)||_2,

which allows us to apply Lemma 5 to conclude that (x̄, ȳ, z̄), where ȳ = y(z̄, x̄), is an optimal solution of problems (P) and (DP). Finally, by setting (z*, λx*) = (z̄, λx̄) in (20), we get

lim_{k→∞} ||(z^k, λx^k) − (z̄, λx̄)||_2 = lim_{j→∞} ||(z^{k_j}, λx^{k_j}) − (z̄, λx̄)||_2 = 0,

i.e., {(z^k, λx^k)} converges to its unique limit (z̄, λx̄). □

5. Simulation experiments

In the section, we give some simulation experiments. All thealgorithms are run in the MATLAB 7.0 environment on a InterCore personal computer with 2.00 GHz CPU processor and2.00 GB RAM.

In the simulation experiments, we compare the performancesof the proposed alternate direction method with the primal-dualinterior point method for the second-order cone programmingproblem. As is know to all, the primal-dual interior point methodshave proved to be one of the most efficient class of methods forSOCP. Here the Matlab program codes for primal-dual interiorpoint method is designed from the software package by Sedumi[39]. The primal-dual interior point algorithm implemented inSeDuMi is described in [40]. The algorithm has an worst casebound Oð

ffiffiffinp

log EÞ, and treats initialization issues by means of theself-dual embedding technique of [41].

5.1. Numerical results for the random test problems

The first set of test problems is generated randomly. The test problems are divided into small scale, medium scale and large scale problems: the small scale problems have n from 10 to 90, the medium scale problems have n from 100 to 900, and the large scale problems have n from 1000 to 6000. The test problems are generated by the following steps:

Step 1: Let n = 2m. Generate a random matrix A with full row rank and a random vector x ∈ K. In addition, generate a random vector y ∈ R^m such that c − A^T y ∈ K.

Step 2: Compute b = Ax.

In the alternating direction method, we update the Lagrange multiplier with a step size; that is, we replace the update (9c) by

x^{k+1} = x^k + ρ (1/λ) (A^T y^{k+1} + z^{k+1} − c),

where ρ ∈ (0, (1+√5)/2). The termination criterion is |b^T y − c^T x| ≤ 10^{−6}. In the SeDuMi software, we choose the same termination criterion as for our algorithm. Here we set ρ = 1.6 and μ = 0.5, and choose the initial points x^0, y^0 randomly.

The test results are shown in Tables 1–3. In the tables, "ADM" stands for the alternating direction method, "SeDuMi" for the primal-dual interior point method in the SeDuMi software, "CPU" for the average CPU time (seconds), and "Number" for the average number of iterations. In Table 3, "–" denotes that the method does not run on our personal computer because it is "out of memory".

The results in the tables show that the primal-dual interior point method costs more CPU time than the alternating direction method for all three kinds of problems. We also see that the iteration count of the primal-dual interior point method is lower than that of the alternating direction method. Furthermore, when n > 5000, the primal-dual interior point method does not work because of "out of memory" on our personal computer, so our method needs less memory.

5.2. Numerical results for the realistic test problems

The second set of test problems consists of realistic problems. Several sets of realistic SOCP test problems are available; for example, in [9], Cai and Toh use two such sets:

(a) The first set consists of 18 SOCPs in the DIMACS library collected by Pataki and Schmieta, available at http://dimacs.rutgers.edu/Challenges/Seventh/Instances/.

(b) The second set consists of 10 SOCPs from the FIR Filter Optimization Toolbox of Scholnik and Coleman, available at http://www.csee.umbc.edu/dschol2/opt.html.

Here we select 10 problems from set (a) and 10 problems from set (b). The problems are shown in Table 4. In Table 4, an entry of the form "793×3" in the "SOC" column means that there are 793 3-dimensional second-order cones. The numbers under


Table 3
Comparative results for large scale problems.

n      ADM             SeDuMi
       CPU     Number  CPU    Number
1000   5.257   56      13     16
2000   40.73   77      97     16
3000   68.51   35      350    17
4000   231.2   56      678    17
5000   477.3   59      1363   18
6000   591.1   40      –      –

Table 4
The selected realistic problems.

Problems        Sparsity   m        n        SOC              LIN
nb              0.643      123      2383     793×3            4
nb-L1           0.0662     915      3176     793×3            797
nb-L2           0.780      123      4195     1×1677; 838×3    4
nb-L2-bessel    0.653      123      2641     1×123; 838×3     4
nql30           0.00115    3680     6302     900×3            3602
nql60           0.000293   14560    25202    3600×3           14402
nql180          0.000312   130080   226802   32400×3          129602
qssp30          0.00132    3691     7566     1891×4           2
qssp60          0.000347   14581    29526    7381×4           2
qssp180         0.000167   130141   261366   65341×4          2
wbNRL           0.948      460      19903    2×453; 7×260     17177
firL2a          0.500      1002     1003     1×1003           0
firL2           0.500      102      103      1×103            0
dsNRL           0.667      406      15897    1×138; 5424×3    0
firL2Linfalph   0.656      203      9029     1×203; 2942×3    0
firL2L1alph     0.00704    5868     9822     1×3845; 1992×3   1
firL1Linfalph   0.0327     3074     17532    5844×3           0
firLinf         0.664      402      11886    3962×3           0
firL2L1eps      0.0323     4124     8969     1×203; 3922×3    0
firL1Linfeps    0.00582    7088     13933    4644×3           1

Table 5
Comparative results for selected realistic problems.

Problems        ADM                   SeDuMi
                CPU    Accuracy       CPU    Accuracy
nb              4.88   9.6×10^-11     6.09   1.6×10^-11
nb-L1           7.00   6.7×10^-10     7.11   9.3×10^-9
nb-L2           1.71   1.3×10^-9      10.5   6.0×10^-9
nb-L2-bessel    1.25   8.6×10^-11     6.94   7.0×10^-10
nql30           1.07   1.3×10^-9      2.21   8.4×10^-9
nql60           3.47   1.3×10^-10     7.20   9.0×10^-9
nql180          33.9   6.2×10^-9      127    6.5×10^-8
qssp30          2.10   3.6×10^-10     3.88   2.1×10^-10
qssp60          7.16   1.4×10^-9      15.7   1.5×10^-9
qssp180         56.2   9.4×10^-8      300    5.3×10^-8
wbNRL           27.7   5.7×10^-11     496    9.5×10^-11
firL2a          11.3   8.9×10^-11     12.6   1.7×10^-11
firL2           0.80   3.2×10^-13     0.20   1.5×10^-13
dsNRL           3.27   1.7×10^-4      509    3.0×10^-11
firL2Linfalph   8.48   4.3×10^-4      47.7   9.4×10^-9
firL2L1alph     67.1   6.5×10^-4      18.7   1.5×10^-9
firL1Linfalph   583    9.7×10^-4      105    1.1×10^-9
firLinf         61.0   2.7×10^-4      335    1.1×10^-7
firL2L1eps      15.9   3.9×10^-4      68.9   1.9×10^-8
firL1Linfeps    84.4   3.3×10^-4      64.1   1.5×10^-9


the "LIN" column are the numbers of linear variables. "Sparsity" means the sparsity of the constraint matrices.

In the alternating direction method, we define

pinf = ||Ax − b||_2 / (1 + ||b||_2),  dinf = ||c − z − A^T y||_2 / (1 + ||c||_2),  gap = |b^T y − c^T x| / (1 + |b^T y| + |c^T x|).

We stop our algorithm when

Accuracy = max{ pinf, dinf, gap } ≤ ε,

for ε > 0. Here we set ρ = 1.6 and μ = 0.5, and choose the initial points x^0, y^0 randomly. In the SeDuMi software, we choose the same termination criterion as for our algorithm. The test results are shown in Table 5.

The results in Table 5 show that the primal-dual interior point method costs more CPU time than the alternating direction method at the same accuracy ε for the 10 problems in set (a) and the first 2 problems in set (b). For problem "firL2", our algorithm costs more CPU time than the primal-dual interior point method.

In addition, for the 10 FIR filter optimization problems in set (b), ADM attains the same accuracy as the primal-dual interior point method on the problems "firL2", "firL2a" and "wbNRL", but for the last 7 problems ADM cannot attain the higher accuracy. Because these problems belong to the same family, the nonlinearity of their objectives is similar; the number of second-order cones may instead affect the results, since the first 3 problems have only 9, 1 and 1 second-order cones respectively, while the last 7 problems have 5425, 2943, 1993, 5844, 3962, 3923 and 4644 second-order cones. Compared with the 3 "nql-xxx" problems and the 3 "qssp-xxx" problems, the FIR filter optimization problems have comparatively denser constraint matrices. In addition, the 4 "nb-xxx" problems are low-dimensional, and their numbers of second-order cones are smaller than those of the last 7 FIR filter optimization problems.

ADM appears to be beneficial for most problems, except for problems with both a large number of second-order cones and dense constraint matrices. For a problem with those characteristics, it appears preferable to use primal-dual interior point methods.

6. Conclusion

This paper extends the alternating direction method from semidefinite programming to second-order cone programming. The numerical results for the realistic test problems show that our method is efficient for some second-order cone programming problems, but for some FIR filter optimization problems ADM cannot attain higher accuracy. ADM is a first-order method, while SeDuMi implements a second-order method, so it is not completely surprising that SeDuMi obtains higher accuracy on some problems. For future research, the idea of a hybrid algorithm can be used: start with the first-order ADM, and then, on approaching a neighborhood of a solution, switch to a second-order primal-dual interior point method.

Acknowledgment

We would like to thank the anonymous reviewers for their constructive comments, from which we have benefited much. The work of Xuewen Mu is supported by the National Science Foundations for Young Scientists of China (Grant nos. 11101320 and 61201297) and the Fundamental Research Funds for the Central Universities (Grant no. K50511700007). The work of Yaling Zhang is supported by the Xi'an University of Science and Technology Cultivation Foundation in Shaan Xi Province of China (Program no. 2010032).


References

[1] Lobo MS, Vandenberghe L, Boyd S, Lebret H. Application of second order coneprogramming. Linear Algebra and Its Applications 1998;284:193–228.

[2] Lebret H, Boyd S. Antenna array pattern synthesis via convex optimization.IEEE Transactions on Signal Processing 1997;45:526–32.

[3] Lu WS, Hinamoto T. Optimal design of IIR digital filters with robust stabilityusing conic-quadratic-programming updates. IEEE Transactions on SignalProcessing 2003;51:1581–92.

[4] Luo ZQ. Applications of convex optimization in signal processing and digitalcommunication. Mathematical Programming 2003;97B:177–207.

[5] Rika Ita, Fujie T, Suyama K, Hirabayashi R. Design methods of FIR filters withsigned power of two coefficients using a new linear programming relaxationwith triangle inequalities. International Journal of Innovative Computing,Information and Control 2006;2:441–8.

[6] Monteiro RDC, Takashi Tsuchiya. Polynomial convergence of primal-dualalgorithms for the second-order cone program based on the MZ-family ofdirections. Mathematical Programming 2000;88:61–83.

[7] Alizadeh F, Goldfarb D. Second-order cone programming. MathematicalProgramming 2003;95:3–51.

[8] Kuo Yuju, Mittelmann Hans D. Interior point methods for second-order coneprogramming and OR applications. Computational Optimization and Applica-tion 2004;28:255–85.

[9] Cai Z, Toh K-C. Solving SOCP via a reduced augmented system. SIAM Journalon Optimization 2006;17:711–37.

[10] Vanderbei RJ, Yurttan H. Using LOQO to solve SOCP problems, /http://reference.kfupm.edu.sa/content/u/s/using_loqo_to_solve_second_order_cone_pr_110912.pdfS.

[11] Chen XD, Sun D, Sun J. Complementarity functions and numerical experi-ments for second-order cone complementarity problems. ComputationalOptimization and Applications 2003;25:39–56.

[12] Fukushima M, Luo ZQ, Tseng P. Smoothing functions for second-order conecomplementarity problems. SIAM Journal on Optimization 2002;12:436–60.

[13] Chen JS, Tseng P. An unconstrained smooth minimization reformulation of the second-order cone complementarity problems. Mathematical Programming 2005;104:293–327.

[14] Kanzow C, Ferenczi I, Fukushima M. On the local convergence of semismooth Newton methods for linear and nonlinear second-order cone programs without strict complementarity. SIAM Journal on Optimization 2009;20:297–320.

[15] Gabay D. Applications of the method of multipliers to variational inequalities. In: Fortin M, Glowinski R, editors. Augmented Lagrangian methods: applications to the numerical solution of boundary-value problems. Amsterdam: North-Holland; 1983. p. 299–331.

[16] Gabay D, Mercier B. A dual algorithm for the solution of nonlinear variational problems via finite element approximations. Computers and Mathematics with Applications 1976;2:17–40.

[17] He B, Liao L-Z, Han D, Yang H. A new inexact alternating directions method for monotone variational inequalities. Mathematical Programming 2002;92:103–18.

[18] He BS, Yang H, Wang SL. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. Journal of Optimization Theory and Applications 2000;106:337–56.

[19] Ye C, Yuan X. A descent method for structured monotone variational inequalities. Optimization Methods and Software 2007;22:329–38.

[20] Eckstein J, Bertsekas DP. An alternating direction method for linear programming. Report LIDS-P-1967, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA; 1990.

[22] Chen G, Teboulle M. A proximal-based decomposition method for convex minimization problems. Mathematical Programming 1994;64:81–101.

[23] Eckstein J, Bertsekas DP. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming 1992;55:293–318.

[25] Kiwiel KC, Rosa CH, Ruszczynski A. Proximal decomposition via alternating linearization. SIAM Journal on Optimization 1999;9:668–89.

[26] Kontogiorgis S, Meyer RR. A variable-penalty alternating directions method for convex optimization. Mathematical Programming 1998;83:29–53.

[27] Malick J, Povh J, Rendl F, Wiegele A. Regularization methods for semidefinite programming. SIAM Journal on Optimization 2009;20:336–56.

[28] Tseng P. Alternating projection-proximal methods for convex programming and variational inequalities. SIAM Journal on Optimization 1997;7:951–65.

[29] Wang Y, Yang J, Yin W, Zhang Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences 2008;1:248–72.

[30] Yang J, Zhang Y, Yin W. An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM Journal on Scientific Computing 2008;31:2842–65.

[31] Yu Z. Solving semidefinite programming problems via alternating direction methods. Journal of Computational and Applied Mathematics 2006;193:437–45.

[32] Wen Z, Goldfarb D, Yin W. Alternating direction augmented Lagrangian methods for semidefinite programming. Mathematical Programming Computation 2010;2:203–30.

[34] Hiriart-Urruty J-B, Lemaréchal C. Convex analysis and minimization algorithms. I. Fundamentals. Grundlehren der Mathematischen Wissenschaften [Fundamental principles of mathematical sciences], vol. 305. Berlin: Springer; 1993.

[35] Faraut J, Korányi A. Analysis on symmetric cones. Oxford Mathematical Monographs. New York: Oxford University Press; 1994.

[36] Outrata JV, Sun D. On the coderivative of the projection operator onto the second-order cone. Set-Valued and Variational Analysis 2008;16:999–1014.

[37] Kong L, Tunçel L, Xiu N. Clarke generalized Jacobian of the projection onto symmetric cones. Set-Valued and Variational Analysis 2009;17:135–51.

[38] Sun J, Zhang S. A modified alternating direction method for convex quadratically constrained quadratic semidefinite programs. European Journal of Operational Research 2010;207:1210–20.

[39] Sturm JF. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 1999;11–12:625–53.

[40] Sturm JF. Central region method. In: High performance optimization. Kluwer Academic Publishers; 2000. p. 157–94.

[41] Ye Y, Todd MJ, Mizuno S. An O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research 1994;19:53–67.