
Improved approximation of Max-Cut on graphs of bounded degree


Journal of Algorithms 43 (2002) 201–219

www.academicpress.com


Uriel Feige (a), Marek Karpinski (b), and Michael Langberg (a,*)

(a) Department of Computer Science and Applied Mathematics, the Weizmann Institute of Science, Rehovot 76100, Israel

(b) Department of Computer Science, University of Bonn, Bonn, Germany

Received 7 November 2000; received in revised form 9 December 2001; accepted 6 March 2002

Abstract

Let α ≈ 0.87856 denote the best approximation ratio currently known for the Max-Cut problem on general graphs. We consider a semidefinite relaxation of the Max-Cut problem, round it using the random hyperplane rounding technique of Goemans and Williamson [J. ACM 42 (1995) 1115–1145], and then add a local improvement step. We show that for graphs of degree at most ∆, our algorithm achieves an approximation ratio of at least α + ε, where ε > 0 is a constant that depends only on ∆. Using computer assisted analysis, we show that for graphs of maximal degree 3 our algorithm obtains an approximation ratio of at least 0.921, and for 3-regular graphs the approximation ratio is at least 0.924. We note that for the semidefinite relaxation of Max-Cut used by Goemans and Williamson the integrality gap is at least 1/0.885, even for 2-regular graphs. © 2002 Elsevier Science (USA). All rights reserved.

1. Introduction

Given a graph G = (V,E), the Max-Cut problem on G is the problem of finding a partition (X,Y) of the vertex set V which maximizes the number of edges with one endpoint in X and another in Y. An r-approximation algorithm for the Max-Cut problem is an algorithm that when invoked on a graph G yields

* Corresponding author. E-mail address: [email protected] (M. Langberg).

0196-6774/02/$ – see front matter © 2002 Elsevier Science (USA). All rights reserved. PII: S0196-6774(02)00005-6


a partition (X,Y) of value at least r times the value of the optimal partition (such an algorithm is also said to have an approximation ratio of r).

The currently best known approximation algorithm for the Max-Cut problem is the algorithm of Goemans and Williamson [1]. We denote this algorithm as the GW algorithm. Given a graph G with n vertices, the GW algorithm has two steps. In the first step a semidefinite relaxation of the Max-Cut problem on G is solved, yielding a set of unit vectors {v1, ..., vn} in Rn. In the second step this set of vectors is randomly rounded into a partition (X,Y). The GW algorithm is shown to have an expected approximation ratio of at least α ≈ 0.87856.

Given G, let (X,Y) be the partition obtained by the GW algorithm. It is evident from [1] that it might be the case that a vertex of G and more than half its neighbors end up on the same side of the partition (X,Y). We denote such vertices as misplaced vertices. Clearly in such a case the partition (X,Y) can be improved by moving the misplaced vertex from one side of the partition to the other, thus yielding a new partition cutting more edges. In the following work, we consider enhancing the GW algorithm with an additional local step that moves misplaced vertices from one side of the partition to the other, until no such vertices are left. We prove that the enhanced algorithm improves upon the original approximation ratio of α by a constant fraction when the maximal degree ∆ of the given graph is constant. Namely, we prove an approximation ratio of 0.921 on graphs of maximal degree three, of 0.924 on regular graphs of degree three, and of α + Ω(1/∆^4) on graphs of maximal degree ∆. Our results apply to graphs G for which we assume that there are no weights associated with the edge set E. Our analysis can be extended to graphs with edge weights as well, although the results we obtain are weaker (we briefly discuss this in Section 5).

The Max-Cut problem on bounded degree graphs is mentioned in [2], where it is shown that it is NP-hard to approximate the Max-Cut problem on regular graphs of degree 3 beyond the ratio of 0.997. To the best of our knowledge, the best known approximation ratio for the Max-Cut problem on bounded degree graphs is that of the general problem, which is α ≈ 0.87856. The Max-Cut problem restricted to 3-regular graphs has been previously studied in [3] (where a parallel approximation algorithm of ratio 3/4 is presented), and in [4] (where it is shown that a local maximum can be found in polynomial time).

Our results are achieved using an algorithm that differs from the one presented in [1] not only in the additional improvement step, but also in the fact that an improved semidefinite relaxation of the Max-Cut problem is used as a base of our algorithm. Specifically, we add triangle constraints (mentioned in [5]) to our relaxation. These additional constraints have been studied in the past in the context of several problems including the Max-Cut problem. The results of [6] imply that the value of the Max-Cut semidefinite relaxation with triangle constraints is equal to the value of the optimal cut on planar graphs. Without these constraints it is shown in [7] that the original semidefinite relaxation has an integrality gap arbitrarily close to 1/α. In [1] it is shown that the original


relaxation has an integrality gap of at least 1/0.885 even on 2-regular graphs (the 5-cycle). In [8] these constraints are used in order to achieve an improved approximation ratio for the Max-Cut problem on graphs which have a fractional triangle covering. The addition of such triangle constraints to the semidefinite relaxation of the Max-Cut problem on general graphs is not known to improve the worst case integrality gap of such a relaxation, nor is it known to yield an improved approximation algorithm. In [9] it is shown that the addition of such constraints will not improve the approximation ratio of the algorithm presented in [1] as is. In our work, the addition of such triangle constraints to the semidefinite relaxation of the Max-Cut problem on graphs of maximal degree three yields an improved approximation algorithm with a ratio of 0.921. As the original semidefinite relaxation of [1] has an integrality gap of at least 1/0.885 on a 2-regular graph, we conclude that the addition of such triangle constraints to our semidefinite relaxation is crucial to the success of our algorithm.

As in the algorithm of [1], many other algorithms based on semidefinite programming use the random hyperplane rounding technique, and are analyzed in a local manner (for instance [5,8,10]). That is, for each edge a local expected approximation ratio is computed. This ratio then holds as an expected approximation ratio for the algorithm as a whole due to linearity of expectation.¹

In our algorithm for the Max-Cut problem on graphs with maximal degree three, we present an analysis in which we compute a local approximation ratio not on single edges but on clusters of edges. In particular, we concentrate on pairs of edges which share a common vertex. Such an analysis allows us to evaluate the contribution of our additional improvement step as a function of the vector configuration obtained by the semidefinite relaxation.

The paper is structured as follows. In Section 2 we briefly review the Goemans and Williamson algorithm [1] for the Max-Cut problem. In Section 3 we present our enhanced algorithm and its analysis for graphs of maximal degree three. In Section 4 we present our results for graphs of maximal degree ∆.

2. Preliminaries

2.1. Notation

Throughout our work we assume that G = (V,E) is an undirected graph, with vertex set V = {1, ..., n}. We assume that there are no weights associated with the edge set E. In general our results generalize to the case in which G has edge weights; we briefly discuss this in Section 5. For any set X ⊆ V let w(X) be the number of edges with one endpoint in X and another in V \ X. Denote the total number of edges by |E|, and the number of edges cut by the optimal partition of G as Opt(G). w(X) is referred to as the value or weight of the cut (X, V \ X). For any natural number n we denote the set {1, ..., n} by [n]. Finally, throughout our work α will represent the expected approximation ratio of the GW algorithm for Max-Cut [1].

¹ We would like to note that other techniques have been used in the analysis of approximation algorithms based on semidefinite programming. For example, Nesterov [11] presents a general framework for analyzing the quality of semidefinite relaxations to quadratic optimization problems. Applying his techniques for the semidefinite relaxation of Max-Cut results in an approximation ratio of 2/π ≈ 0.6366.
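To make the notation concrete, the short sketch below (our own illustration; the names cut_value and opt_cut are not from the paper) computes w(X) and, for small graphs, Opt(G) by exhaustive search:

```python
from itertools import combinations

def cut_value(n, edges, X):
    """w(X): the number of edges with one endpoint in X and the other in V \\ X."""
    X = set(X)
    return sum(1 for i, j in edges if (i in X) != (j in X))

def opt_cut(n, edges):
    """Opt(G), computed by brute force over all subsets -- feasible only for small n."""
    vertices = range(1, n + 1)
    return max(cut_value(n, edges, X)
               for size in range(n + 1)
               for X in combinations(vertices, size))

# Example: the 5-cycle C5, whose maximum cut contains 4 edges.
C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
```

For instance, the partition X = {1, 3} of the 5-cycle cuts 4 of its 5 edges, which is optimal.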

2.2. The GW algorithm

The Max-Cut problem on a graph G = (V,E) with |V| = n can be represented as a quadratic integer program:

(QI-MC) Maximize Σ_{eij∈E} (1 − xi·xj)/2
subject to: xi ∈ {−1, 1} for i ∈ [n].

The above program can be understood as follows. With each vertex i ∈ V we associate a variable xi; the value of xi (which is either +1 or −1) indicates in which side of the cut the respective vertex is placed. For each edge eij ∈ E, if xi ≠ xj (corresponding to the case in which eij is in a cut) then the value of (1 − xi·xj)/2 is 1, and if xi = xj then this value is 0.

The requirement xi ∈ {−1, 1} can be relaxed by representing each variable xi by a unit n-dimensional vector vi ∈ Sn (here Sn is the unit sphere) and the multiplication xi·xj by the inner product ⟨vi, vj⟩:

(SDP-MC) Maximize Σ_{eij∈E} (1 − ⟨vi, vj⟩)/2
subject to: vi ∈ Sn for i ∈ [n].

As every solution of (QI-MC) is also a solution of (SDP-MC), the value of (SDP-MC) is at least as large as that of (QI-MC). (SDP-MC) can be solved (up to arbitrary precision) in polynomial time using semidefinite programming (see [1]).

The algorithm suggested in [1] for the Max-Cut problem has two steps. In the first step the relaxation (SDP-MC) is solved. A solution to (SDP-MC) is a set of unit vectors v1, ..., vn in Rn, rather than a cut of G. In the second step, a cut (X,Y) of G is obtained by rounding the set of optimal vectors v1, ..., vn using the random hyperplane rounding technique. That is, a random vector r ∈ Sn is chosen and the set X is defined to be the set of vertices i with corresponding vectors vi that have a positive inner product with r. The vector r can be viewed as the normal to a random hyperplane passing through the origin.
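The rounding step can be sketched in a few lines. The code below is our own illustration (names are ours): each vi is a coordinate tuple, and the normal r is drawn coordinate-wise from N(0,1), which makes its direction uniform on the sphere:

```python
import random

def hyperplane_round(vectors, seed=None):
    """Round unit vectors {v1, ..., vn} to a partition (X, Y) using a random
    hyperplane through the origin: X collects the vertices whose vectors have
    positive inner product with the random normal r."""
    rng = random.Random(seed)
    dim = len(next(iter(vectors.values())))
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # normal of the hyperplane
    X = {i for i, v in vectors.items()
         if sum(vc * rc for vc, rc in zip(v, r)) > 0}
    return X, set(vectors) - X
```

Note that antipodal vectors, such as (1, 0) and (−1, 0), are separated by every hyperplane that misses them.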

It is shown in [1] that the expected contribution of each edge eij ∈ E to the cut (X,Y) is θij/π, where θij is the angle between the vectors vi and vj. As the contribution of each edge to the objective function of (SDP-MC) is (1 − cos θij)/2, we conclude that the expected approximation ratio achieved on each edge is 2θij/(π(1 − cos θij)). This expected ratio is minimal when θij = θ0 ≈ 2.3311, and in this case obtains the value α ≈ 0.87856. We conclude that the expected weight w(X) of the partition (X,Y) obtained by the random hyperplane rounding technique is at least α·Opt(G), and will be exactly α·Opt(G) when for all edges eij we have that the angle θij is θ0 or zero.
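The constants θ0 and α can be reproduced numerically. This sketch (ours, not from the paper) scans the per-edge ratio 2θ/(π(1 − cos θ)) over a fine grid of (0, π]:

```python
import math

def edge_ratio(theta):
    """Expected per-edge approximation ratio: (theta/pi) / ((1 - cos theta)/2)."""
    return 2 * theta / (math.pi * (1 - math.cos(theta)))

# The ratio tends to infinity as theta -> 0, so the minimum is interior.
grid = [k * math.pi / 100000 for k in range(1, 100001)]
theta0 = min(grid, key=edge_ratio)
alpha = edge_ratio(theta0)
```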

3. Max-Cut on graphs of maximal degree 3

We present an algorithm which, given G = (V,E) as above, will output a partition (X,Y) of V such that w(X) ≥ r·Opt(G) for r ≈ 0.921. Our algorithm is an extension of the Max-Cut algorithm presented in [1].

In the first step of our algorithm an enhanced semidefinite relaxation of the Max-Cut problem on G is solved to achieve a set of n unit vectors {v1, ..., vn}. These vectors are then rounded into a cut (X,Y) of G using the random hyperplane rounding technique of [1]. As mentioned in the introduction, it may be the case that there are misplaced vertices in the cut (X,Y) (i.e., vertices with more neighbors on their side of the cut than on the opposite side). In the last step of our algorithm, we move these vertices from one side to the other until no more misplaced vertices remain. This is done in a greedy manner. Below we present the semidefinite relaxation used in the first step of our algorithm and define the greedy procedure used in the last step.

3.1. The semidefinite relaxation

Let T be the set of triplets (i, j, k) in which i, j, k are vertices in V and both j and k are neighbors of i (for technical reasons we also assume j < k). The semidefinite relaxation we use in our algorithm is

(SDP-Cut) Maximize Σ_{eij∈E} (1 − ⟨vi, vj⟩)/2

subject to:

(1) vi ∈ Sn for i ∈ [n];
(2) ⟨vi, vj⟩ + ⟨vi, vk⟩ + ⟨vj, vk⟩ ≥ −1,
    ⟨vi, vj⟩ − ⟨vi, vk⟩ − ⟨vj, vk⟩ ≥ −1 for all i, j, k ∈ [n];
(3) ⟨vi, vj⟩ + ⟨vi, vk⟩ + ⟨vj, vk⟩ = −1 for all (i, j, k) ∈ T.

Notice that we have added two families of constraints to the original relaxation of [1]: the triangle constraints (2) and the tight triangle constraints (3). The triangle constraints are valid when the vectors {v1, ..., vn} are restricted to be one dimensional and thus can be added to (SDP-Cut). It is shown in [1] that


the original semidefinite relaxation (SDP-MC) (without triangle constraints) has an integrality gap of at least 1/0.885 even on 2-regular graphs (the 5-cycle). Hence these additional triangle constraints are crucial to our analysis, which results in an approximation ratio of 0.921 on such graphs.

For the validity of the tight triangle constraints consider an optimal partition (X,Y) of G = (V,E). As G has maximal degree three, for every vertex v it cannot be the case that v and two of its neighbors lie on the same side of the partition (i.e. v is misplaced). In such a case moving v to the other side of the partition would increase the weight of the partition, which is a contradiction to its optimality. Hence we may add a corresponding constraint to (SDP-Cut) which rules out the possibility of misplaced vertices when the corresponding vectors of (SDP-Cut) are restricted to be one dimensional. This constraint is the tight triangle constraint.
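Both families of constraints are easy to verify mechanically in the one-dimensional case: for xi, xj, xk ∈ {−1, 1} the pairwise-product sum xi·xj + xi·xk + xj·xk is 3 when all three agree and −1 otherwise. The check below (our own addition) enumerates all eight sign patterns:

```python
from itertools import product

for xi, xj, xk in product((-1, 1), repeat=3):
    s = xi * xj + xi * xk + xj * xk
    # Triangle constraints (2) hold for every one-dimensional assignment.
    assert s >= -1
    assert xi * xj - xi * xk - xj * xk >= -1
    # Tight triangle constraint (3) holds exactly when i, j, k are not all
    # on the same side of the partition, i.e. when i is not misplaced.
    assert (s == -1) == (not (xi == xj == xk))
```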

3.2. The greedy phase

As mentioned earlier, after the first step of our algorithm it might be the case that for some triplet (i, j, k) ∈ T, defined above, the vertices i, j and k lie on the same side of the partition. In such a case, moving the misplaced vertex i to the other side of the partition will increase its weight. We call such a triplet (i, j, k) ∈ T, in which i is misplaced, a good triplet.

Given a partition (X,Y) we are interested in moving all misplaced vertices until none are left. In general, at each step of this greedy process we could decide to move the one misplaced vertex that increases the weight of the partition by most. But as moving one misplaced vertex affects other vertices, we also want the moved vertex not to destroy many good triplets (a good triplet is destroyed if it is no longer good). In order to combine these two interests, at each stage of our greedy process we move the vertex for which the ratio between the weight added to the partition by moving it from one side of the partition to the other, and the number of good triplets destroyed by this act, is maximal. We denote the resulting algorithm as ACut.
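The greedy phase described above can be sketched as follows (our own unoptimized illustration; for graphs of maximal degree 3, a triplet whose three vertices lie on one side automatically has i misplaced):

```python
def greedy_phase(n, edges, X):
    """Sketch of the greedy step: while misplaced vertices exist, move the one
    maximizing (edges gained) / (good triplets destroyed)."""
    X = set(X)
    adj = {v: set() for v in range(1, n + 1)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)

    def gain(v):
        # Weight added by moving v: same-side neighbours minus cut neighbours.
        same = sum(1 for u in adj[v] if (u in X) == (v in X))
        return 2 * same - len(adj[v])

    def good_triplets():
        # Triplets (i, j, k), j < k neighbours of i, all on one side.
        return {(i, j, k)
                for i in adj for j in adj[i] for k in adj[i]
                if j < k and (j in X) == (i in X) and (k in X) == (i in X)}

    while True:
        misplaced = [v for v in adj if gain(v) > 0]
        if not misplaced:
            return X
        before = good_triplets()
        best, best_ratio = None, float("-inf")
        for v in misplaced:
            g = gain(v)                           # edges gained by moving v
            X.symmetric_difference_update({v})    # tentative move
            destroyed = len(before - good_triplets())
            X.symmetric_difference_update({v})    # undo
            ratio = g / max(destroyed, 1)
            if ratio > best_ratio:
                best, best_ratio = v, ratio
        X.symmetric_difference_update({best})
```

Each accepted move strictly increases the cut weight, so the loop terminates after at most |E| moves.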

Algorithm ACut.

1. Solve (SDP-Cut) to obtain an optimal vector configuration {v1, ..., vn} of value Z. Round the vector configuration using the random hyperplane rounding technique from [1]. Denote the partition obtained by (X,Y).

2. Greedily move misplaced vertices from one side of the partition to the otheraccording to the procedure above.

Theorem 3.1. ACut has an expected approximation ratio of r ≥ 0.921.

Proof. First we note that w.l.o.g. we may assume that G does not have any vertices of degree 1. Otherwise we may run algorithm ACut on the graph G′ obtained by iteratively removing all vertices of degree one in G. It can be seen that an approximation algorithm with ratio r on G′ yields an approximation algorithm with ratio at least r on G.

Assume G is as above, let {v1, ..., vn} be the optimal vector configuration, and let (X,Y) be the partition after step (1) of algorithm ACut. Let W = w(X) be the weight of this partition. Recall, using the analysis of [1], that in expectation W ≥ α·Opt(G). Furthermore, the expected value of W is α·Opt(G) only if for each edge eij in G the angle between its corresponding vectors vi and vj is either 0 or θ0 ≈ 2.3311. Our main observation is the fact that in both these worst cases (and in the case in which we are close to being in the worst case) there is some constant probability that a vertex and two of its neighbors lie on the same side of the partition (X,Y). In such cases moving the vertex from one side of the partition to the other will increase the weight of the partition.

Geometrically speaking, one can view our observation in the following way. Let v be some vertex in V and y1, y2 be two of its neighbors. Denote the corresponding vectors in the optimal vector configuration as v, y1, y2, respectively. Assume that the angles between the vectors v and y1, v and y2 are exactly θ0 (a similar analysis can be given for the case in which these angles are close to θ0). Then by constraint (2) of (SDP-Cut), it cannot be the case that all three vectors lie on the same plane. Furthermore, this constraint implies that the angle between the vector v and the plane containing the vectors y1 and y2 is at least 0.57, which in turn implies that with probability at least 0.07 the vertex v will be misplaced (after the randomized rounding).

The upcoming lemmas extend this observation. We use a refinement of the set T. For each edge eij let dij be the number of triplets in T in which eij appears. It can be seen that in a graph of minimum degree 2 and maximum degree 3, dij = 2 if the degrees of vi and vj are both 2, dij = 3 if the degree of vi differs from the degree of vj, and dij = 4 otherwise. Let T be the set of triplets in G; for l = 4, 5, ..., 8 denote the set of triplets (i, j, k) ∈ T in which dij + dik = l by Tl. Given a partition (X,Y) we denote by Sl the number of good triplets in Tl.

Lemma 3.2. Let (X,Y) be some partition in G of weight W. Let T, Tl, and Sl for l = 4, ..., 8 be defined as above. Executing step (2) of algorithm ACut on the partition (X,Y) will yield a new partition of weight at least

W + (2/3)S4 + (1/2)S5 + (2/5)S6 + (3/8)S7 + (1/3)S8.

Proof. Roughly speaking, we define the contribution of each triplet (i, j, k) that is good in (X,Y) as the number of edges it adds to the partition the moment it is destroyed by the greedy phase of our algorithm. Let τ be one of the vertices i, j, or k. If (i, j, k) is destroyed by the movement of the vertex τ, and by this act p edges are added to the partition and q triplets (including (i, j, k)) are destroyed, we fix the contribution of the triplet (i, j, k) to be p/q. Hence, due to the nature of our greedy phase, we may bound the contribution of the good triplet (i, j, k) from below by computing the ratio between the number of edges added to the partition and the number of triplets destroyed by the act of moving any one of the vertices i, j, or k from one side of the partition to the other. By finding such a lower bound for each good triplet in Tl (l = 4, ..., 8) we conclude our assertion.

Note that for any good triplet (i, j, k) and any vertex τ as above, the contribution of moving τ from one side of the partition to the other is at least one edge, while this act will destroy at most 9 triplets. It follows that we may trivially claim that executing step (2) of algorithm ACut on the given partition (X,Y) will yield a new partition of weight at least

W + (1/9)(S4 + S5 + S6 + S7 + S8).

The following case analysis refines this trivial analysis and provides a full proof of the asserted lemma.

Case 1. Let (i, j, k) be a good triplet in T4, i.e. the vertices i, j and k are of degree 2, and all lie on the same side of the partition (X,Y). As the greedy step (2) of ACut continues as long as there is some misplaced vertex, the triplet (i, j, k) will be destroyed sometime during our greedy procedure. Denote the vertex moved when (i, j, k) is destroyed by τ (τ is either i, j or k). Let p be the number of edges added to the partition when (i, j, k) is destroyed, and q be the number of good triplets destroyed along with (i, j, k).

As mentioned above the degrees of i, j and k are 2. Hence, if (i, j, k) is a good triplet, then by moving the vertex i we destroy at most 3 triplets and increase the weight of the partition by exactly 2 edges. Due to the nature of our greedy phase, we conclude that by moving τ from one side of the partition to the other, the ratio between the number of edges added to the partition and the number of triplets destroyed is at least 2/3. We conclude that when (i, j, k) is destroyed it contributes at least 2/3 to the partition.

Case 2. Let (i, j, k) be a good triplet in T5; using the same line of analysis displayed in the previous case, it is enough to analyze the movement of the vertex i from one side of the partition to the other. Assume that the vertex j is of degree 3 and vertices i, k are of degree 2 (it cannot be the case that i is of degree 3). In Fig. 1 a schematic view of our case is presented. Solid lines represent edges that preserve sides of the partition (for instance the vertices i, j and k lie on the same side of the partition), dotted lines represent edges that are not known to preserve sides of the partition, and bold lines represent edges that are known not to preserve sides of the partition. It can be seen that by moving i from one side of the partition to the other we add exactly 2 edges to the partition, but destroy at most 4 triplets. We conclude that (i, j, k) contributes at least 1/2 edges to the partition.


Fig. 1. Case 2.

Fig. 2. Case 3.

Case 3. Let (i, j, k) be a good triplet in T6. There are two possibilities: either the vertex i is of degree 2 and the vertices j, k are of degree 3, or the vertex i is of degree 3 and j, k are of degree 2. Both cases are presented in Fig. 2. In the first case (a), moving i to the other side of the partition increases the weight of the partition by 2 edges and destroys at most 5 triplets. Hence, if we are in this case then (i, j, k) contributes 2/5 when it is destroyed.

In the second case we consider two sub-cases:

(b1) the case in which one of the vertices j and k is on the same side as its additional neighbor x;
(b2) the case in which the additional neighbors of j and k are both on the opposite side of the partition.


Fig. 3. Case 4.

In the first case we have that moving j from one side of the partition to the other will contribute 2 edges to the cut and destroy at most 5 triplets, yielding a ratio of 2/5, and in the second we obtain a ratio of 3/5 by again moving j (note that in the case (b2) one must fix the side of the third neighbor x of i and only then analyze); thus we conclude our assertion.

Case 4. Let (i, j, k) be a good triplet in T7. It must be the case that the vertex i is of degree 3 and the vertices j, k are of different degree (we assume that j is of degree 2). We consider three cases that are presented in Fig. 3. In the first case (a) all three neighbors of i are on the same side as i. In such a case one can see that in moving i a ratio of at least 3/8 is achieved. In the second and third cases we assume that the additional neighbor of i is on the opposite side of the partition.

In the second case (b) we assume that the additional neighbor x of j is on the same side as j. In this case by moving j we achieve a ratio of at least 2/4.

Finally, in the third case ((c1) and (c2)) we assume that the additional neighbor of j is also on the opposite side of the partition, and we consider two situations: the first when all additional neighbors of k are on the same side of the partition as k, and the second when there is some neighbor x of k on a different side. It can be seen that moving k in the first case results in a ratio of at least 3/8, and moving i in the second results in a ratio of at least 1/2.

Fig. 4. Case 5.

Case 5. Let (i, j, k) be a good triplet in T8. It must be the case that all three vertices are of degree 3. In Fig. 4 three sub-cases are considered. In case (a) all three neighbors of i are on the same side of the partition as i; in this case by moving i we obtain a ratio of 3/9. In the remaining cases (b), (c) we assume that the additional neighbor of i is on the opposite side of the partition. In case (b) we assume that for one of the vertices j or k, all three of its neighbors are on the same side of the partition as it is (in Fig. 4 we assume the above for j). In such a case moving j results in a ratio ≥ 3/8. In the last case (c) i, j, and k have at least one neighbor on an opposite side. In this case after moving i we have a ratio ≥ 1/3. ✷

Corollary 3.3. Let W be the weight of the partition (X,Y) obtained by step (1) of algorithm ACut, and let Sl (l = 4, ..., 8) be the number of good triplets in (X,Y) according to the above definition. The expected weight of the partition returned by algorithm ACut is

E[w(ACut)] = E[ W + (2/3)S4 + (1/2)S5 + (2/5)S6 + (3/8)S7 + (1/3)S8 ]

= E[W] + Σ_{l=4..8} αl·E[Sl]

= Σ_{eij} Pr(eij is cut) + Σ_{l=4..8} Σ_{(i,j,k)∈Tl} αl·Pr(vi, vj, vk are not separated)

= Σ_{l=4..8} Σ_{(i,j,k)∈Tl} [ Pr(eij is cut)/dij + Pr(eik is cut)/dik + αl·Pr(vi, vj, vk are not separated) ],

where (α4, α5, ..., α8) = (2/3, 1/2, 2/5, 3/8, 1/3).

Proof. All probabilities above are taken over random hyperplanes cutting the unit sphere of Rn. The second and third equalities are due to linearity of expectation, and the fourth is due to the fact that each edge eij is in exactly dij triplets in T. ✷

Lemma 3.4. Let l ∈ {4, ..., 8}. For every triplet (i, j, k) ∈ Tl we have that

Pr(eij is cut)/dij + Pr(eik is cut)/dik + αl·Pr(vi, vj, vk are not separated)
    ≥ r·[ (1 − ⟨vi, vj⟩)/(2dij) + (1 − ⟨vi, vk⟩)/(2dik) ]

for r = 0.921.

Proof. Let l = dij + dik. Define the following functions:

f(θij, θik) = θij/(dij·π) + θik/(dik·π) + αl·(1 − (1/(2π))·(θij + θik + arccos(−1 − cos(θij) − cos(θik)))),

g(θij, θik) = (1 − cos(θij))/(2dij) + (1 − cos(θik))/(2dik),

h(θij, θik) = f(θij, θik) / g(θij, θik),

where θij (θik) is the angle between vi and vj (vi and vk).

In [1] it is shown that the probability that three unit vectors vi, vj, and vk are not separated by the random hyperplane rounding technique is 1 − (θij + θik + θjk)/(2π). Furthermore recall that for every triplet (i, j, k) ∈ T it is the case that cos(θij) + cos(θik) + cos(θjk) = −1 (constraint (3) of (SDP-Cut)). We conclude that the function f(θij, θik) represents the expected contribution of the triplet (i, j, k) to the cut. Similarly, the function g(θij, θik) represents the contribution of each triplet (i, j, k) to the objective function of (SDP-Cut).

In order to prove the above claim it suffices to show that h(θij, θik) > r in the range θij ∈ [0, π], θik ∈ [π − θij, π] (the triangle constraints added to (SDP-Cut) imply that cos(θij) + cos(θik) ≤ 0). By evaluating the value of h over a number of points in the above range and bounding the derivative of h over the whole range our lemma is proven (we use the facts that dij, dik ∈ {2, 3, 4} and l = dij + dik ∈ {4, ..., 8}). ✷


By Corollary 3.3 and Lemma 3.4 we have that the expected weight of the partition obtained by algorithm ACut is

E[w(ACut)] ≥ r·Σ_{l=4..8} Σ_{(i,j,k)∈Tl} [ (1 − ⟨vi, vj⟩)/(2dij) + (1 − ⟨vi, vk⟩)/(2dik) ] = r·Z ≥ r·Opt(G),

for r = 0.921.

This completes the proof of Theorem 3.1. The same line of analysis yields a 0.924 approximation ratio on 3-regular graphs. ✷

4. Max-Cut on graphs of maximal degree ∆

In the following section we consider the Max-Cut problem on graphs of maximal degree ∆. We present a slightly different algorithm and analysis than the ones presented in the case where ∆ = 3, and achieve an improved approximation ratio of α + Ω(1/∆^4).

As in the previous section, the starting point of our enhanced algorithm is the following semidefinite relaxation of the Max-Cut problem on a given graph G = (V,E):

(SDP-Cut) Maximize Σ_{eij∈E} (1 − ⟨vi, vj⟩)/2

subject to:

(1) vi ∈ Sn for i ∈ [n];
(2) ⟨vi, vj⟩ + ⟨vi, vk⟩ + ⟨vj, vk⟩ ≥ −1,
    ⟨vi, vj⟩ − ⟨vi, vk⟩ − ⟨vj, vk⟩ ≥ −1 for all i, j, k ∈ [n].

Note that constraint (3) used in the previous section is no longer valid. As earlier, solving (SDP-Cut) on a given graph G, we obtain a set of n vectors corresponding to the vertices of G. We will concentrate on the case in which for all edges eij in G the angle between the corresponding vectors vi and vj is equal to θ0 ≈ 2.3311, for in this case the random hyperplane technique will yield a partition (X,Y) of expected weight no better than α times the weight of the optimal partition in G.²

In this worst case, we will show that there is some probability (depending on ∆ alone) that a vertex and more than half its neighbors lie on the same side of the partition obtained by the random hyperplane rounding technique. As earlier, we denote such vertices as misplaced, and by adding an additional step which moves misplaced vertices from one side of the partition to the other, we are able to achieve an improved approximation ratio. In Theorem 4.3 we show that improving the approximation ratio for this worst case scenario suffices to obtain an improved approximation ratio when the angles θij corresponding to edges eij are not necessarily all of value θ0.

² As noted earlier, there are other worst cases for the [1] algorithm, namely when for some edges eij the angles between the corresponding vectors are zero. In [8] it is shown how to deal with these cases by the use of an improved rounding technique which uses outward rotations. We deal with such cases in a different manner which will be explained in the proof of Theorem 4.3.

Consider the following algorithm for the Max-Cut problem on graphs with bounded degree ∆.

Algorithm ACut.

1. Solve the semidefinite relaxation (SDP-Cut) and round the resulting optimal vector configuration using the random hyperplane rounding technique. Denote the partition obtained by (X,Y).

2. Move misplaced vertices in (X,Y) from one side of the partition to the other until no such vertices are left.
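The two steps above can be sketched as follows (a minimal illustration under our own naming; solving the SDP itself is assumed to happen elsewhere, and a vertex is treated as misplaced when strictly more than half of its neighbors share its side):

```python
import random

def random_hyperplane_partition(vectors):
    """Step 1: round unit vectors v1..vn with a random hyperplane whose
    normal r has i.i.d. standard normal coordinates."""
    dim = len(vectors[0])
    r = [random.gauss(0.0, 1.0) for _ in range(dim)]
    return [sum(vi * ri for vi, ri in zip(v, r)) >= 0 for v in vectors]

def local_improvement(adj, side):
    """Step 2: while some vertex is misplaced (strictly more than half of
    its neighbors on its own side), move it across.  Each move gains at
    least one cut edge, so the loop terminates."""
    changed = True
    while changed:
        changed = False
        for i, nbrs in enumerate(adj):
            same = sum(1 for j in nbrs if side[j] == side[i])
            if 2 * same > len(nbrs):
                side[i] = not side[i]
                changed = True
    return side

def cut_value(adj, side):
    """Number of edges crossing the partition (adj is symmetric)."""
    return sum(1 for i, nbrs in enumerate(adj)
               for j in nbrs if i < j and side[i] != side[j])
```

On a path 0–1–2 started with all three vertices on one side, the improvement step flips both endpoints and recovers the optimal cut of value 2.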

Let {v1, . . . , vn} be the vector configuration obtained after solving the semidefinite relaxation (SDP-Cut) on a given graph G. Let (X,Y) be the partition obtained after rounding this vector configuration using the random hyperplane rounding technique [1]. Define an edge eij to be bad if the inner product 〈vi, vj〉 is equal to cos(θ0) ≈ −0.688 (i.e. θij = θ0). Define a vertex i to be all bad if for all neighbors j of i we have that the edge eij is bad. Denote by N(i) the set of vertices adjacent to i. In what follows we show that an 'all bad' vertex will be misplaced in (X,Y) with probability Ω(1/∆²). We will need the following lemma.

Lemma 4.1. Let v be some vector in Rn of norm ‖v‖, and let r = (r1, . . . , rn) ∈ Rn be the normal of a random hyperplane in Rn (i.e. each ri has standard normal distribution N(0,1)). The size of the projection of v on r can be bounded as follows:

Pr[|〈v, r〉| ≥ ε‖v‖] = Pr[|r1| ≥ ε] ≥ 1 − ε,

Pr[〈v, r〉 ∈ [0, ε‖v‖]] = Pr[r1 ∈ [0, ε]] ∈ [ε/8, ε/2],

where ε ∈ [0, 1].
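Both bounds reduce, by spherical symmetry, to statements about a single standard normal coordinate, and can be checked empirically (a quick Monte Carlo sketch; the trial count, seed, and the choice ε = 0.1 are ours):

```python
import random

def projection_band_probs(eps, trials=200000, seed=1):
    """Monte Carlo check of Lemma 4.1: by spherical symmetry, the
    projection of a unit vector onto the random normal r is distributed
    like the single coordinate r1 ~ N(0, 1)."""
    rng = random.Random(seed)
    large = band = 0
    for _ in range(trials):
        r1 = rng.gauss(0.0, 1.0)
        if abs(r1) >= eps:
            large += 1          # event |<v, r>| >= eps * ||v||
        if 0.0 <= r1 <= eps:
            band += 1           # event <v, r> in [0, eps * ||v||]
    return large / trials, band / trials

p_large, p_band = projection_band_probs(eps=0.1)
# Lemma 4.1 predicts p_large >= 1 - eps and eps/8 <= p_band <= eps/2.
```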

Proof. As the distribution of r is spherically symmetric [12, Theorem IV.16.3], we may assume that v is the vector (‖v‖, 0, . . . , 0); thus 〈v, r〉 is exactly r1‖v‖. ✷

Lemma 4.2. Let (X,Y) be the partition obtained after step (1) of algorithm ACut. Let i be an all bad vertex of degree d. With probability at least Ω(1/d²) it is the


case that more than half of the vertices in N(i) lie on the same side of the partition (X,Y) as the vertex i.

Proof. Let N = {u1, . . . , ud} be the vectors corresponding to the vertices in N(i)

and denote the vector vi corresponding to the vertex i as v. As the vectors v, {u1, . . . , ud} lie in a common (d + 1)-dimensional space, the following representation of v, {u1, . . . , ud} may be assumed without loss of generality. Let v = (1, 0, . . . , 0) ∈ Rd+1 and uj = (αj, βj, γj) for j ∈ [d], where αj ∈ R, βj ∈ R, γj ∈ Rd−1, and αj, βj, γj are vectors in mutually orthogonal subspaces. Furthermore, we may assume for the vector u1 that γ1 = 0. Let r = (r1, . . . , rd+1)

be a (d + 1)-dimensional random variable representing the normal to a random hyperplane; i.e. every ri is an independent standard normal random variable. Let r be defined as (αr, βr, γr) where αr = r1, βr = r2, and γr = (r3, . . . , rd+1).

In general we intend to prove that with probability δ = Ω(1/d²) (over r), the vector v and more than half its neighbors lie on the same side of the hyperplane corresponding to r. Let δ1 be the probability that the vector v lies very close to but above the random hyperplane corresponding to r; i.e. the inner product 〈v, r〉 is small but positive. Note that this probability depends on r1 alone, and corresponds to a positive r1 of low magnitude. Conditioning on such r (i.e. r1), we are interested in the probability that more than half of the vectors in N also lie above the hyperplane corresponding to r. As r1 is small, this is roughly the probability that more than half of the projected vectors (βj, γj) ∈ Rd for j ∈ [d] will be above the random hyperplane corresponding to the random vector (βr, γr).

If the number of vectors in N is odd (i.e. d is odd), then the above will happen with probability 1/2, and we may conclude that δ is roughly δ1/2. Otherwise we condition on an additional projection. Let δ2 be the probability that the vector u1 is also very close to but above a random hyperplane corresponding to a random vector r. As before, this probability depends on r1 and r2 alone (recall that u1 = (α1, β1, 0)). Now conditioning the random vector r on r1 and r2 (in order to ensure that both v and u1 lie close to and above the random hyperplane corresponding to r), we are interested in the probability that more than half of the remaining vectors in N also lie above the hyperplane corresponding to r. As above, due to the fact that r1 and r2 are of small magnitude, this is roughly the probability that more than half of the projected vectors γj ∈ Rd−1 for j ∈ {2, . . . , d} will be above the random hyperplane corresponding to the random vector γr. We are now in a case similar to the case in which d is odd, and conclude that δ is roughly δ1δ2/2. A detailed proof follows.

The vertex i is all bad; thus for all j ∈ [d] we have that the inner product 〈v, uj〉 = αj equals cos(θ0) ≈ −0.688, and that the vector (βj, γj) ∈ Rd satisfies ‖(βj, γj)‖ ≥ 0.725 (for u1 this implies that β1 ≥ 0.725). For i, j ∈ [d] the triangle constraints (〈v, ui〉 + 〈v, uj〉 + 〈ui, uj〉 ≥ −1) of (SDP-Cut) yield 〈ui, uj〉 = αiαj + βiβj + 〈γi, γj〉 ≥ 0.376. Hence βiβj + 〈γi, γj〉 ≥ −0.097, meaning that


the angle between the vectors (βi, γi), (βj, γj) ∈ Rd is at most 1.763 (bounded away from π).
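The arithmetic behind these constants can be replayed directly (a small sketch using the rounded value cos θ0 ≈ −0.688 from the text; the variable names are ours):

```python
import math

a = -0.688                        # cos(theta_0): inner product on a bad edge

# Unit vectors: alpha_j^2 + ||(beta_j, gamma_j)||^2 = 1.
orth_norm = math.sqrt(1 - a * a)  # >= 0.725

# Triangle constraint <v,ui> + <v,uj> + <ui,uj> >= -1 with
# <v,ui> = <v,uj> = a gives <ui,uj> >= -1 - 2a.
ip_lower = -1 - 2 * a             # = 0.376

# Since <ui,uj> = a^2 + beta_i*beta_j + <gamma_i,gamma_j>:
resid = ip_lower - a * a          # >= -0.098, i.e. roughly -0.097

# The cosine of the angle between (beta_i,gamma_i) and (beta_j,gamma_j)
# is at least resid / orth_norm^2, so the angle is bounded away from pi.
angle = math.acos(resid / (orth_norm * orth_norm))   # <= 1.763
```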

Let ε > 0 be some small value which will be fixed later in the proof. By Lemma 4.1 we have that with probability at least ε/16 (over αr ∈ R) it is the case that 〈v, r〉 = αr ∈ [ε/8, ε], and with probability at least ε/2 (over βr ∈ R) we have that βr ∈ [ε, 8ε] (we assume that ε < 1/8). We conclude that with probability at least ε²/2⁵ over the random vector r we have that both v and u1 lie above the hyperplane corresponding to r.

Furthermore, considering a vector γj, we conclude using Lemma 4.1 that with probability at least 1 − 1/(4d) (over γr) the value of |〈γj, γr〉| is at least ‖γj‖/(4d). Hence with probability at least 3/4 (over γr) we have for all vectors γj that |〈γj, γr〉| ≥ ‖γj‖/(4d).

Finally, consider the probability over γr that more than half the inner products 〈γj, γr〉 are non-negative (j ∈ [d]). As described above, this probability is at least half, due to the fact that for any γr we have 〈γ1, γr〉 = 0. We conclude that, with probability 1/4 over γr, we are in the case in which 〈γr, γj〉 ≥ ‖γj‖/(4d) for more than half of the vectors γj. Denote the set of these vectors as N̂. Note that u1 ∈ N̂.

Let r be a random vector r = (αr, βr, γr) with αr, βr, γr as above (this happens with probability at least ε²/2⁷). We now claim that, choosing an appropriate ε, we obtain for all vectors uj in N̂ that the inner product 〈uj, r〉 > 0; as |N̂| > d/2 this completes the proof of our lemma.

Let uj = (αj, βj, γj) be some vector in N̂. If βj ≥ 0.7 then we have that

〈uj, r〉 = αjαr + βjβr + 〈γj, γr〉 ≥ (βj − 0.688)ε + ‖γj‖/(4d) > 0.

Otherwise assume that for uj ∈ N̂ we have βj < 0.7. As we have shown earlier, ‖(βj, γj)‖ ≥ 0.725 for all vectors in N; thus we conclude that in this case ‖γj‖ must be at least 1/8. Furthermore, for any vector uj ∈ N̂ the inner product 〈u1, uj〉 = α1αj + β1βj is at least 0.376, thus implying that βj must be at least −0.14 (recall that u1 = (α1, β1, 0)). We conclude that

〈uj, r〉 = αjαr + βjβr + 〈γj, γr〉 ≥ (−1.12 − 0.688)ε + ‖γj‖/(4d) > 1/(2⁵d) − 2ε.

Setting ε to be less than 1/(2⁶d), the above inner product is positive. All in all, the proof of our lemma is complete with δ = ε²/2⁷ = Ω(1/d²). ✷

Theorem 4.3. There exists a semidefinite based algorithm that for every ∆ > 0 approximates the Max-Cut problem on graphs with bounded degree ∆ within an expected approximation ratio of α + Ω(1/∆⁴).


Proof. Consider the optimal vector configuration {v1, . . . , vn}, and the partition (X,Y) achieved after step (1) of algorithm ACut. Let Z be the value of the above optimal vector configuration, and w(X) the weight of the partition (X,Y). Using the analysis of [1] it can be seen that the expected value of w(X) is at least αZ. Define an edge eij to be bad if the inner product 〈vi, vj〉 is close to cos(θ0) ≈ −0.688, i.e. 〈vi, vj〉 ∈ [−0.688 − 0.01, −0.688 + 0.01].

Let Zgood be the contribution of the edges in G which are good (i.e. not bad) to the value Z defined above, and let ε > 0 be some small constant that will be determined later. We consider two cases. If Zgood is greater than εZ, then using the standard analysis of [1], we have that

E[w(X)] ≥ α(Z − Zgood) + (α + 1/2¹²)Zgood ≥ (α + ε/2¹²)Z.

Otherwise, the contribution of the good edges, Zgood, is less than εZ. In such a case consider the graph G̃ obtained by removing these good edges from G. In G̃ all vertices are all bad and have degree at most ∆. Furthermore, G̃ has at least (1 − ε)Z/∆ vertices of degree ≥ 1. Using Lemma 4.2, we have that each such vertex in G̃ is misplaced in (X,Y) (with respect to G̃) with probability at least δ = Ω(1/∆²). Note that in Lemma 4.2 we dealt with the case in which eij was assumed to be bad if 〈vi, vj〉 was exactly cos(θ0) ≈ −0.688. Our analysis in Lemma 4.2 had slack; thus it can be seen that we obtain the same results for the above definition of a bad edge as well. As each vertex moved may affect at most ∆ other vertices, we conclude that running the greedy step of our algorithm ACut on G̃ has an expected contribution of

(1 − ε)Z / (2¹⁹∆³(∆ + 1))

edges. Therefore in such a case, we obtain a final cut of G̃ (which is also a cut of G) of expected value at least

(α + 1/(2¹⁹∆³(∆ + 1)))(1 − ε)Z.

Setting ε to be 1/(2²⁰∆³(∆ + 1)) makes our proof complete. ✷
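This choice of ε can be sanity-checked numerically: with ε = 1/(2²⁰∆³(∆ + 1)), both branches of the case analysis yield ratios strictly above α (a small sketch of ours, using the rounded value α ≈ 0.87856 and noting that 1/(2¹⁹∆³(∆ + 1)) = 2ε):

```python
ALPHA = 0.87856   # GW approximation ratio (rounded)

def case_ratios(max_deg):
    """Ratios guaranteed by the two cases in the proof of Theorem 4.3,
    with eps = 1/(2^20 * Delta^3 * (Delta + 1))."""
    eps = 1.0 / (2**20 * max_deg**3 * (max_deg + 1))
    heavy_good = ALPHA + eps / 2**12              # case Z_good > eps * Z
    light_good = (ALPHA + 2 * eps) * (1 - eps)    # case Z_good < eps * Z
    return heavy_good, light_good

# For cubic graphs (Delta = 3) both bounds exceed ALPHA, so the minimum
# of the two cases also exceeds ALPHA, by roughly Omega(1/Delta^4).
g, b = case_ratios(3)
```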

5. Concluding remarks

Throughout our work, we consider the unweighted version of the Max-Cut problem. Our analysis can be extended to graphs with edge weights as well, although the results we obtain are weaker. In Section 3, when considering graphs without edge weights, a vertex i is said to be misplaced if any triplet of vertices (i, j, k) (where j and k are adjacent to i) lies on the same side of the cut. In the case in which we allow edge weights, a vertex i and two of its neighbors


may be on the same side of the cut without i being misplaced. Thus in this case we consider only the triplet (i, j, k) in which the edges eij and eik are the heaviest edges adjacent to i. In Section 4, for every vertex v regardless of its degree d, we use the part of Lemma 4.2 addressing the case in which the degree d is even, and choose the neighbor u1 of v to be such that the edge (v, u1) is the heaviest edge adjacent to v.

In general, when considering graphs with edge weights, we stop the greedy placement step after a polynomial number of steps and not necessarily at a local optimum. Our extended analysis holds under this additional restriction.

In a recent paper, Halperin et al. [13] improve over the approximation ratio of 0.921 that we present for graphs of degree at most three (without edge weights), and obtain a ratio of 0.9326. Their algorithm follows the general framework we present (the addition of a local improvement step to the GW algorithm) and differs from ours in the greedy procedure analyzed.

Acknowledgments

We thank Monique Laurent for commenting on an early version of our paper. The first author is the Incumbent of the Joseph and Celia Reskin Career Development Chair. This research was supported in part by a Minerva grant, by DFG grant 673/4-1, by Esprit BR grants 7079, 21726, and EC-US 030, and by the Max-Planck Research Prize.

References

[1] M.X. Goemans, D.P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM 42 (1995) 1115–1145.

[2] P. Berman, M. Karpinski, On some tighter inapproximability results, further improvements, ECCC TR98-065, 1998. Extended abstract appears in ICALP 1999, pp. 200–209.

[3] T. Calamoneri, I. Finocchi, Y. Manoussakis, R. Petreschi, Approximation algorithm for the Max Cut problem on cubic graphs, in: Advances in Computing Science, ASIAN'99, Lecture Notes in Comput. Sci. 1742, Springer-Verlag, 1999, pp. 27–36.

[4] S. Poljak, Local search for Max-Cut, SIAM J. Comput. 24 (1995) 822–839.

[5] U. Feige, M.X. Goemans, Approximating the value of two prover proof systems with applications to Max-2-Sat and Max-Dicut, in: Proc. 3rd Israel Symposium on Theory of Computing and Systems, 1995, pp. 182–189.

[6] F. Barahona, A.R. Mahjoub, On the cut polytope, Math. Program. 36 (1986) 157–173.

[7] U. Feige, G. Schechtman, On the optimality of the random hyperplane rounding technique for MAX CUT, in: Proc. 33rd Annual ACM Symposium on Theory of Computing, 2001, pp. 433–442.

[8] U. Zwick, Outward rotations: a new tool for rounding solutions of semidefinite programming relaxations, with application to Max-Cut and other problems, in: Proc. 31st Annual ACM Symposium on Theory of Computing, 1999, pp. 679–687.

[9] H.J. Karloff, How good is the Goemans–Williamson MAX CUT algorithm?, in: Proc. 28th Annual ACM Symposium on Theory of Computing, 1996, pp. 427–434.


[10] H. Karloff, U. Zwick, A 7/8-approximation algorithm for Max-3-Sat?, in: Proc. 38th Annual IEEE Symposium on Foundations of Computer Science, 1997, pp. 406–415.

[11] Y.E. Nesterov, Semidefinite relaxation and nonconvex quadratic optimization, Optim. Methods Softw. 9 (1998) 141–160.

[12] A. Rényi, Probability Theory, Elsevier, New York, 1970.

[13] E. Halperin, D. Livnat, U. Zwick, Max-Cut in cubic graphs, in: Proc. 13th SODA, 2002, pp. 506–513.