
Journal of Algorithms 42, 173–202 (2002). doi:10.1006/jagm.2001.1202, available online at http://www.idealibrary.com

Improved Approximation Algorithms for MAX SAT¹

Takao Asano

Department of Information and System Engineering, Chuo University, Bunkyo-ku, Tokyo 112-8551, Japan
E-mail: [email protected]

and

David P. Williamson

IBM Almaden Research Center, 650 Harry Road, K53/B1, San Jose, California 95120

E-mail: [email protected]; www.almaden.ibm.com/cs/people/dpw

Received October 12, 2001

MAX SAT (the maximum satisfiability problem) is stated as follows: given a set of clauses with weights, find a truth assignment that maximizes the sum of the weights of the satisfied clauses. In this paper, we consider approximation algorithms for MAX SAT proposed by Goemans and Williamson and present a sharpened analysis of their performance guarantees. We also show that these algorithms, combined with recent approximation algorithms for MAX 2SAT, MAX 3SAT, and MAX SAT due to Feige and Goemans, Karloff and Zwick, and Zwick, respectively, lead to an improved approximation algorithm for MAX SAT. By using the MAX 2SAT and 3SAT algorithms, we obtain a performance guarantee of 0.7846, and by using Zwick's algorithm, we obtain a performance guarantee of 0.8331, which improves upon the performance guarantee of 0.7977 based on Zwick's conjecture. The best previous result for MAX SAT without assuming Zwick's conjecture is a 0.770-approximation algorithm of Asano. Our best algorithm requires a new family of $\frac34$-approximation algorithms that generalize a previous algorithm of Goemans and Williamson. © 2002 Elsevier Science

¹ A preliminary version of this paper appeared in the “Proceedings of the 11th ACM-SIAM Symposium on Discrete Algorithms,” pp. 96–105, 2000.


0196-6774/02 $35.00 © 2002 Elsevier Science. All rights reserved.


1. INTRODUCTION

MAX SAT (the maximum satisfiability problem) is stated as follows: given a set of clauses with weights, find a truth assignment that maximizes the sum of the weights of the satisfied clauses. More precisely, an instance of MAX SAT is defined by $(\mathcal{C}, w)$, where $\mathcal{C}$ is a set of boolean clauses such that each clause $C \in \mathcal{C}$ is a disjunction of literals with a positive weight $w(C)$. We sometimes write $\mathcal{C}$ instead of $(\mathcal{C}, w)$ if the weight function $w$ is clear from the context. Let $X = \{x_1, \ldots, x_n\}$ be the set of boolean variables in the clauses of $\mathcal{C}$. A literal is a variable $x \in X$ or its negation $\bar{x}$. For simplicity we assume that $x_{n+i} = \bar{x}_i$ ($\bar{x}_i = x_{n+i}$). Thus, $\bar{X} = \{\bar{x} \mid x \in X\} = \{x_{n+1}, x_{n+2}, \ldots, x_{2n}\}$ and $X \cup \bar{X} = \{x_1, \ldots, x_{2n}\}$. We assume that no literals with the same variable appear more than once in a clause in $\mathcal{C}$. For each $x_i \in X$, let $x_i = 1$ ($x_i = 0$, resp.) if $x_i$ is true (false, resp.). Then $x_{n+i} = \bar{x}_i = 1 - x_i$, and a clause $C_j = x_{j_1} \vee x_{j_2} \vee \cdots \vee x_{j_{k_j}} \in \mathcal{C}$ can be considered to be a function on $x = (x_1, \ldots, x_{2n})$ as follows:

$$C_j = C_j(x) = 1 - \prod_{i=1}^{k_j} (1 - x_{j_i}).$$

Thus, $C_j = C_j(x) = 0$ or $1$ for any truth assignment $x \in \{0,1\}^{2n}$ with $x_i + x_{n+i} = 1$ ($i = 1, 2, \ldots, n$), and $C_j$ is satisfied if $C_j(x) = 1$. The value of a truth assignment $x$ is defined to be

$$F_{\mathcal{C}}(x) = \sum_{C_j \in \mathcal{C}} w(C_j)\, C_j(x).$$

That is, the value of $x$ is the sum of the weights of the clauses in $\mathcal{C}$ satisfied by $x$. The goal of MAX SAT is to find an optimal truth assignment, that is, a truth assignment of maximum value. We will also consider MAX $k$SAT, a restricted version of the problem in which each clause has at most $k$ literals.

MAX SAT is well known to be NP-hard, and thus many researchers have proposed approximation algorithms for it. Let $A$ be an algorithm for MAX SAT and let $F_{\mathcal{C}}(x^A(\mathcal{C}))$ be the value of a truth assignment $x^A(\mathcal{C})$ produced by $A$ for an instance $\mathcal{C}$. If $F_{\mathcal{C}}(x^A(\mathcal{C}))$ is at least $\alpha$ times the value $F_{\mathcal{C}}(x^*(\mathcal{C}))$ of an optimal truth assignment $x^*(\mathcal{C})$ for every instance $\mathcal{C}$, then $A$ is called an approximation algorithm with performance guarantee $\alpha$. A polynomial-time approximation algorithm $A$ with performance guarantee $\alpha$ is called an $\alpha$-approximation algorithm.

Johnson [10] gave the first approximation algorithm for MAX SAT in 1974. It is a greedy algorithm whose performance guarantee was shown to be $\frac12$ (in 1997, Chen et al. [3] improved the analysis of the performance guarantee to $\frac23$). In the early 1990s, Yannakakis [15] and then Goemans and


Williamson [6] proposed 0.75-approximation algorithms. Later, Goemans and Williamson proposed a 0.878-approximation algorithm for MAX 2SAT based on semidefinite programming [5]. They showed that their 0.878-approximation algorithm, combined with their earlier 0.75-approximation algorithm for MAX SAT, leads to a 0.7584-approximation algorithm for MAX SAT [7]. Asano et al. proposed a semidefinite programming approach to MAX SAT [2] and obtained a 0.765-approximation algorithm by combining it with Yannakakis' 0.75-approximation algorithm as well as the algorithm of Goemans and Williamson. Asano [1] later presented a refinement of Yannakakis' algorithm based on network flows and showed that it leads to a 0.770-approximation algorithm. Using semidefinite programming, Karloff and Zwick [11] gave a $\frac78$-approximation algorithm for MAX 3SAT, and Halperin and Zwick [9] gave an approximation algorithm for MAX 4SAT that numerical evidence shows has a performance guarantee of 0.8721. Finally, Zwick [16] recently made a conjecture about another approximation algorithm which, if true, leads to a 0.7977-approximation algorithm for MAX SAT; his conjecture is supported by numerical experiments. The conjecture states that the worst-case ratio of the value of a semidefinite programming relaxation of MAX SAT to the value of a certain algorithm for MAX SAT is achieved at a particular point. We summarize these results in Fig. 1. Håstad [8] has shown that no approximation algorithm for MAX 3SAT (and thus MAX SAT) can achieve a performance guarantee better than $\frac78$ unless P = NP; thus the Karloff and Zwick result is tight, and the Halperin and Zwick result is nearly so.

In this paper, we present a sharpened analysis of the 0.75-approximation algorithms for MAX SAT proposed by Goemans and Williamson. The improved analysis allows us to give better bounds on how these algorithms perform on clauses of length $k$, and this leads to a better overall

FIG. 1. Summary of performance guarantees for MAX SAT. † indicates that the result is based on numerical evidence given in [9]. ‡ indicates that the result is based on a conjecture in [16].


approximation algorithm for MAX SAT in combination with other recent results. In particular, by combining our algorithm with the Feige–Goemans algorithm for MAX 2SAT and the Karloff–Zwick algorithm for MAX 3SAT, we obtain a performance guarantee of 0.7846 for MAX SAT. If we use Zwick's approximation algorithm for MAX SAT and its performance guarantee is as conjectured, we obtain a performance guarantee of 0.8331. (One could also consider an intermediate algorithm that uses the Feige–Goemans, Karloff–Zwick, and Halperin–Zwick algorithms in combination with our algorithm; however, we found that such an algorithm produced a performance guarantee better than our 0.7846-approximation algorithm only by a very tiny amount.) This last algorithm requires a new family of 0.75-approximation algorithms for MAX SAT which generalizes an algorithm of Goemans and Williamson.

To explain our result in more detail, we briefly review the 0.75-approximation algorithms of Yannakakis and of Goemans and Williamson. Both are based on the probabilistic method. Let $x^p = (x^p_1, \ldots, x^p_{2n})$ be a random truth assignment with $0 \le x^p_i = p_i \le 1$ ($x^p_{n+i} = 1 - x^p_i = 1 - p_i = p_{n+i}$). That is, $x^p$ is obtained by independently setting each variable $x_i \in X$ to be true with probability $p_i$ (and $x_{n+i} = \bar{x}_i$ to be true with probability $p_{n+i} = 1 - p_i$). Then the probability that a clause $C_j = x_{j_1} \vee x_{j_2} \vee \cdots \vee x_{j_{k_j}} \in \mathcal{C}$ is satisfied by the random truth assignment $x^p = (x^p_1, \ldots, x^p_{2n})$ is

$$C_j(x^p) = 1 - \prod_{i=1}^{k_j} \bigl(1 - x^p_{j_i}\bigr). \qquad (1)$$

Thus, the expected value of the random truth assignment $x^p$ is

$$F_{\mathcal{C}}(x^p) = \sum_{C_j \in \mathcal{C}} w(C_j)\, C_j(x^p). \qquad (2)$$

The probabilistic method assures that there is a truth assignment $x^q \in \{0,1\}^{2n}$ of value at least $F_{\mathcal{C}}(x^p)$. Such a truth assignment $x^q$ can be obtained in polynomial time by the method of conditional probabilities [6, 15]. The 0.75-approximation algorithm of Yannakakis finds a vector $p$ such that the expected value $F_{\mathcal{C}}(x^p)$ of a random truth assignment $x^p$ is at least

$$0.75\,W_1 + 0.75\,W_2 + 0.75\,W_3 + 0.765\,W_4 + 0.762\,W_5 + 0.822\,W_6 + \sum_{k \ge 7} \bigl(1 - 0.75^k\bigr) W_k,$$

where $W_k = \sum_{C \in \mathcal{C}_k} w(C)\, C(x^*)$ and $F_{\mathcal{C}}(x^*) = \sum_{k \ge 1} W_k$ for an optimal truth assignment $x^*$ of $\mathcal{C}$ ($\mathcal{C}_k$ denotes the set of clauses in $\mathcal{C}$ with $k$ literals


throughout the paper). Using a hybrid approach combining a randomized version of Johnson's algorithm with randomized rounding of a linear programming (LP) relaxation of MAX SAT, Goemans and Williamson [6] also obtain a 0.75-approximation algorithm. It finds a random truth assignment of value at least

$$0.75\,W^*_1 + 0.75\,W^*_2 + 0.789\,W^*_3 + 0.810\,W^*_4 + 0.820\,W^*_5 + 0.824\,W^*_6 + \sum_{k \ge 7} \beta_k W^*_k,$$

where

$$\beta_k = \frac12\left(2 - \frac{1}{2^k} - \left(1 - \frac1k\right)^k\right) \quad (k \ge 1) \qquad (3)$$

and $W^*_k$ is the objective function value of clauses of length $k$ provided by the linear programming relaxation. Note that $0.816 < \beta_k < 1 - 0.75^k$ for $k \ge 7$. Observe that, in terms of the behavior of the algorithms on clauses of length $k$, the two algorithms are incomparable. Asano [1] gives an improvement of Yannakakis' algorithm that finds a random truth assignment with value at least

$$0.75\,W_1 + 0.75\,W_2 + 0.791\,W_3 + 0.811\,W_4 + 0.823\,W_5 + 0.850\,W_6 + \sum_{k \ge 7} \bigl(1 - 0.75^k\bigr) W_k.$$

He noted that this bound is better than the bounds of Goemans and Williamson and of Yannakakis, and that it leads to a 0.770-approximation algorithm if it is combined with the algorithms in Asano et al. [2]. His algorithm is, however, very complicated and hard to understand without expertise in network algorithms. Thus, simpler algorithms with high performance guarantees are desirable.

In this paper, we show that one of the non-hybrid algorithms of Goemans and Williamson finds a random truth assignment $x^p$ with value $F_{\mathcal{C}}(x^p)$ at least

$$0.75\,W^*_1 + 0.75\,W^*_2 + 0.804\,W^*_3 + 0.851\,W^*_4 + 0.888\,W^*_5 + 0.915\,W^*_6 + \sum_{k \ge 7} \gamma_k W^*_k,$$

where

$$\gamma_k = 1 - \frac12\left(\frac34\right)^{k-1}\left(1 - \frac{1}{3(k-1)}\right)^{k-1}.$$

Note that this bound is the best among the bounds described above, since $\gamma_k > 1 - 0.75^k$ for $k \ge 3$. If this is combined with the other MAX $k$SAT


algorithms as stated previously, it leads to improved approximation algorithms for MAX SAT. Additionally, a generalization of this algorithm finds a random truth assignment with value at least

$$0.914\,W^*_1 + 0.75\,W^*_2 + 0.75\,W^*_3 + 0.766\,W^*_4 + 0.784\,W^*_5 + 0.801\,W^*_6 + 0.817\,W^*_7 + 0.832\,W^*_8 + 0.845\,W^*_9 + 0.857\,W^*_{10} + \sum_{k \ge 11}\left(1 - 0.914^k\left(1 - \frac1k\right)^k\right) W^*_k.$$
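Both displayed bounds are instances of the family $\gamma^a_k$ analyzed in Section 2 (Eqs. (6)–(8)): the first uses $a = \frac34$ and the second uses $a = \sqrt2 - \frac12 \approx 0.9142$ (the constant $\alpha$ of Theorem 2.6). The following self-contained Python check is our own sketch, not part of the paper; it reproduces the quoted coefficients.

```python
def gamma_k1(a: float, k: int) -> float:
    # Eq. (7): 1 - (1/2) a^(k-1) (1 - (1 - 1/(2a))/(k-1))^(k-1)
    return 1 - 0.5 * a ** (k - 1) * (1 - (1 - 1 / (2 * a)) / (k - 1)) ** (k - 1)

def gamma_k2(a: float, k: int) -> float:
    # Eq. (8): 1 - a^k (1 - 1/k)^k
    return 1 - a ** k * (1 - 1 / k) ** k

def gamma(a: float, k: int) -> float:
    # Eq. (6): gamma_1^a = a; the minimum of the two bounds for k >= 2.
    return a if k == 1 else min(gamma_k1(a, k), gamma_k2(a, k))

# a = 3/4 reproduces the coefficients 0.804, 0.851, 0.888, 0.915 ...
for k, expected in [(3, 0.804), (4, 0.851), (5, 0.888), (6, 0.915)]:
    assert abs(gamma(0.75, k) - expected) < 1e-3
# ... and gamma_k beats the coefficient 1 - 0.75^k for every k >= 3, as claimed.
assert all(gamma(0.75, k) > 1 - 0.75 ** k for k in range(3, 40))

# a = sqrt(2) - 1/2 reproduces the generalized bound's coefficients.
alpha = 2 ** 0.5 - 0.5
for k, expected in [(1, 0.914), (2, 0.75), (3, 0.75), (4, 0.766),
                    (5, 0.784), (6, 0.801), (7, 0.817), (8, 0.832)]:
    assert abs(gamma(alpha, k) - expected) < 1e-3
```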

When combined with Zwick's conjectured MAX SAT algorithm, this leads to the best overall performance guarantee.

The significance of our result is twofold. First, if P ≠ NP, the best performance guarantee achievable for MAX SAT is $\frac78$. Our result takes us almost halfway there from Zwick's bound of 0.7977, assuming that his conjecture is correct. Second, while much of the recent effort on MAX SAT has been invested in algorithms and analysis for semidefinite programming formulations, we show that stronger analysis of algorithms using linear programming is useful.

The remainder of the paper is structured as follows. In Section 2 we review the algorithms of Goemans and Williamson and then sharpen their analysis. In Section 3 we give the improved approximation algorithm for MAX SAT.

2. THE MAX SAT ALGORITHMS OF GOEMANS AND WILLIAMSON

Goemans and Williamson [6] formulate MAX SAT as an integer programming problem as follows:

$$\begin{aligned}
\max \quad & \sum_{C_j \in \mathcal{C}} w(C_j)\, z_j \\
\text{s.t.} \quad & \sum_{i=1}^{k_j} y_{j_i} \ge z_j && \forall\, C_j = x_{j_1} \vee x_{j_2} \vee \cdots \vee x_{j_{k_j}} \in \mathcal{C} \\
& y_i + y_{n+i} = 1 && \forall\, i \in \{1, 2, \ldots, n\} \\
& y_i \in \{0, 1\} && \forall\, i \in \{1, 2, \ldots, 2n\} \\
& z_j \in \{0, 1\} && \forall\, C_j \in \mathcal{C}.
\end{aligned}$$

In this integer programming (IP) formulation, the variables $y = (y_i)$ correspond to the literals $x_1, \ldots, x_{2n}$ and the variables $z = (z_j)$ correspond to the clauses $\mathcal{C}$. Thus, variable $y_i = 1$ if and only if $x_i = 1$. Similarly, $z_j = 1$ if and only if $C_j$ is satisfied. The first set of constraints implies that one of the literals in a clause is true if the clause is satisfied, and thus this formulation exactly corresponds to MAX SAT. If the variables $y = (y_i)$ and $z = (z_j)$ are allowed to take on any values between 0 and 1, then the following LP relaxation $(GW)$ of MAX SAT is obtained:

$$\begin{aligned}
\max \quad & \sum_{C_j \in \mathcal{C}} w(C_j)\, z_j \\
(GW) \qquad \text{s.t.} \quad & \sum_{i=1}^{k_j} y_{j_i} \ge z_j && \forall\, C_j = x_{j_1} \vee x_{j_2} \vee \cdots \vee x_{j_{k_j}} \in \mathcal{C} \\
& y_i + y_{n+i} = 1 && \forall\, i \in \{1, 2, \ldots, n\} \\
& 0 \le y_i \le 1 && \forall\, i \in \{1, 2, \ldots, 2n\} \\
& 0 \le z_j \le 1 && \forall\, C_j \in \mathcal{C}.
\end{aligned}$$

Using an optimal solution $(y^*, z^*)$ to this LP relaxation of MAX SAT, Goemans and Williamson set each variable $x_i$ to be true with probability $y^*_i$. They then show that the probability $C_j(y^*)$ of a clause $C_j$ with $k$ literals being satisfied is at least $(1 - (1 - \frac1k)^k) z^*_j$. Thus, the expected value $F(y^*)$ of the random truth assignment $y^*$ obtained in this way satisfies

$$F(y^*) \ge \sum_{k \ge 1}\left(1 - \left(1 - \frac1k\right)^k\right) W^*_k \ge \left(1 - \frac1e\right) W^* \approx 0.632\,W^*,$$

where $e$ is the base of the natural logarithm, $W^* = \sum_{C_j \in \mathcal{C}} w(C_j) z^*_j$, and $W^*_k = \sum_{C_j \in \mathcal{C}_k} w(C_j) z^*_j$ (note that $W^* = \sum_{C_j \in \mathcal{C}} w(C_j) z^*_j \ge W = \sum_{C_j \in \mathcal{C}} w(C_j) z_j$ for an optimal solution $(y, z)$ to the IP formulation of MAX SAT). Thus, this algorithm leads to a 0.632-approximation algorithm for MAX SAT. If it is combined with a randomized version of Johnson's algorithm, it leads to a $\frac34$-approximation algorithm. Note that the randomized version of Johnson's algorithm uses the random truth assignment in which each variable $x_i$ is set to be true with probability $\frac12$, so the probability of a clause $C_j$ with $k$ literals being satisfied is $1 - \frac{1}{2^k}$. Thus, if we choose the better of these two random truth assignments, the expected value is at least $\sum_{k \ge 1} \beta_k W^*_k \ge \frac34 W^*$, where $\beta_k$ is defined in Eq. (3).

Goemans and Williamson also considered three other non-linear randomized rounding algorithms. In these algorithms, each variable $x_i$ is set to be true with probability $f_\ell(y^*_i)$, defined as follows ($\ell = 1, 2, 3$):

$$f_1(y) = \begin{cases} \frac34 y + \frac14 & \text{if } 0 \le y \le \frac13 \\[2pt] \frac12 & \text{if } \frac13 \le y \le \frac23 \\[2pt] \frac34 y & \text{if } \frac23 \le y \le 1, \end{cases}$$

$$f_2(y) = (2a - 1)y + 1 - a \quad \left(\frac34 \le a \le \frac{3}{\sqrt[3]{4}} - 1 \approx 0.889881\right),$$

$$1 - 4^{-y} \le f_3(y) \le 4^{y-1}.$$

Note that

$$f_\ell(y^*_i) + f_\ell(y^*_{n+i}) = 1$$

holds for $\ell = 1, 2$, and that $f_3(y^*_i)$ has to be chosen to satisfy $f_3(y^*_i) + f_3(y^*_{n+i}) = 1$. They then proved that the random truth assignments $x^p = f_\ell(y^*) = (f_\ell(y^*_1), \ldots, f_\ell(y^*_{2n}))$ obtained in this way all have expected value at least $\frac34 W^*$ and all lead to $\frac34$-approximation algorithms. In particular, they show that for each $k$, the probability of a clause $C_j$ with $k$ literals being satisfied is at least $\frac34 z^*_j$, and thus the expected weight of satisfied clauses in $\mathcal{C}_k$ is at least $\frac34 W^*_k$.

To give an improved approximation algorithm for MAX SAT, we will sharpen the analysis of Goemans and Williamson to provide more precise bounds on the probability of a clause $C_j = x_{j_1} \vee x_{j_2} \vee \cdots \vee x_{j_k}$ with $k$ literals being satisfied, and on the expected weight of satisfied clauses in $\mathcal{C}_k$, by the random truth assignment $x^p = f_\ell(y^*)$ ($\ell = 1, 2$) for each $k$. By symmetry, we assume that $x_{j_i} = x_i$ for each $i = 1, 2, \ldots, k$, since $f_\ell(\bar{x}) = 1 - f_\ell(x)$ and we can set $x := \bar{x}$ if necessary. Thus, we consider the clause $C_j = x_1 \vee x_2 \vee \cdots \vee x_k$ corresponding to the constraint $y_1 + y_2 + \cdots + y_k \ge z_j$ in the LP relaxation $(GW)$ of MAX SAT, and we give a bound on the ratio of the probability of clause $C_j$ being satisfied by the random truth assignment $x^p = f_\ell(y^*)$ ($\ell = 1, 2$) to $z^*_j$. Since we will need it later, we will instead analyze a parametrized function $f^a_1$, as follows, where $f_1 = f^{3/4}_1$:

$$f^a_1(y) = \begin{cases} ay + 1 - a & \text{if } 0 \le y \le 1 - \frac{1}{2a} \\[2pt] \frac12 & \text{if } 1 - \frac{1}{2a} \le y \le \frac{1}{2a} \\[2pt] ay & \text{if } \frac{1}{2a} \le y \le 1, \end{cases} \qquad (4)$$

for $\frac12 \le a \le 1$. Similarly, since $f_2$ is a parametrized function, we denote it by $f^a_2$ from now on, explicitly expressing the parameter $a$; i.e.,

$$f^a_2(y) = (2a - 1)y + 1 - a \quad \left(\frac12 \le a \le 1\right). \qquad (5)$$

We will use $\gamma^a_k$ and $\delta^a_k$ to bound the probabilities of a clause $C_j$ with $k$ literals being satisfied by the random truth assignments $x^p = f^a_1(y^*)$ and $x^p = f^a_2(y^*)$, respectively, where

$$\gamma^a_k = \begin{cases} a & \text{if } k = 1 \\ \min\{\gamma^a_{k,1},\, \gamma^a_{k,2}\} & \text{if } k \ge 2 \end{cases} \qquad (6)$$

with

$$\gamma^a_{k,1} = 1 - \frac12\, a^{k-1}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}, \qquad (7)$$

$$\gamma^a_{k,2} = 1 - a^k\left(1 - \frac1k\right)^k, \qquad (8)$$

and

$$\delta^a_k = 1 - \left(a - \frac{2a-1}{k}\right)^k = 1 - a^k\left(1 - \frac{2 - \frac1a}{k}\right)^k. \qquad (9)$$
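A direct transcription of Eqs. (4), (5), and (9) into Python (our sketch, not the paper's) makes the definitions concrete; it also checks the complementarity $f^a_\ell(y) + f^a_\ell(1-y) = 1$ and the fact that $\delta^a_2 = \frac34$ independently of $a$.

```python
def f1(a: float, y: float) -> float:
    """Piecewise rounding function f_1^a of Eq. (4), for 1/2 <= a <= 1."""
    if y <= 1 - 1 / (2 * a):
        return a * y + 1 - a
    if y <= 1 / (2 * a):
        return 0.5
    return a * y

def f2(a: float, y: float) -> float:
    """Linear rounding function f_2^a of Eq. (5)."""
    return (2 * a - 1) * y + 1 - a

def delta(a: float, k: int) -> float:
    """delta_k^a of Eq. (9)."""
    return 1 - (a - (2 * a - 1) / k) ** k

a = 0.8
for y in [0.0, 0.1, 0.3, 0.5, 0.7, 1.0]:
    assert abs(f1(a, y) + f1(a, 1 - y) - 1) < 1e-12
    assert abs(f2(a, y) + f2(a, 1 - y) - 1) < 1e-12
assert abs(delta(a, 2) - 0.75) < 1e-12   # delta_2^a = 3/4 for every a
```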

Since we obtain no structural property of optimal solutions to the LP relaxation $(GW)$ of MAX SAT, we assume throughout this paper that any feasible solution can become an optimal solution. Under this generic assumption, the probability of clause $C_j = x_1 \vee x_2 \vee \cdots \vee x_k$ with $k$ literals being satisfied by the random truth assignment $x^p = f^a_2(y^*)$ is at most $\delta^a_k z^*_j$, since clause $C_j$ corresponds to the LP constraint $y^*_1 + y^*_2 + \cdots + y^*_k \ge z^*_j$ and it is possible that $y^*_1 = y^*_2 = \cdots = y^*_k = \frac1k$ and $z^*_j = 1$, in which case, by Eqs. (1), (5), and (9),

$$C_j(f^a_2(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_2(y^*_i)\bigr) = 1 - \left(a - \frac{2a-1}{k}\right)^k = 1 - a^k\left(1 - \frac{2 - \frac1a}{k}\right)^k = \delta^a_k z^*_j.$$

For the same reason, the probability of clause $C_j = x_1 \vee x_2 \vee \cdots \vee x_k$ with $k$ literals being satisfied by the random truth assignment $x^p = f^a_1(y^*)$ is at most $\gamma^a_k z^*_j$: if $k \ge 2$, then for $y^*_1 = y^*_2 = \cdots = y^*_k = \frac1k \le 1 - \frac{1}{2a}$ and $z^*_j = 1$ we have

$$C_j(f^a_1(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) = 1 - \left(1 - \left(\frac{a}{k} + 1 - a\right)\right)^k = 1 - a^k\left(1 - \frac1k\right)^k = \gamma^a_{k,2}\, z^*_j$$

by Eqs. (1), (4), and (8), and for $y^*_1 = y^*_2 = \cdots = y^*_{k-1} = \frac{1 - \frac{1}{2a}}{k-1}$, $y^*_k = \frac{1}{2a}$, and $z^*_j = 1$ we have

$$C_j(f^a_1(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) = 1 - \left(a\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)\right)^{k-1}\left(1 - \frac{a}{2a}\right) = 1 - \frac12\, a^{k-1}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1} = \gamma^a_{k,1}\, z^*_j$$

by Eqs. (1), (4), and (7). Furthermore, if $k = 1$, then for $\frac{1}{2a} \le y^*_1 = z^*_j \le 1$ we have

$$C_j(f^a_1(y^*)) = 1 - \bigl(1 - f^a_1(y^*_1)\bigr) = f^a_1(y^*_1) = a y^*_1 = a z^*_j = \gamma^a_1 z^*_j.$$

Thus, the probabilities of clause $C_j$ with $k$ literals being satisfied by the random truth assignments $x^p = f^a_1(y^*)$ and $x^p = f^a_2(y^*)$ are at most $\gamma^a_k z^*_j$ and $\delta^a_k z^*_j$, respectively. Below we show that the converse inequalities hold; that is, the following theorems are true.

Theorem 2.1. The probability of $C_j = x_1 \vee x_2 \vee \cdots \vee x_k \in \mathcal{C}$ being satisfied by the random truth assignment $x^p = f^a_1(y^*) = (f^a_1(y^*_1), \ldots, f^a_1(y^*_{2n}))$ is

$$C_j(f^a_1(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) \ge \gamma^a_k z^*_j \qquad (10)$$

for $\frac12 \le a \le 1$, where $f^a_1$ is the function defined in Eq. (4) and $\gamma^a_k$ is the number defined in Eq. (6). Thus, the expected value $F(f^a_1(y^*))$ of the random truth assignment $x^p = f^a_1(y^*)$ satisfies

$$F(f^a_1(y^*)) \ge \sum_{k \ge 1} \gamma^a_k W^*_k.$$

Theorem 2.2. The probability of $C_j = x_1 \vee x_2 \vee \cdots \vee x_k \in \mathcal{C}$ being satisfied by the random truth assignment $x^p = f^a_2(y^*) = (f^a_2(y^*_1), \ldots, f^a_2(y^*_{2n}))$ is

$$C_j(f^a_2(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_2(y^*_i)\bigr) \ge \delta^a_k z^*_j \qquad (11)$$

for $\frac12 \le a \le 1$, where $f^a_2$ is the function defined in Eq. (5) and $\delta^a_k$ is the number defined in Eq. (9). Thus, the expected value $F(f^a_2(y^*))$ of the random truth assignment $x^p = f^a_2(y^*)$ satisfies

$$F(f^a_2(y^*)) \ge \sum_{k \ge 1} \delta^a_k W^*_k.$$
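Before the proofs, a brute-force numerical spot-check of inequalities (10) and (11) can be reassuring. The sketch below is ours, with randomly generated hypothetical LP solutions in which $z^*_j = \min(1, \sum_i y^*_i)$ (so that $y^*_k \le z^*_j$, an assumption used later in the proof of Theorem 2.1); it is an illustration, not a substitute for the proofs.

```python
import random

def f1(a, y):
    # Eq. (4)
    if y <= 1 - 1 / (2 * a):
        return a * y + 1 - a
    return 0.5 if y <= 1 / (2 * a) else a * y

def f2(a, y):
    # Eq. (5)
    return (2 * a - 1) * y + 1 - a

def gamma(a, k):
    # Eqs. (6)-(8)
    if k == 1:
        return a
    g1 = 1 - 0.5 * a ** (k - 1) * (1 - (1 - 1 / (2 * a)) / (k - 1)) ** (k - 1)
    g2 = 1 - a ** k * (1 - 1 / k) ** k
    return min(g1, g2)

def delta(a, k):
    # Eq. (9)
    return 1 - (a - (2 * a - 1) / k) ** k

def sat_prob(f, a, ys):
    """Probability that the clause is satisfied under rounding f (Eq. (1))."""
    prod = 1.0
    for y in ys:
        prod *= 1 - f(a, y)
    return 1 - prod

rng = random.Random(0)
for _ in range(2000):
    a = rng.uniform(0.51, 0.99)
    k = rng.randint(1, 8)
    ys = [rng.random() for _ in range(k)]
    z = min(1.0, sum(ys))          # z_j^* = min(1, sum_i y_i^*)
    assert sat_prob(f1, a, ys) >= gamma(a, k) * z - 1e-9   # inequality (10)
    assert sat_prob(f2, a, ys) >= delta(a, k) * z - 1e-9   # inequality (11)
```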

Furthermore, under the generic assumption described above, we will show that the algorithm using $f^a_1$ dominates the algorithm using $f^a_2$; that is, the following theorem holds.

Theorem 2.3. For $\gamma^a_k$ and $\delta^a_k$ defined in Eqs. (6) and (9), $\gamma^a_k > \delta^a_k$ holds for all $k \ge 3$ and for all $a$ with $\frac12 < a < 1$. For $k = 1, 2$, $\gamma^a_k = \delta^a_k$ ($\gamma^a_1 = \delta^a_1 = a$, $\gamma^a_2 = \delta^a_2 = \frac34$) hold.

We first give a proof of Theorem 2.2, since it is much simpler than that of Theorem 2.1. Then we give a proof of Theorem 2.1, and finally we give a proof of Theorem 2.3.


Proof of Theorem 2.2. Noting that clause $C_j = x_1 \vee x_2 \vee \cdots \vee x_k$ corresponds to the LP constraint $y^*_1 + y^*_2 + \cdots + y^*_k \ge z^*_j$, we will show that

$$C_j(f^a_2(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_2(y^*_i)\bigr) \ge \delta^a_k z^*_j.$$

Let

$$r(z_j) = 1 - \left(a - \frac{(2a-1) z_j}{k}\right)^k$$

and let

$$R(z_j) = \delta^a_k z_j = \left(1 - \left(a - \frac{2a-1}{k}\right)^k\right) z_j.$$

Then, since $r(z_j)$ is concave in $z_j$ for $0 \le z_j \le 1$, with $r(0) = 1 - a^k \ge 0 = R(0)$ and $r(1) = 1 - (a - \frac{2a-1}{k})^k = R(1)$, we have

$$r(z_j) = 1 - \left(a - \frac{(2a-1) z_j}{k}\right)^k \ge \left(1 - \left(a - \frac{2a-1}{k}\right)^k\right) z_j = R(z_j).$$

Thus,

$$\begin{aligned}
C_j(f^a_2(y^*)) &= 1 - \prod_{i=1}^k \bigl(1 - f^a_2(y^*_i)\bigr) = 1 - \prod_{i=1}^k \bigl(a - (2a-1) y^*_i\bigr) \\
&\ge 1 - \left(a - \frac{(2a-1)\sum_{i=1}^k y^*_i}{k}\right)^k \ge 1 - \left(a - \frac{(2a-1) z^*_j}{k}\right)^k \\
&\ge \left(1 - \left(a - \frac{2a-1}{k}\right)^k\right) z^*_j = \delta^a_k z^*_j
\end{aligned}$$

by the arithmetic/geometric mean inequality and $y^*_1 + y^*_2 + \cdots + y^*_k \ge z^*_j$.
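The key step above, $r(z) \ge \delta^a_k z$ on $[0,1]$, follows from concavity of $r$ together with the endpoint values $r(0) \ge 0$ and $r(1) = \delta^a_k$. A quick numerical confirmation (ours, not the paper's) over a grid of parameters:

```python
def r(a: float, k: int, z: float) -> float:
    """r(z) = 1 - (a - (2a-1) z / k)^k from the proof of Theorem 2.2."""
    return 1 - (a - (2 * a - 1) * z / k) ** k

def delta(a: float, k: int) -> float:
    """delta_k^a of Eq. (9); note delta_k^a = r(1)."""
    return 1 - (a - (2 * a - 1) / k) ** k

# r is concave in z with r(0) = 1 - a^k >= 0 and r(1) = delta_k^a,
# so r lies above the chord through (0, 0) and (1, delta_k^a).
for a in [0.55, 0.75, 0.9]:
    for k in [2, 3, 5, 8]:
        for i in range(101):
            z = i / 100
            assert r(a, k, z) >= delta(a, k) * z - 1e-12
```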

To make the proof of Theorem 2.1 substantially shorter, we use the following lemma.

Lemma 2.4. 1. For a fixed $a$ with $\frac12 \le a \le 1$, let $\frac{1}{2a} \le y \le 1$. Then for $k \ge 2$,

$$(1 - ay)\left(1 - \frac{1-y}{k-1}\right)^{k-1} \le \frac12\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}. \qquad (12)$$

2. Let $\ell$ be a positive integer and let $b$ and $z$ satisfy $0 \le b \le z \le 1$. Let $\mu_b$ be a positive function depending only on $b$ and decreasing with $b$, and let $\mu$ be a positive constant such that $\mu_b \le \mu$ and $(b+1)\mu \le 1$. Then

$$1 - \mu_b\left(1 - \frac{z-b}{\ell}\right)^\ell \ge \left(1 - \mu_b\left(1 - \frac{1-b}{\ell}\right)^\ell\right) z \ge \left(1 - \mu\left(1 - \frac{1-b}{\ell}\right)^\ell\right) z. \qquad (13)$$

Proof. To prove part 1, we show that

$$g(y) \equiv (1 - ay)\left(1 - \frac{1-y}{k-1}\right)^{k-1}$$

is decreasing for $y \ge \frac{1}{2a}$. Then we have Eq. (12), since $g(y) \le g(\frac{1}{2a}) = \frac12\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}$. To see that $g(y)$ is decreasing for $y \ge \frac{1}{2a}$, we observe that the derivative $g'(y)$ of $g(y)$ is

$$\begin{aligned}
g'(y) &= -a\left(1 - \frac{1-y}{k-1}\right)^{k-1} + (1 - ay)\left(1 - \frac{1-y}{k-1}\right)^{k-2} \\
&= \left(1 - \frac{1-y}{k-1}\right)^{k-2}\left(-a\left(1 - \frac{1-y}{k-1}\right) + 1 - ay\right) \\
&= \left(1 - \frac{1-y}{k-1}\right)^{k-2}\left(1 - \frac{a(k-2) + aky}{k-1}\right) \le 0,
\end{aligned}$$

since $a(k-2) + aky \ge a(k-2) + \frac{k}{2} \ge \frac{k-2}{2} + \frac{k}{2} = k - 1$ by the conditions on $y$, $a$, and $k$.

In Eq. (13) of part 2, the second inequality is clear, so we consider only the first inequality. For

$$h(z) \equiv 1 - \mu_b\left(1 - \frac{z-b}{\ell}\right)^\ell \quad \text{and} \quad H(z) \equiv \left(1 - \mu_b\left(1 - \frac{1-b}{\ell}\right)^\ell\right) z,$$

$h(z)$ is concave in $z$ and $h(1) = 1 - \mu_b(1 - \frac{1-b}{\ell})^\ell = H(1)$. Thus, if we show that $h(b) \ge H(b)$ under the conditions $\mu_b \le \mu$ and $(b+1)\mu \le 1$, we will have $h(z) \ge H(z)$ for $b \le z \le 1$, which implies Eq. (13). Set

$$G(b) = 1 - b - \mu\left(1 - \left(1 - \frac{1-b}{\ell}\right)^\ell b\right).$$

Then

$$\begin{aligned}
h(b) - H(b) &= (1 - \mu_b) - \left(1 - \mu_b\left(1 - \frac{1-b}{\ell}\right)^\ell\right) b \\
&= 1 - b - \mu_b\left(1 - \left(1 - \frac{1-b}{\ell}\right)^\ell b\right) \\
&\ge 1 - b - \mu\left(1 - \left(1 - \frac{1-b}{\ell}\right)^\ell b\right) = G(b).
\end{aligned}$$

This implies that if $G(b) \ge 0$, then $h(b) - H(b) \ge 0$ and Eq. (13) holds. Thus, we will show that $G(b) \ge 0$. The derivative of $G(b)$ with respect to $b$ is

$$\begin{aligned}
G'(b) &= -1 + \mu\left(1 - \frac{1-b}{\ell}\right)^\ell + \mu b\left(1 - \frac{1-b}{\ell}\right)^{\ell-1} \\
&= -1 + \mu\left(1 - \frac{1-b}{\ell}\right)^{\ell-1}\left(b + \left(1 - \frac{1-b}{\ell}\right)\right) \le -1 + \mu(b+1) \le 0
\end{aligned}$$

by the stated conditions. Thus, $G(b)$ is decreasing with $b$, and, by $G(1) = 0$, we have $G(b) \ge 0$.

Proof of Theorem 2.1. Noting that clause $C_j = x_1 \vee x_2 \vee \cdots \vee x_k$ corresponds to the constraint

$$y^*_1 + y^*_2 + \cdots + y^*_k \ge z^*_j \qquad (14)$$

in the LP relaxation $(GW)$ of MAX SAT, we will show that

$$C_j(f^a_1(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) \ge \gamma^a_k z^*_j.$$

If $k = 1$, then

$$C_j(f^a_1(y^*)) = f^a_1(y^*_1) \ge a y^*_1 \ge a z^*_j = \gamma^a_1 z^*_j$$

by Eq. (14). For $k \ge 2$, we assume that $y^*_1 \le y^*_2 \le \cdots \le y^*_k$ by symmetry and consider three cases as follows. Case 1: $y^*_k \le 1 - \frac{1}{2a}$; Case 2: $y^*_{k-1} \le 1 - \frac{1}{2a} < y^*_k$; Case 3: $1 - \frac{1}{2a} < y^*_{k-1} \le y^*_k$.


Case 1. $y^*_k \le 1 - \frac{1}{2a}$. Since all $y^*_i \le 1 - \frac{1}{2a}$ ($i = 1, 2, \ldots, k$), we have $f^a_1(y^*_i) = a y^*_i + 1 - a$ and $1 - f^a_1(y^*_i) = a(1 - y^*_i)$. Thus, we have

$$\begin{aligned}
C_j(f^a_1(y^*)) &= 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) = 1 - a^k \prod_{i=1}^k (1 - y^*_i) \\
&\ge 1 - a^k\left(1 - \frac{\sum_{i=1}^k y^*_i}{k}\right)^k \ge 1 - a^k\left(1 - \frac{z^*_j}{k}\right)^k \\
&\ge \left(1 - a^k\left(1 - \frac1k\right)^k\right) z^*_j = \gamma^a_{k,2}\, z^*_j \ge \gamma^a_k z^*_j,
\end{aligned}$$

where the first inequality follows from the arithmetic/geometric mean inequality, the second from Eq. (14), and the third from Eq. (13) in Lemma 2.4 with $\mu_b = a^k = \mu$ and $b = 0$ satisfying $(b+1)\mu \le 1$.

Case 2. $y^*_{k-1} \le 1 - \frac{1}{2a} < y^*_k$. Since $1 - f^a_1(y^*_i) = a(1 - y^*_i)$ ($i = 1, 2, \ldots, k-1$), we have

$$\begin{aligned}
C_j(f^a_1(y^*)) &= 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) = 1 - \bigl(1 - f^a_1(y^*_k)\bigr)\, a^{k-1} \prod_{i=1}^{k-1}(1 - y^*_i) \\
&\ge 1 - \bigl(1 - f^a_1(y^*_k)\bigr)\, a^{k-1}\left(1 - \frac{\sum_{i=1}^{k-1} y^*_i}{k-1}\right)^{k-1} \\
&\ge 1 - \bigl(1 - f^a_1(y^*_k)\bigr)\, a^{k-1}\left(1 - \frac{z^*_j - y^*_k}{k-1}\right)^{k-1}
\end{aligned}$$

by the arithmetic/geometric mean inequality and Eq. (14). We now assume that $y^*_k \le z^*_j$. (In an optimal solution to the LP $(GW)$, $z^*_j = \min(1, \sum_{i=1}^k y^*_i)$ and $y^*_i \le 1$, so that $y^*_k \le z^*_j$; in later sections, we will be able to make this assumption by adding it as an explicit constraint to the linear program or semidefinite program.) Let

$$\mu = \frac12\, a^{k-1} \quad \text{and} \quad \mu_b = \bigl(1 - f^a_1(b)\bigr)\, a^{k-1} \text{ with } b = y^*_k.$$

Then

$$\mu_b = \begin{cases} \frac12\, a^{k-1} & \text{if } 1 - \frac{1}{2a} < y^*_k \le \frac{1}{2a}, \\[2pt] (1 - a y^*_k)\, a^{k-1} & \text{if } \frac{1}{2a} < y^*_k \le 1. \end{cases}$$

Thus, in either case, $\mu_b$ is a positive function decreasing with $b = y^*_k$, and the inequalities

$$\mu \ge \mu_b \quad \text{and} \quad (b+1)\mu \le 1$$

hold, since if $y^*_k \le \frac{1}{2a}$ then $\mu_b = \mu$, and if $y^*_k > \frac{1}{2a}$ then $\mu_b = (1 - a y^*_k)\, a^{k-1} \le \frac12\, a^{k-1} = \mu$. Furthermore, $(b+1)\mu \le 2 \cdot \frac12\, a^{k-1} = a^{k-1} \le 1$, since $b = y^*_k \le 1$ and $\frac12 \le a \le 1$. Thus, we have

$$\begin{aligned}
C_j(f^a_1(y^*)) &\ge 1 - \bigl(1 - f^a_1(y^*_k)\bigr)\, a^{k-1}\left(1 - \frac{z^*_j - y^*_k}{k-1}\right)^{k-1} \\
&\ge \left(1 - \bigl(1 - f^a_1(y^*_k)\bigr)\, a^{k-1}\left(1 - \frac{1 - y^*_k}{k-1}\right)^{k-1}\right) z^*_j \\
&\ge \left(1 - \frac12\, a^{k-1}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}\right) z^*_j = \gamma^a_{k,1}\, z^*_j \ge \gamma^a_k z^*_j,
\end{aligned}$$

where the second inequality follows from Eq. (13) in Lemma 2.4, and the third follows from the fact that if $y^*_k \le \frac{1}{2a}$ then $\left(1 - \frac{1 - y^*_k}{k-1}\right)^{k-1}$ is increasing with $y^*_k$, or from Eq. (12) in Lemma 2.4 if $y^*_k > \frac{1}{2a}$.

Case 3. $1 - \frac{1}{2a} < y^*_{k-1} \le y^*_k$. Let $\ell$ be the number satisfying $y^*_\ell \le 1 - \frac{1}{2a} < y^*_{\ell+1}$. Then $\ell \le k - 2$, and, since $1 - f^a_1(y^*_{\ell+i}) \le \frac12 \le a$ ($i = 1, 2, \ldots, k - \ell$) and $1 - f^a_1(y^*_i) = a(1 - y^*_i) \le a$ ($i = 1, 2, \ldots, \ell$), we have

$$C_j(f^a_1(y^*)) = 1 - \prod_{i=1}^k \bigl(1 - f^a_1(y^*_i)\bigr) = 1 - \prod_{i=\ell+1}^k \bigl(1 - f^a_1(y^*_i)\bigr) \prod_{i=1}^\ell \bigl(a(1 - y^*_i)\bigr) \ge 1 - \left(\frac12\right)^{k-\ell} a^\ell \ge 1 - \left(\frac12\right)^2 a^{k-2}.$$

Furthermore, since $\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}$ is increasing with $k \ge 2$ and

$$\frac14 = \frac12 \cdot a \cdot \frac{1}{2a} \le \frac12 \cdot a\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1},$$

we have

$$C_j(f^a_1(y^*)) \ge 1 - \frac12\, a^{k-1}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1} = \gamma^a_{k,1} \ge \gamma^a_k \ge \gamma^a_k z^*_j$$

by $z^*_j \le 1$.


We will further show (in Theorem 2.6) that we can state precisely when $\gamma^a_k = \gamma^a_{k,1}$. We now give a proof of Theorem 2.3 by using the following lemma, which makes the proof substantially shorter.

Lemma 2.5. Let $k \ge 2$ be an integer and let $x, y$ be two numbers independent of $k$. Let

$$d(x, k, y) = \ln\frac{\left(1 - \frac{y}{k}\right)^k}{\left(1 - \frac{1-x}{k-1}\right)^{k-1}}. \qquad (15)$$

Then the following statements hold.

1. Let $x, y$ satisfy $0 < 2(1-x) \le y \le 1$. Then $\frac{\partial d(x,k,y)}{\partial k} > 0$ for $k > 2$, where $\frac{\partial d(x,k,y)}{\partial k}$ is the derivative of $d(x,k,y)$ with respect to $k$. Furthermore, $\frac{\partial d(x,k,y)}{\partial k} \ge 0$ for $k = 2$, and $d(x,k,y)$ is increasing with $k \ge 2$.

2. Let $x, y$ satisfy $0 < y \le 1 - x \le 1$. Then $\frac{\partial d(x,k,y)}{\partial k} < 0$ for $k \ge 2$, and $d(x,k,y)$ is decreasing with $k \ge 2$.

Proof. Since $d(x,k,y) = k\ln(k-y) - k\ln k - (k-1)\ln(k-2+x) + (k-1)\ln(k-1)$, we have

$$\begin{aligned}
\frac{\partial d(x,k,y)}{\partial k} &= \ln(k-y) + \frac{k}{k-y} - \ln k - 1 - \ln(k-2+x) - \frac{k-1}{k-2+x} + \ln(k-1) + 1 \\
&= -\ln\frac{k(k-2+x)}{(k-y)(k-1)} + \frac{k(k-2+x) - (k-y)(k-1)}{(k-y)(k-2+x)}. \qquad (16)
\end{aligned}$$

To prove Part 1, we assume that $0 < 2(1-x) \le y \le 1$ and $k \ge 2$. Then

$$k(k-2+x) - (k-y)(k-1) = (k-1)y - k(1-x) \ge (k-2)(1-x) \ge 0 \qquad (17)$$

and we have

$$\ln\frac{k(k-2+x)}{(k-y)(k-1)} = \ln\left(1 + \frac{(k-1)y - k(1-x)}{(k-y)(k-1)}\right) \le \frac{(k-1)y - k(1-x)}{(k-y)(k-1)},$$

since $\ln(1+z) \le z$ for any $z \ge 0$. Thus, by Eqs. (16) and (17),

$$\frac{\partial d(x,k,y)}{\partial k} \ge -\frac{(k-1)y - k(1-x)}{(k-y)(k-1)} + \frac{(k-1)y - k(1-x)}{(k-y)(k-2+x)} = \frac{\bigl((k-1)y - k(1-x)\bigr)(1-x)}{(k-y)(k-2+x)(k-1)} > 0$$

if $k > 2$, and $\frac{\partial d(x,k,y)}{\partial k} \ge 0$ if $k = 2$. Thus, $d(x,k,y)$ is increasing with $k \ge 2$ for all $x, y$ with $0 < 2(1-x) \le y \le 1$.

To prove Part 2, we assume that $0 < y \le 1-x \le 1$ and $k \ge 2$. Then

$$k(k-2+x) - (k-y)(k-1) = (k-1)y - k(1-x) \le -(1-x) < 0 \qquad (18)$$

and we have

$$\ln\frac{(k-y)(k-1)}{k(k-2+x)} = \ln\left(1 + \frac{k(1-x) - (k-1)y}{k(k-2+x)}\right) \le \frac{k(1-x) - (k-1)y}{k(k-2+x)},$$

since $\ln(1+z) \le z$ for any $z \ge 0$. Thus, by Eqs. (16) and (18),

$$\frac{\partial d(x,k,y)}{\partial k} \le \frac{k(1-x) - (k-1)y}{k(k-2+x)} - \frac{k(1-x) - (k-1)y}{(k-y)(k-2+x)} = -\frac{\bigl(k(1-x) - (k-1)y\bigr)\, y}{k(k-y)(k-2+x)} < 0$$

for $k \ge 2$. Thus, $d(x,k,y)$ is decreasing with $k \ge 2$ for all $x, y$ with $0 < y \le 1-x \le 1$.

Proof of Theorem 2.3. Since

$$\gamma^a_{k,1} - \delta^a_k = a^k\left(\left(1 - \frac{2\bigl(1 - \frac{1}{2a}\bigr)}{k}\right)^k - \frac{1}{2a}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}\right)$$

for $k \ge 2$, we consider

$$B(x, k) \equiv \frac{\left(1 - \frac{2(1-x)}{k}\right)^k}{x\left(1 - \frac{1-x}{k-1}\right)^{k-1}} = \frac{\left(1 - \frac{2\bigl(1 - \frac{1}{2a}\bigr)}{k}\right)^k}{\frac{1}{2a}\left(1 - \frac{1 - \frac{1}{2a}}{k-1}\right)^{k-1}},$$

$$b(x, k) \equiv \ln x B(x, k) = \ln\frac{\left(1 - \frac{2(1-x)}{k}\right)^k}{\left(1 - \frac{1-x}{k-1}\right)^{k-1}},$$

using $\frac12 \le x \equiv \frac{1}{2a} < 1$. Then, by Eq. (15) in Lemma 2.5, we have $b(x,k) = d(x,k,y)$ with $y = 2(1-x)$, and thus, by Part 1 of Lemma 2.5,

$$\frac{\partial b(x,k)}{\partial k} > 0 \text{ for } k > 2 \quad \text{and} \quad \frac{\partial b(x,k)}{\partial k} \ge 0 \text{ for } k \ge 2,$$

for all $\frac12 \le x < 1$. Thus, $b(x,k)$ and $B(x,k)$ are increasing with $k \ge 2$. Since $B(x,2) = 1$, we have $B(x,k) > 1$ for all $k \ge 3$ and for all $\frac12 < x < 1$ (i.e., $1 > a > \frac12$). This completes the proof of $\gamma^a_{k,1} > \delta^a_k$ for all $k \ge 3$ and $\frac12 < a < 1$ ($1 > x > \frac12$). It is easy to see that

$$\gamma^a_{k,2} = 1 - a^k\left(1 - \frac1k\right)^k > \delta^a_k = 1 - a^k\left(1 - \frac{2 - \frac1a}{k}\right)^k$$

for all $k \ge 2$ and $\frac12 < a < 1$, since $2\bigl(1 - \frac{1}{2a}\bigr) < 1$ implies

$$1 - \left(1 - \frac1k\right)^k > 1 - \left(1 - \frac{2\bigl(1 - \frac{1}{2a}\bigr)}{k}\right)^k.$$

Thus, $\gamma^a_k = \min\{\gamma^a_{k,1}, \gamma^a_{k,2}\} > \delta^a_k$ for all $k \ge 3$ and $\frac12 < a < 1$.

For $k = 1, 2$, it is clear that $\gamma^a_1 = \delta^a_1 = a$ and $\gamma^a_2 = \delta^a_2 = \frac34$.

In the following theorem, we give conditions on when $\gamma^a_{k,1}$ dominates $\gamma^a_{k,2}$, which will be used to obtain improved approximation algorithms in the next section.

Theorem 2.6. Let $\alpha = \sqrt2 - \frac12 \approx 0.914213$ and let $\beta$ be the number satisfying $2\beta = e^{1/(2\beta)}$ ($\beta \approx 0.881611$). Then, the following statements hold.

1. For $\frac34 \le a \le \alpha$ and $k \ge 3$, $\gamma^a_{k,1}$, $\gamma^a_{k,2}$, and $\gamma^a_k = \min\{\gamma^a_{k,1}, \gamma^a_{k,2}\}$ are all decreasing with $a$ and increasing with $k$. Furthermore, $\gamma^a_k \ge \frac34$ for $\frac34 \le a \le \alpha$ and $k \ge 1$.

2. For $\frac34 \le a \le \beta$, $\gamma^a_{k,1} \le \gamma^a_{k,2}$ for all $k \ge 2$ and thus $\gamma^a_k = \gamma^a_{k,1}$.

3. For $\beta \le a \le \alpha$, there is a unique integer $k_a$ such that
$$\gamma^a_{k,1} \le \gamma^a_{k,2} \text{ for all } 2 \le k \le k_a \quad\text{and}\quad \gamma^a_{k,1} > \gamma^a_{k,2} \text{ for all } k > k_a.$$
In particular, $k_\alpha = 7$ for $\alpha = \sqrt2 - \frac12$. Thus,
$$\gamma^\alpha_{k,1} < \gamma^\alpha_{k,2} \text{ for all } 2 \le k \le 7 \quad\text{and}\quad \gamma^\alpha_{k,1} > \gamma^\alpha_{k,2} \text{ for all } k \ge 8.$$
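Since the closed forms for $\gamma^a_{k,1}$ and $\gamma^a_{k,2}$ used in the proof below are elementary, the crossover $k_\alpha = 7$ claimed in Part 3 is easy to confirm numerically. The following sketch (ours, not part of the paper) does so in Python:

```python
import math

def gamma_k1(a, k):
    # gamma^a_{k,1} = 1 - (1/2) a^(k-1) (1 - (1 - 1/(2a))/(k-1))^(k-1), k >= 2
    return 1 - 0.5 * a ** (k - 1) * (1 - (1 - 1 / (2 * a)) / (k - 1)) ** (k - 1)

def gamma_k2(a, k):
    # gamma^a_{k,2} = 1 - a^k (1 - 1/k)^k
    return 1 - a ** k * (1 - 1 / k) ** k

alpha = math.sqrt(2) - 0.5

# k_alpha is the last k at which gamma^alpha_{k,1} <= gamma^alpha_{k,2};
# by Part 3 the sign changes exactly once, so a max over a range suffices
k_alpha = max(k for k in range(2, 40) if gamma_k1(alpha, k) <= gamma_k2(alpha, k))
print(k_alpha)  # -> 7
```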

Proof. We will first prove Part 1. Since
$$\gamma^a_{k,1} = 1 - \frac12 a^{k-1}\Bigl(1 - \frac{1-\frac{1}{2a}}{k-1}\Bigr)^{k-1} = 1 - \frac12\Bigl(\frac{2(k-2)a+1}{2(k-1)}\Bigr)^{k-1}, \qquad \gamma^a_{k,2} = 1 - a^k\Bigl(1-\frac1k\Bigr)^k,$$
both $\gamma^a_{k,1}$ and $\gamma^a_{k,2}$ are decreasing with $a$. Similarly, for $\frac34 \le a \le \alpha$,
$$1-\gamma^a_{k,1} = \frac12 a^{k-1}\Bigl(1 - \frac{1-\frac{1}{2a}}{k-1}\Bigr)^{k-1} \quad\text{and}\quad 1-\gamma^a_{k,2} = a^k\Bigl(1-\frac1k\Bigr)^k$$
are decreasing ($\gamma^a_{k,1}$ and $\gamma^a_{k,2}$ are increasing) with $k \ge 3$, which can be shown as follows. Consider
$$C_1(x,k) \equiv \frac{1-\gamma^a_{k+1,1}}{1-\gamma^a_{k,1}} = \frac{a\bigl(1-\frac{1-\frac{1}{2a}}{k}\bigr)^k}{\bigl(1-\frac{1-\frac{1}{2a}}{k-1}\bigr)^{k-1}} = \frac{\bigl(1-\frac{1-x}{k}\bigr)^k}{2x\bigl(1-\frac{1-x}{k-1}\bigr)^{k-1}},$$
using $\frac12 < x \equiv \frac{1}{2a} < 1$. Let
$$c_1(x,k) = \ln\bigl(2x\,C_1(x,k)\bigr) = \ln\frac{\bigl(1-\frac{1-x}{k}\bigr)^k}{\bigl(1-\frac{1-x}{k-1}\bigr)^{k-1}}.$$
Then, by using $y = 1-x$ satisfying $0 < y \le 1-x \le 1$ in Part 2 of Lemma 2.5, $c_1(x,k)$ is decreasing with $k \ge 2$ and so is $C_1(x,k)$. Thus, for all $\frac34 \le a = \frac{1}{2x} \le \alpha = \sqrt2 - \frac12$ (that is, $\frac23 \ge x \ge \frac{1}{2\alpha} = \frac{2\sqrt2+1}{7} \approx 0.546918$), we have
$$C_1(x,k) < C_1(x,2) = \frac{(1+x)^2}{8x^2} = \frac{(2a+1)^2}{8} \le \frac{(2\alpha+1)^2}{8} = 1$$
and
$$\gamma^a_{k+1,1} > \gamma^a_{k,1} \quad (1-\gamma^a_{k+1,1} < 1-\gamma^a_{k,1}) \quad\text{for } k \ge 3.$$
Similarly, for
$$C_2(k) \equiv \frac{1-\gamma^a_{k+1,2}}{1-\gamma^a_{k,2}} = \frac{a\bigl(1-\frac{1}{k+1}\bigr)^{k+1}}{\bigl(1-\frac1k\bigr)^k} \quad\text{and}\quad c_2(k) \equiv \ln\frac{C_2(k)}{a} = \ln\frac{\bigl(1-\frac{1}{k+1}\bigr)^{k+1}}{\bigl(1-\frac1k\bigr)^k},$$
with $x = 0$ and $y = 1$ satisfying $0 < y \le 1-x \le 1$ in Part 2 of Lemma 2.5, $c_2(k)$ is decreasing with $k \ge 1$ and so is $C_2(k)$. Thus, for all $\frac34 \le a \le \alpha = \sqrt2 - \frac12$, we have
$$C_2(k) \le C_2(3) = \frac{3^7 a}{2^{11}} \le \frac{3^7\alpha}{2^{11}} \approx 0.976262 < 1$$
and
$$\gamma^a_{k+1,2} > \gamma^a_{k,2} \quad (1-\gamma^a_{k+1,2} < 1-\gamma^a_{k,2}) \quad\text{for } k \ge 3.$$
By the argument above, $\gamma^a_k = \min\{\gamma^a_{k,1}, \gamma^a_{k,2}\}$ is decreasing with $a$ for all $\frac34 \le a \le \alpha$ and increasing with $k \ge 3$. Thus, $\gamma^a_k \ge \gamma^a_3 \ge \frac34$ for all $\frac34 \le a \le \alpha$ and $k \ge 3$, since $\gamma^a_k$ is increasing with $k \ge 3$ and $\gamma^a_3 = \gamma^a_{3,1} = 1 - \frac{(2a+1)^2}{32} \ge \frac34$, by $\gamma^a_{3,1} = 1 - \frac{(2a+1)^2}{32}$ and $\gamma^a_{3,2} = 1 - \frac{8a^3}{27}$. It is clear that $\gamma^a_1 = a \ge \frac34$ and $\gamma^a_2 = \frac34$, by $\gamma^a_{2,1} = \frac34$ and $\gamma^a_{2,2} = 1 - \frac{a^2}{4}$. This completes the proof of Part 1.


Next we prove Part 2. Since
$$\gamma^a_{k,1} - \gamma^a_{k,2} = a^k\Biggl(\Bigl(1-\frac1k\Bigr)^k - \frac{1}{2a}\Bigl(1 - \frac{1-\frac{1}{2a}}{k-1}\Bigr)^{k-1}\Biggr), \qquad(19)$$
we consider
$$D(x,k) \equiv \frac{\bigl(1-\frac1k\bigr)^k}{x\bigl(1-\frac{1-x}{k-1}\bigr)^{k-1}} = \frac{\bigl(1-\frac1k\bigr)^k}{\frac{1}{2a}\bigl(1-\frac{1-\frac{1}{2a}}{k-1}\bigr)^{k-1}}, \qquad(20)$$
using $\frac12 \le x \equiv \frac{1}{2a} < 1$. Let
$$d(x,k) = \ln D(x,k) = \ln\frac{\bigl(1-\frac1k\bigr)^k}{x\bigl(1-\frac{1-x}{k-1}\bigr)^{k-1}}. \qquad(21)$$
Then $d(x,k) = d(x,k,1) - \ln x$, where $d(x,k,y)$ is the function defined by Eq. (15) and $y = 1$ satisfies the condition $0 < 2(1-x) \le y \le 1$ in Part 1 of Lemma 2.5. Thus, $d(x,k) = d(x,k,1) - \ln x$ is increasing with $k \ge 2$ for all $\frac12 \le x < 1$. Furthermore, it is clear that $d(x,k)$ is decreasing with $x$, since
$$\frac{\partial d(x,k)}{\partial x} = -\frac1x - \frac{k-1}{k-2+x} < 0 \quad\text{for all } \frac12 \le x \le 1 \text{ and } k \ge 2.$$
From the above argument, for a fixed $\frac12 \le x < 1$, if
$$d(x,\infty) = \lim_{k\to\infty} \ln\frac{\bigl(1-\frac1k\bigr)^k}{x\bigl(1-\frac{1-x}{k-1}\bigr)^{k-1}} = \ln\frac{e^{-1}}{xe^{-1+x}} = \ln\frac{1}{xe^x} \le 0,$$
then $d(x,k) \le 0$ for all $k \ge 2$, since $d(x,k)$ is increasing with $k \ge 2$. Let $x_\beta$ be a number satisfying
$$x_\beta e^{x_\beta} = 1. \qquad(22)$$
Then $x_\beta$ is unique ($x_\beta = \frac{1}{2\beta} \approx 0.567143$) and $d(x_\beta,\infty) = 0$; thus $d(x,k) < 0$ for all $k \ge 2$ and $x > x_\beta$ (i.e., $xe^x > 1$), since $d(x,k)$ is increasing with $k$ and decreasing with $x$. Similarly, $d(x_\beta,k) < 0$ for all finite $k \ge 2$. Thus, by Eqs. (20) and (21),
$$D(x,k) = D\Bigl(\frac{1}{2a},k\Bigr) < 1 \quad\text{for all (finite) } k \ge 2 \text{ and } \frac12 < a = \frac{1}{2x} \le \frac{1}{2x_\beta} = \beta \approx 0.881611.$$
By Eq. (19), this implies that
$$\gamma^a_{k,1} - \gamma^a_{k,2} < 0 \quad\text{for all (finite) } k \ge 2 \text{ and } \frac12 < a \le \beta \approx 0.881611,$$
and Part 2 is proved.

The proof of Part 3 is similar. For $\frac12 \le x < x_\beta$, there is an integer $k_x$ such that
$$d(x,k) \le 0 \text{ for all } 2 \le k \le k_x \quad\text{and}\quad d(x,k) > 0 \text{ for all } k > k_x,$$
since $d(x,k)$ is increasing with $k$ and
$$d(x,2) = \ln\frac{\bigl(1-\frac12\bigr)^2}{x\bigl(1-\frac{1-x}{2-1}\bigr)^{2-1}} = \ln\frac{1}{4x^2} \le 0 \quad\text{and}\quad d(x,\infty) = \ln\frac{1}{xe^x} > 0.$$
Thus, for $a = \frac{1}{2x} > \frac{1}{2x_\beta} = \beta \approx 0.881611$, there is a unique integer $k_a$ such that
$$D\Bigl(\frac{1}{2a},k\Bigr) \le 1 \text{ for all } 2 \le k \le k_a \quad\text{and}\quad D\Bigl(\frac{1}{2a},k\Bigr) > 1 \text{ for all } k > k_a.$$
By Eq. (19), this implies that for $a = \frac{1}{2x} > \beta$ there is a unique integer $k_a$ such that
$$\gamma^a_{k,1} - \gamma^a_{k,2} \le 0 \text{ for all } 2 \le k \le k_a \quad\text{and}\quad \gamma^a_{k,1} - \gamma^a_{k,2} > 0 \text{ for all } k > k_a.$$
For $\alpha = \sqrt2 - \frac12 \approx 0.914213 > \beta \approx 0.881611$, it is easy to see that $k_\alpha = 7$; that is,
$$\gamma^\alpha_{7,1} < \gamma^\alpha_{7,2} \quad\text{and}\quad \gamma^\alpha_{8,1} > \gamma^\alpha_{8,2}.$$
Thus,
$$\gamma^\alpha_{k,1} < \gamma^\alpha_{k,2} \text{ for all } 2 \le k \le 7 \quad\text{and}\quad \gamma^\alpha_{k,1} > \gamma^\alpha_{k,2} \text{ for all } k \ge 8.$$

As a corollary of Theorems 2.1 and 2.6, we obtain a new family of $\frac34$-approximation algorithms. Furthermore, by using $a = \frac34$ and $a = \alpha = \sqrt2 - \frac12 \approx 0.914213$, we have the following results, which will be used to obtain improved approximation algorithms in the next section.

Corollary 2.7. Let $\alpha = \sqrt2 - \frac12 \approx 0.914213$. Then, for any $a$ with $\frac34 \le a \le \alpha$, the expected value $F(f^a_1(y^*))$ of the random truth assignment $x^p = f^a_1(y^*)$ satisfies $F(f^a_1(y^*)) \ge \frac34\sum_{k\ge1} W^*_k$. In particular, the following statements hold for $a = \frac34$ and $a = \alpha$.

1. The expected value $F(f^{3/4}_1(y^*))$ of the random truth assignment $x^p = f^{3/4}_1(y^*)$ satisfies
$$F\bigl(f^{3/4}_1(y^*)\bigr) \ge \sum_{k\ge1}\gamma^{3/4}_k W^*_k = 0.75W^*_1 + 0.75W^*_2 + 0.804W^*_3 + 0.851W^*_4 + 0.888W^*_5 + 0.915W^*_6 + 0.936W^*_7 + 0.952W^*_8 + 0.964W^*_9 + 0.973W^*_{10} + \sum_{k\ge11}\gamma^{3/4}_{k,1}W^*_k.$$

2. The expected value $F(f^\alpha_1(y^*))$ of the random truth assignment $x^p = f^\alpha_1(y^*)$ satisfies
$$F\bigl(f^\alpha_1(y^*)\bigr) \ge \sum_{k\ge1}\gamma^\alpha_k W^*_k = \alpha W^*_1 + 0.75W^*_2 + 0.75W^*_3 + 0.766W^*_4 + 0.784W^*_5 + 0.801W^*_6 + 0.817W^*_7 + 0.832W^*_8 + 0.845W^*_9 + 0.857W^*_{10} + \sum_{k\ge11}\gamma^\alpha_{k,2}W^*_k.$$
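The truncated coefficients above can be reproduced directly from the closed forms for $\gamma^a_{k,1}$ and $\gamma^a_{k,2}$ given in the proof of Theorem 2.6. The short Python check below (ours, for illustration) recomputes $\gamma^a_k = \min\{\gamma^a_{k,1}, \gamma^a_{k,2}\}$ for $a = \frac34$ and $a = \alpha$:

```python
import math

def gamma(a, k):
    """gamma^a_k = min(gamma^a_{k,1}, gamma^a_{k,2}); gamma^a_1 = a."""
    if k == 1:
        return a
    g1 = 1 - 0.5 * a ** (k - 1) * (1 - (1 - 1 / (2 * a)) / (k - 1)) ** (k - 1)
    g2 = 1 - a ** k * (1 - 1 / k) ** k
    return min(g1, g2)

alpha = math.sqrt(2) - 0.5
# coefficients as truncated to three digits in Corollary 2.7
row_34 = [math.floor(gamma(0.75, k) * 1000) / 1000 for k in range(1, 11)]
row_al = [math.floor(gamma(alpha, k) * 1000) / 1000 for k in range(1, 11)]
# e.g. row_34[2:6] reproduces 0.804, 0.851, 0.888, 0.915 for k = 3..6
```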

3. THE IMPROVED APPROXIMATION ALGORITHMS

In this section, we give our improved approximation algorithms for MAX SAT based on a hybrid approach. We use a semidefinite programming relaxation of MAX SAT which is a combination of relaxations given by Goemans and Williamson [7], Feige and Goemans [4], Karloff and Zwick [11], and Zwick [16]. We give some notation needed to describe the formulation, and the formulation itself, in Section 3.1. We then give our algorithm in Section 3.2. We will use a combination of the MAX SAT algorithm of the previous section with the function $f^{3/4}_1$, the Feige-Goemans MAX 2SAT algorithm, and the Karloff-Zwick MAX 3SAT algorithm to obtain our 0.7846-approximation algorithm for MAX SAT. Finally, if we use the MAX SAT algorithm of the previous section with $f^a_1$ for $a = \sqrt2 - \frac12$ and combine it with Zwick's algorithm, we obtain a performance guarantee of 0.8331, assuming the correctness of Zwick's conjecture. The analysis of these algorithms is presented in Section 3.3.


3.1. The MAX SAT Relaxation

We begin by defining some notation we will need to describe our semidefinite programming relaxation of the MAX SAT problem. For a clause $C_j = x_{j_1}\vee x_{j_2}\vee\cdots\vee x_{j_{k_j}}$ with $k_j \ge 3$, let $P_j$ be the set of all possible clauses $C$ with two literals in $C_j$ (e.g., if $C_j = x\vee y\vee z$ then $P_j = \{x\vee y,\ x\vee z,\ y\vee z\}$). Similarly, for $k_j \ge 4$, let $Q_j$ be the set of all possible clauses $C$ with three literals in $C_j$ (e.g., if $C_j = x\vee y\vee z\vee u$, then $Q_j = \{x\vee y\vee z,\ x\vee y\vee u,\ x\vee z\vee u,\ y\vee z\vee u\}$). For simplicity, we use $P_j = \{C_j\}$ if $C_j$ is a clause with one or two literals and $Q_j = \{C_j\}$ if $C_j$ is a clause of three literals.

To give the semidefinite programming relaxation, we follow [7] by introducing variables $y = (y_0, y_1, \ldots, y_{2n})$ corresponding to $y_0y_i = 2x_i - 1$, with $y_0, y_i \in \{1, -1\}$ and $y_{n+i} = -y_i$. Thus, $x_i$ becomes $\frac{1+y_0y_i}{2}$, and a clause $C_j = x_{j_1}\vee x_{j_2} \in \mathcal{C}_2$ can be considered to be a function on $y = (y_0, y_1, \ldots, y_{2n})$ as follows:
$$C_j = C_j(y) = 1 - \frac{1-y_0y_{j_1}}{2}\cdot\frac{1-y_0y_{j_2}}{2} = \frac{3 + y_0y_{j_1} + y_0y_{j_2} - y_{j_1}y_{j_2}}{4}.$$
We also introduce variables $z$ for clauses, corresponding to $z_j = C_j(y)$. We relax this formulation to a "vector programming" problem using $(2n+1)$-dimensional vectors $v_i$ with norm $\|v_i\| = 1$ (i.e., $v_i \in S^{2n}$) and $v_{n+i} = -v_i$, corresponding to $y_i$ with $y_i \in \{1, -1\}$ and $y_{n+i} = -y_i$. We replace each product $y_{i_1}y_{i_2}$ with an inner product $v_{i_1}\cdot v_{i_2}$ and set $y_{i_1i_2} = v_{i_1}\cdot v_{i_2}$.

We need to use constraints on the products $v_i\cdot v_j$ and the variables $z_j$ as given in [4, 7, 11, 16]. While we can give these explicitly for the cases of [4, 7, 11], the number of constraints in [16] becomes too large to give explicitly (although it is still polynomially sized). Thus we will write the latter constraints as $\mathrm{Canon}(v,z) \ge b$. For the other constraints, let $C = x_{i_1}\vee\cdots\vee x_{i_k}$ be a clause with $k \le 3$ literals. Define
$$\mathrm{relax}(C) = \begin{cases} \dfrac{1 + v_0\cdot v_{i_1}}{2} & \text{if } k = 1,\\[2mm] \dfrac{4 - (-v_0 + v_{i_1})\cdot(-v_0 + v_{i_2})}{4} & \text{if } k = 2,\\[2mm] \min\Bigl\{\dfrac{4 - (-v_0 + v_{i_1})\cdot(v_{i_2} + v_{i_3})}{4},\ \dfrac{4 - (-v_0 + v_{i_2})\cdot(v_{i_3} + v_{i_1})}{4},\ \dfrac{4 - (-v_0 + v_{i_3})\cdot(v_{i_1} + v_{i_2})}{4}\Bigr\} & \text{if } k = 3, \end{cases}$$


$$\max \sum_{C_j\in\mathcal{C}} w(C_j)\,z_j$$
subject to
$$\sum_{i=1}^{k_j}\frac{1 + v_0\cdot v_{j_i}}{2} \ge z_j \quad \forall C_j = x_{j_1}\vee x_{j_2}\vee\cdots\vee x_{j_{k_j}}\in\mathcal{C}, \qquad(23)$$
$$\frac{1 + v_0\cdot v_{j_i}}{2} \le z_j \quad \forall\, 1\le i\le k_j,\ \forall C_j = x_{j_1}\vee x_{j_2}\vee\cdots\vee x_{j_{k_j}}\in\mathcal{C}, \qquad(24)$$
$$\frac{1}{k-1}\sum_{C\in P_j}\mathrm{relax}(C) \ge z_j \quad \forall C_j\in\mathcal{C}_k,\ \forall k\ge2, \qquad(25)$$
$$\frac{1}{\binom{k-1}{2}}\sum_{C\in Q_j}\mathrm{relax}(C) \ge z_j \quad \forall C_j\in\mathcal{C}_k,\ \forall k\ge3, \qquad(26)$$
$$\mathrm{fg}(i_1,i_2,i_3) \ge -1 \quad \forall\, i_1, i_2, i_3 \text{ with } 0\le i_1<i_2<i_3\le n, \qquad(27)$$
$$\mathrm{Canon}(v,z) \ge b, \qquad(28)$$
$$v_i\in S^{2n} \quad 0\le i\le 2n, \qquad(29)$$
$$v_{n+i} = -v_i \quad 1\le i\le n, \qquad(30)$$
$$0\le z_j\le 1 \quad \forall C_j\in\mathcal{C}. \qquad(31)$$

FIG. 2. The semidefinite relaxation (AW).

as Karloff and Zwick did [11]. They show that, for a clause $C_j$, $\mathrm{relax}(C_j) \ge z_j$ is a valid constraint. Finally, define
$$\mathrm{fg}(i_1,i_2,i_3) = \min\bigl\{\,v_{i_1}\cdot v_{i_2} + v_{i_1}\cdot v_{i_3} + v_{i_2}\cdot v_{i_3},\ -v_{i_1}\cdot v_{i_2} + v_{i_1}\cdot v_{i_3} - v_{i_2}\cdot v_{i_3},\ -v_{i_1}\cdot v_{i_2} - v_{i_1}\cdot v_{i_3} + v_{i_2}\cdot v_{i_3},\ v_{i_1}\cdot v_{i_2} - v_{i_1}\cdot v_{i_3} - v_{i_2}\cdot v_{i_3}\,\bigr\}.$$
Feige and Goemans [4] show that $\mathrm{fg}(i_1,i_2,i_3) \ge -1$ is a valid constraint for all $0\le i_1<i_2<i_3\le n$.

Now we are ready to formulate MAX SAT as a vector programming problem. The formulation (AW) is shown in Fig. 2. It can be considered to be a semidefinite programming problem in the standard way (see [7], for example). Let $V = (v_0, v_1, \ldots, v_{2n})$ and let $Y = V^TV$, so that $y_{ij} = v_i\cdot v_j$. Then the matrix $Y = (y_{i_1i_2})$ is symmetric and positive semidefinite, and $y_{ii} = 1$ ($i = 0, 1, \ldots, 2n$). Note that $y_{n+i_1,\,n+i_2} = y_{i_1i_2}$ and $y_{n+i_1,\,i_2} = y_{i_1,\,n+i_2} = -y_{i_1i_2}$ ($1\le i_1, i_2\le n$). From now on we will not distinguish between semidefinite programming and vector programming.
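As a sanity check on these definitions, the snippet below (our illustration, with vectors represented as plain Python tuples, not code from the paper) verifies that on "integral" solutions, where every $v_i$ equals $\pm v_0$, $\mathrm{relax}(C)$ equals the truth value of $C$ and $\mathrm{fg}$ always evaluates to $-1$:

```python
import itertools

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def relax(v0, vs):
    """relax(C) from Section 3.1 for a clause on k = len(vs) <= 3 literal vectors."""
    k = len(vs)
    if k == 1:
        return (1 + dot(v0, vs[0])) / 2
    if k == 2:
        a = [x - y for x, y in zip(vs[0], v0)]   # -v0 + v_{i1}
        b = [x - y for x, y in zip(vs[1], v0)]   # -v0 + v_{i2}
        return (4 - dot(a, b)) / 4
    terms = []
    for p, q, r in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        a = [x - y for x, y in zip(vs[p], v0)]       # -v0 + v_{ip}
        s = [x + y for x, y in zip(vs[q], vs[r])]    # v_{iq} + v_{ir}
        terms.append((4 - dot(a, s)) / 4)
    return min(terms)

def fg(v1, v2, v3):
    p12, p13, p23 = dot(v1, v2), dot(v1, v3), dot(v2, v3)
    return min(p12 + p13 + p23, -p12 + p13 - p23,
               -p12 - p13 + p23, p12 - p13 - p23)

v0 = (1.0,)
for bits in itertools.product([False, True], repeat=3):
    vs = [v0 if b else (-1.0,) for b in bits]
    for k in (1, 2, 3):
        # relax(C) reproduces the clause's truth value on +-1 assignments
        assert relax(v0, vs[:k]) == float(any(bits[:k]))
    assert fg(*vs) == -1.0
```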


We now argue that (AW) is a valid relaxation of the MAX SAT problem. The first set of constraints (23) corresponds to the set of LP relaxation constraints and implies that if $C_j = 1$ (i.e., $z_j = 1$) then one of the literals in $C_j$ is true. Thus, they hold for any truth assignment $x$ if we identify $x$ with $v$ such that $v_0 = (1, 0, \ldots, 0)$, $v_i = (1, 0, \ldots, 0)$ if $x_i = 1$, and $v_i = (-1, 0, \ldots, 0)$ if $x_i = 0$. Similarly, the set of constraints (24) implies that $z_j = 1$ if one of the literals in $C_j$ is true, and thus they hold for any truth assignment. The next two sets of constraints (25) and (26) represent a relaxation of each MAX SAT clause by sets of MAX 2SAT and MAX 3SAT clauses, respectively. If any literal in $C_j\in\mathcal{C}_k$ is set true, then at least $k-1$ length-2 clauses in $P_j$ are true and at least $\binom{k-1}{2}$ length-3 clauses in $Q_j$ are true. Thus, the constraints hold for any truth assignment $x$. The other constraints also hold for any truth assignment, as was shown in the papers in which they were defined. Thus, (AW) can be considered to be a relaxation of MAX SAT, which is a combination of an LP relaxation and MAX 2SAT, MAX 3SAT, and MAX SAT relaxations based on semidefinite programming.

3.2. The Algorithm

Our algorithm for MAX SAT is to find an optimal solution $(V^*, z^*)$ to the SDP (AW) and use it to produce four different solutions, as described below. We then output the best solution of the four. Let $V^* = (v^*_0, v^*_1, \ldots, v^*_{2n})$. We run the following four polynomial-time algorithms.

Algorithm 1. Use a random truth assignment $x^p = (x^p_i)$ with $x^p_i = f^a_1\bigl(\frac{1 + v^*_0\cdot v^*_i}{2}\bigr)$, corresponding to the LP relaxation of Goemans and Williamson [6], where $f^a_1$ is the function defined in Eq. (4) for a specified value of $a$.

Algorithm 2. Use a random truth assignment corresponding to the MAX 2SAT algorithm given by Feige and Goemans [4], as follows. Let $U^* = (u^*_0, u^*_1, \ldots, u^*_{2n})$ be obtained from $V^* = (v^*_0, v^*_1, \ldots, v^*_{2n})$ by slightly rotating it. More specifically, let $u^*_0 = v^*_0$ and let $u^*_i$ be the vector, coplanar with $v^*_0$ and $v^*_i$, on the same side of $v^*_0$ as $v^*_i$ is, which forms an angle with $v^*_0$ equal to $f(\theta_i)$ for some function $f$, where $\theta_i$ is the angle between $v^*_0$ and $v^*_i$. Feige and Goemans use the function
$$f(\theta) = \theta + 0.806765\Bigl[\frac{\pi}{2}(1-\cos\theta) - \theta\Bigr].$$
Note that $f(\pi-\theta) = \pi - f(\theta)$ and, thus, $u^*_{n+i} = -u^*_i$ for each $i$, $1\le i\le n$. Then take a random $(2n+1)$-dimensional unit vector $r$ uniformly from the unit sphere and set $x_i$ to be true if and only if $\mathrm{sgn}(u^*_i\cdot r) = \mathrm{sgn}(u^*_0\cdot r)$.
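For concreteness, here is one way the rotate-then-round step can be implemented (our sketch; the paper only specifies $f$ and the hyperplane cut). Passing `f=lambda t: t` recovers the unrotated rounding used in Algorithm 3 below:

```python
import math
import random

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def fg_angle(theta):
    # the Feige-Goemans rotation function f(theta)
    return theta + 0.806765 * (math.pi / 2 * (1 - math.cos(theta)) - theta)

def rotate_toward(v0, v, f):
    """Unit vector coplanar with v0 and v, on the same side of v0 as v,
    forming angle f(theta) with v0, where theta is the angle between v0 and v."""
    c = max(-1.0, min(1.0, dot(v0, v)))
    theta = math.acos(c)
    if theta < 1e-12 or math.pi - theta < 1e-12:
        return list(v)  # parallel or antiparallel: f(0) = 0, f(pi) = pi
    w = [vi - c * ui for vi, ui in zip(v, v0)]   # component of v orthogonal to v0
    n = math.sqrt(dot(w, w))
    w = [wi / n for wi in w]
    phi = f(theta)
    return [math.cos(phi) * ui + math.sin(phi) * wi for ui, wi in zip(v0, w)]

def round_hyperplane(v0, vectors, f=fg_angle):
    """Random-hyperplane rounding: x_i is true iff u_i and u_0 fall on the
    same side of a random hyperplane through the origin."""
    r = [random.gauss(0.0, 1.0) for _ in v0]
    s0 = dot(v0, r) >= 0
    return [(dot(rotate_toward(v0, v, f), r) >= 0) == s0 for v in vectors]
```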


Algorithm 3. Use a random truth assignment obtained by using $f(\theta) = \theta$ in Algorithm 2. This random truth assignment was originally proposed by Goemans and Williamson [5], and it corresponds to the MAX 3SAT algorithm given by Karloff and Zwick [11].

Algorithm 4. Use a random truth assignment corresponding to the MAX SAT algorithm given by Zwick [16]. Let $V^* = (v^*_0, v^*_1, \ldots, v^*_{2n})$ and let $Y = V^{*T}V^*$. Then set $Y' = (\cos^2\beta)Y + (\sin^2\beta)I$ for $\beta = 0.4555$, where $I$ is the $(2n+1)\times(2n+1)$ identity matrix. Let $Y' = U^TU$, for $U = (u^*_0, u^*_1, \ldots, u^*_{2n})$. Obtain the random truth assignment from the $u^*_i$ as in Algorithm 3.
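The transformation $Y' = (\cos^2\beta)Y + (\sin^2\beta)I$ never needs $Y$ explicitly: a standard realization of this "outward rotation" appends one fresh orthogonal coordinate per vector. The sketch below is ours, not from the paper; note that preserving the pairing $u^*_{n+i} = -u^*_i$ would additionally require giving paired vectors opposite tail entries.

```python
import math

def outward_rotate(vectors, beta=0.4555):
    """Realize Y' = (cos^2 beta) Y + (sin^2 beta) I on the Gram matrix:
    u_i = (cos beta) v_i concatenated with (sin beta) e_i, so that
    u_i . u_i = 1 and u_i . u_j = cos(beta)^2 (v_i . v_j) for i != j."""
    m = len(vectors)
    c, s = math.cos(beta), math.sin(beta)
    out = []
    for i, v in enumerate(vectors):
        tail = [0.0] * m
        tail[i] = s
        out.append([c * x for x in v] + tail)
    return out
```

The result can then be fed to the random-hyperplane rounding of Algorithm 3.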

3.3. Analysis

We now turn to the proof of our main theorem.

Theorem 3.1. Let $F(V^*, z^*)$ denote the expected value of the algorithm above, with $a = \frac34$ for Algorithm 1. Then
$$F(V^*, z^*) \ge 0.7846\sum_{k\ge1} W^*_k, \qquad(32)$$
where $W^*_k = \sum_{C_j\in\mathcal{C}_k} w(C_j)z^*_j$. Let $F'(V^*, z^*)$ denote the expected value of a modified version of the algorithm in which $a = \alpha = \sqrt2 - \frac12$ in Algorithm 1. Then, if Zwick's algorithm performs as conjectured,
$$F'(V^*, z^*) \ge 0.8331\sum_{k\ge1} W^*_k. \qquad(33)$$
This implies that we have a 0.7846-approximation algorithm for MAX SAT. Furthermore, if Zwick's algorithm behaves as conjectured, we obtain a 0.8331-approximation algorithm for MAX SAT.

Proof of Theorem 3.1. The expected value of the solution returned by the algorithm above is at least as good as the expected value of an algorithm that uses Algorithm $i$ with probability $p_i$, where $p_1 + p_2 + p_3 + p_4 = 1$. From the arguments in Section 2, the probability that a clause $C_j\in\mathcal{C}_k$ is satisfied by Algorithm 1 is at least $\gamma^a_kz_j$, where $\gamma^a_k$ is defined in Eq. (6). Similarly, from the arguments in [4, 5, 7], the probability that a clause $C_j\in\mathcal{C}_k$ is satisfied by Algorithm 2 is at least
$$0.93109\cdot\frac{1}{\binom k2}\sum_{C\in P_j}\mathrm{relax}(C) = 0.93109\cdot\frac2k\cdot\frac{1}{k-1}\sum_{C\in P_j}\mathrm{relax}(C) \ge 0.93109\cdot\frac2k\,z^*_j \quad\text{for } k\ge2$$


and at least $0.97653z^*_j$ for $k = 1$. By an analysis of Karloff and Zwick [11] and an argument similar to one in [5, 7], the probability that a clause $C_j\in\mathcal{C}_k$ is satisfied by Algorithm 3 is at least
$$\frac78\cdot\frac{1}{\binom k3}\sum_{C\in Q_j}\mathrm{relax}(C) = \frac78\cdot\frac3k\cdot\frac{1}{\binom{k-1}2}\sum_{C\in Q_j}\mathrm{relax}(C) \ge \frac3k\cdot\frac78\,z^*_j \quad\text{for } k\ge3,$$
and at least $0.87856z^*_j$ for $k = 1, 2$.

Zwick [16, 17] conjectures that the worst-case ratio of the value of a certain semidefinite programming bound to the performance of a certain algorithm is attained at a certain point; if the conjecture is true, he obtains an analytic expression for the probability that a clause $C_j\in\mathcal{C}_k$ is satisfied by Algorithm 4. This analytic expression is quite complicated, and we do not repeat it here. Instead, we use the fact that the probability that a clause $C_j\in\mathcal{C}_k$ is satisfied by Algorithm 4 is at least $\alpha_kz^*_j$, where $\alpha_k \ge 0.7977$; the values of $\alpha_k$ for $k = 1, \ldots, 39$ were obtained from Zwick [17], who computed the value of the analytic expression at these points. We list these values in the Appendix. Importantly, $\alpha_1 = \alpha_{35} = 0.7977$ and $\alpha_k$ is increasing for $k > 35$ [17].

We now analyze what happens if we use Algorithm $i$ with probability $p_i$. The expected value of this algorithm is at least $(\gamma^a_1p_1 + 0.97653p_2 + 0.87856p_3 + 0.7977p_4)w_jz^*_j$ on clauses $C_j\in\mathcal{C}_1$. Similarly, the value of the algorithm is at least $(\gamma^a_2p_1 + 0.93109p_2 + 0.87856p_3 + 0.87995p_4)w_jz^*_j$ on clauses $C_j\in\mathcal{C}_2$, at least $(\gamma^a_3p_1 + 0.93109\cdot\frac23p_2 + \frac78p_3 + \frac78p_4)w_jz^*_j$ on clauses $C_j\in\mathcal{C}_3$, and at least $(\gamma^a_kp_1 + 0.93109\cdot\frac2kp_2 + \frac78\cdot\frac3kp_3 + \alpha_kp_4)w_jz^*_j$ on clauses $C_j\in\mathcal{C}_k$ for $k\ge4$. If we can show that this quantity is at least $\beta w(C_j)z^*_j$ for all clauses $C_j$, then we obtain an approximation algorithm with performance guarantee $\beta$, since the expected value of the solution is at least $\beta\sum_{C_j\in\mathcal{C}} w(C_j)z^*_j \ge \beta W^* \ge \beta W$.

Suppose first that we do not assume Zwick's conjecture, and we set $a = \frac34$ (that is, we use the function $f^{3/4}_1$ in Algorithm 1). If we set $p_4 = 0$, $p_1 = 0.7846081$, $p_2 = 0.1316834$, and $p_3 = 1 - (p_1 + p_2) = 0.0837085$, then we have
$$\frac34p_1 + 0.97653p_2 + 0.87856p_3 \ge \frac34p_1 + 0.93109p_2 + 0.87856p_3 \ge p_1 \ge 0.7846 \quad\text{for } k = 1, 2,$$
and, for $k\ge3$,
$$\gamma^{3/4}_kp_1 + \frac{2\times0.93109}{k}p_2 + \frac3k\cdot\frac78p_3 \ge p_1 \ge 0.7846.$$

200 asano and williamson

This can be proved by the following observation. Since, for k ≥ 3,

k�1− γ3/4k � = k

(12

(34

)k−1(1− 1

3�k − 1�)k−1)

takes the maximum value for k = 4, we have

�2 × 0�93109�p2 + 218 p3

k�1− γ3/4k �

≥ �2 × 0�93109�p2 + 218 p3

4�1− γ3/44 �

≥ p1�

This implies the inequality (32), and thus the algorithm has performanceguarantee 0.7846.We now prove inequality (33). Suppose we use all four algorithms, and

we set a = √2 − 1

2 . We attempt to determine the best weighting of thealgorithms by using a linear program, in which the probabilities pi and theperformance guarantee β are variables, and there is a constraint for everyclause size from 1 to 39 (e.g., for clause size 1, the constraint is 0�91421p1 +0�97653p2 + 0�87856p3 + 0�79778p4 ≥ β�. We attempt to maximize β sub-ject to these constraints and obtain the solution p1 = 0�303636� p4 =0�696364� β = 0�8331� p2 = p3 = 0. To verify the answer, one need onlycheck that p1γ

ak + p4αk ≥ 0�8331 for k ranging from 1 to 35, since both αk

and γak are increasing for k > 35.
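Given the $\alpha_k$ values from the Appendix, this verification is mechanical; the check below (ours, for illustration) confirms $p_1\gamma^\alpha_k + p_4\alpha_k \ge 0.8331$ for $k = 1, \ldots, 35$, with the tightest clause lengths ($k = 1$ and $k = 6$) sitting within about $3\times10^{-5}$ of the bound:

```python
import math

def gamma(a, k):
    # gamma^a_k = min(gamma^a_{k,1}, gamma^a_{k,2}); gamma^a_1 = a
    if k == 1:
        return a
    g1 = 1 - 0.5 * a ** (k - 1) * (1 - (1 - 1 / (2 * a)) / (k - 1)) ** (k - 1)
    g2 = 1 - a ** k * (1 - 1 / k) ** k
    return min(g1, g2)

# alpha_k for k = 1..35, from the Appendix
alphas = [
    0.79777594, 0.87995339, 0.87500007, 0.86480993, 0.85523413,
    0.84695541, 0.83991613, 0.83393899, 0.82884858, 0.82449552,
    0.82075749, 0.81753539, 0.81474892, 0.81233263, 0.81023288,
    0.80840537, 0.80681324, 0.80542565, 0.80421661, 0.80316408,
    0.80224927, 0.80145609, 0.80077067, 0.80018100, 0.79967666,
    0.79924856, 0.79888872, 0.79859015, 0.79834667, 0.79815282,
    0.79800376, 0.79789519, 0.79782326, 0.79778453, 0.79777594,
]

a = math.sqrt(2) - 0.5
p1, p4 = 0.303636, 0.696364
worst = min(p1 * gamma(a, k) + p4 * ak for k, ak in enumerate(alphas, 1))
print(worst >= 0.8331)  # -> True
```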

Since the algorithms in [9, 16] are parametrized, we expect that small improvements to our bounds can be made by altering some of these parameters.

APPENDIX

Performance of Zwick's Algorithm

Here we list the performance of Zwick's algorithm for various clause lengths [17].

 k    αk
 1    0.79777594
 2    0.87995339
 3    0.87500007
 4    0.86480993
 5    0.85523413
 6    0.84695541
 7    0.83991613
 8    0.83393899
 9    0.82884858
10    0.82449552
11    0.82075749
12    0.81753539
13    0.81474892
14    0.81233263
15    0.81023288
16    0.80840537
17    0.80681324
18    0.80542565
19    0.80421661
20    0.80316408
21    0.80224927
22    0.80145609
23    0.80077067
24    0.80018100
25    0.79967666
26    0.79924856
27    0.79888872
28    0.79859015
29    0.79834667
30    0.79815282
31    0.79800376
32    0.79789519
33    0.79782326
34    0.79778453
35    0.79777594
36    0.79779472
37    0.79783838
38    0.79790468
39    0.79799160

ACKNOWLEDGMENTS

The first author was supported in part by a Grant in Aid for Scientific Research of the Ministry of Education, Science, Sports and Culture of Japan and The Institute of Science and Engineering, Chuo University. We thank Howard Karloff and Uri Zwick for useful comments and Uri Zwick for calculating the bounds αk of his algorithm for us.

REFERENCES

1. T. Asano, Approximation algorithms for MAX SAT: Yannakakis vs. Goemans–Williamson,in “Proceedings of the 5th Israel Symposium on Theory of Computing and Systems,”pp. 24–37, 1997.

2. T. Asano, T. Ono, and T. Hirata, Approximation algorithms for the maximum satisfiabilityproblem, Nordic J. Comput. 3 (1996), 388–404.

3. J. Chen, D. Friesen, and H. Zheng, Tight bound on Johnson’s algorithm for MAX SAT,J. Comput. System Sci. 58 (1999), 622–640.

4. U. Feige and Michel X. Goemans, Approximating the value of two prover proof systems,with applications to MAX 2SAT and MAX DICUT, in “Proceedings of the 3rd IsraelSymposium on Theory of Computing and Systems,” pp. 182–189, 1995.

5. Michel X. Goemans and David P. Williamson, .878-approximation algorithms for MAX CUT and MAX 2SAT, in "Proceedings of the 26th ACM Symposium on the Theory of Computing," pp. 422–431, 1994.

6. Michel X. Goemans and David P. Williamson, New 3/4-approximation algorithms for themaximum satisfiability problem, SIAM J. Discrete Math. 7 (1994), 656–666.

7. Michel X. Goemans and David P. Williamson, Improved approximation algorithms formaximum cut and satisfiability problems using semidefinite programming, J. ACM 42(1995), 1115–1145.

8. Johan Håstad, Some optimal inapproximability results, in "Proceedings of the 29th ACM Symposium on the Theory of Computing," pp. 1–10, 1997.

9. Eran Halperin and Uri Zwick, Approximation algorithms for MAX 4-SAT and rounding procedures for semidefinite programs, J. Algorithms 40 (2001), 184–211.

10. David S. Johnson, Approximation algorithms for combinatorial problems, J. Comput. System Sci. 9 (1974), 256–278.

11. Howard Karloff and Uri Zwick, A 7/8-approximation algorithm for MAX 3SAT? in“Proceedings of the 38th IEEE Symposium on the Foundations of Computer Science,”pp. 406–415, 1997.

12. R. Kohli and R. Krishnamurti, Average performance of heuristics for satisfiability, SIAMJ. Discrete Math. 2 (1989), 508–523.

13. K. Lieberherr and E. Specker, Complexity of partial satisfaction, J. ACM 28 (1981),411–421.

14. Luca Trevisan, Gregory B. Sorkin, Madhu Sudan, and David P. Williamson, Gadgets,approximation, and linear programming, SIAM J. Comput. 29 (2000), 2074–2097.

15. Mihalis Yannakakis, On the approximation of maximum satisfiability, J. Algorithms 17(1994), 475–502.

16. Uri Zwick, Outward rotations: A tool for rounding solutions of semidefinite programmingrelaxations, with applications to MAX CUT and other problems, in “Proceedings of the31st ACM Symposium on the Theory of Computing,” pp. 679–687, 1999.

17. Uri Zwick, Personal communication, 1999.