
Information Processing Letters 80 (2001) 171–177

Approximation algorithms for maximum linear arrangement

Refael Hassin*, Shlomi Rubinstein
Department of Statistics and Operations Research, School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel

Received 28 February 2000; received in revised form 7 February 2001
Communicated by L. Boasson

Abstract

The GENERALIZED MAXIMUM LINEAR ARRANGEMENT PROBLEM is to compute, for a given vector x ∈ R^n and an n × n non-negative symmetric matrix W = (w_{i,j}), a permutation π of {1,...,n} that maximizes ∑_{i,j} w_{π_i,π_j} |x_j − x_i|. We present a fast 1/3-approximation algorithm for the problem. We present a randomized approximation algorithm with a better performance guarantee for the special case where x_i = i, i = 1,...,n. Finally, we introduce a 1/2-approximation algorithm for MAX k-CUT WITH GIVEN SIZES OF PARTS. This matches the bound obtained by Ageev and Sviridenko, but without using linear programming. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Algorithms; Algorithmical approximation

1. Introduction

We define the GENERALIZED LINEAR ARRANGEMENT PROBLEM as the problem of computing, for a given vector x = (x_1 ≤ ··· ≤ x_n) ∈ R^n of 'points' and an n × n non-negative symmetric matrix w = (w_{i,j}) of 'weights', a permutation π of {1,...,n} so that ∑_{i,j} w_{π_i,π_j} |x_j − x_i| is optimized. In an illustrative example, consider n linearly ordered points at which a set of n machines is to be located, where w_{i,j} is a measure of association of the ith and jth machines. Our interest is in the maximization version, the GENERALIZED MAXIMUM LINEAR ARRANGEMENT PROBLEM (GMLAP), where the goal is to maximize ∑_{i,j} w_{π_i,π_j} |x_j − x_i|, that is, to keep the machines far from each other (compare with [10]).

* Corresponding author.
E-mail addresses: [email protected] (R. Hassin), [email protected] (S. Rubinstein).

The special (NP-hard) case in which x_i = i is known as the LINEAR ARRANGEMENT PROBLEM, and an O(log n)-approximation for the minimization version is given in [9] (see also [6]). The minimization version is polynomially solvable if we are allowed to locate several elements (or none) at any point, even when some of the locations are already predetermined [8].

Another interesting (also NP-hard) special case of the problem is the MAX CUT PROBLEM WITH GIVEN SIZES OF PARTS, where for some p ≤ n/2, x_1 = ··· = x_p = 0 and x_{p+1} = ··· = x_n = 1. Ageev and Sviridenko [2] applied a novel method of rounding linear programming relaxations and developed a 1/2-approximation algorithm for this problem. They also obtained a similar result for a more general MAX k-CUT PROBLEM in which integers p_1,...,p_k are given and the goal is to compute a k-cut, that is, a partition S_1,...,S_k of {1,...,n} with |S_i| = p_i, i = 1,...,k, which maximizes the weight of pairs whose elements are in different parts of the partition.

The GMLAP is a special case of the MAXIMUM QUADRATIC ASSIGNMENT PROBLEM, which also includes as special cases fundamental problems like the MAXIMUM TRAVELING SALESMAN PROBLEM. In this problem two n × n non-negative symmetric matrices A = (a_{i,j}) and B = (b_{i,j}) are given, and the objective is to compute a permutation π of {1,...,n} so that

∑_{i,j ∈ V, i ≠ j} a_{π(i),π(j)} b_{i,j}

is maximized. A 1/4-approximation algorithm for this problem, under the assumption that the values of one of the matrices satisfy the triangle inequality, is given in [4]. Of course, this bound applies also to the GMLAP.

We present a 1/3-approximation algorithm for the GMLAP. An interesting feature of our algorithm is that it simultaneously approximates the max cut problems with sizes p and n − p of parts for all possible values of p. We present a randomized 1/2-approximation for the MAXIMUM LINEAR ARRANGEMENT PROBLEM. Finally, we present an alternative 1/2-approximation for the MAX k-CUT PROBLEM WITH GIVEN SIZES OF PARTS. Unlike the algorithm of Ageev and Sviridenko, the latter algorithm does not use linear programming.

We first describe, in Section 2, a generic randomized approximation algorithm for MAX CUT WITH GIVEN SIZES OF PARTS. The analysis of this special case will be used in Section 3, where we obtain our main result on the GMLAP. In Section 4 we present our algorithm for MAXIMUM LINEAR ARRANGEMENT, and in Section 5 we treat the MAX k-CUT PROBLEM WITH GIVEN SIZES OF PARTS.

For disjoint S, T ⊂ V we mean by {i,j} ∈ (S,T) that either i ∈ S and j ∈ T or j ∈ S and i ∈ T. We denote

w(S,T) = ∑_{{i,j} ∈ (S,T)} w_{i,j}

and

w(i,T) = w({i},T).

We use w(i,V) for w(i, V \ {i}). Finally, we denote by opt the optimal solution value of the problem under consideration.

2. MAX CUT WITH GIVEN SIZES OF PARTS

Given an undirected graph G = (V,E) with |V| = n and edge weights w_{i,j}, {i,j} ∈ E, a cut is a partition (S,T) of V, and its weight is w(S,T). The problem is to compute a maximum weight cut such that |S| = p. Without loss of generality, we assume that p ≤ n/2.

Theorem 1. Let w(S,T) be the expected weight of the cut returned by Max_Cut (Fig. 1). Then, w(S,T) ≥ opt/3.

Proof. The probability for an edge {i,j} ∈ (P, V \ P) to be separated by (S,T) is 1/2, since it corresponds to the event that i is selected to S. The probability for an edge {i,j} ∈ P × P to be separated by (S,T) is

p/(2p − 1) > 1/2,

since it corresponds to the event that exactly one of i and j is selected. Consider an optimal solution and denote by OPT the set of size p in it. Let

r = |OPT ∩ P|,   s = w(OPT ∩ P, P \ OPT),

and

t = w(OPT ∩ P, V \ (OPT ∪ P)).

Max_Cut
input
1. A graph G = (V,E), V = {1,...,n}, with edge weights w_{i,j}, {i,j} ∈ E.
2. An integer p ≤ n/2.
returns A cut (S,T) such that |S| = p.
begin
P := {i_1,...,i_{2p} ∈ V | w(i,V) ≥ w(j,V) ∀ i ∈ P, j ∉ P}.
Randomly choose p vertices from P to form S.
T := V \ S.
return (S,T).
end Max_Cut

Fig. 1. Algorithm Max_Cut.
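For concreteness, here is a minimal Python sketch of Max_Cut; the function names and the dictionary representation of the edge weights are our own illustrative choices, not part of the paper.

import random

def max_cut_given_size(n, weights, p):
    """Max_Cut sketch: weights maps frozenset({i, j}) -> w_{i,j} >= 0,
    vertices are 1..n, and p <= n/2 is the required size of S."""
    # w(i, V): total weight incident to vertex i.
    incident = {i: 0.0 for i in range(1, n + 1)}
    for edge, w in weights.items():
        for v in edge:
            incident[v] += w
    # P: the 2p vertices with the largest incident weight.
    P = sorted(range(1, n + 1), key=lambda v: incident[v], reverse=True)[:2 * p]
    # S: p vertices chosen uniformly at random from P.
    S = set(random.sample(P, p))
    T = set(range(1, n + 1)) - S
    return S, T

def cut_weight(weights, S, T):
    # Weight of the edges with exactly one endpoint in S and one in T.
    return sum(w for e, w in weights.items()
               if len(e & S) == 1 and len(e & T) == 1)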


Fig. 2.

Note that by the definition of P, there is some threshold k such that w(i,V) is at least k for i ∈ P and at most k for i ∈ V \ P (see Fig. 2). Also note that edges with two ends in P are counted twice in s + ∑_{i ∈ P \ OPT} w(i,V), so that

w(S,T) ≥ (s + ∑_{i ∈ P \ OPT} w(i,V))/4 + t/2 ≥ (s + (2p − r)k)/4 + t/2 ≡ A,

and

opt ≤ (p − r)k + t + s ≡ B.

We note that to compute a minimum possible value for the ratio A/B, given that it can be made smaller than 1/2, we can assume without loss of generality that t = 0. Let s = α(2p − r)k. We now distinguish two cases.

Suppose first that α ≤ 1. We use

w(S,T) ≥ k(1 + α)(2p − r)/4

and

opt ≤ k((2p − r)(1 + α) − p),

so that

w(S,T)/opt ≥ (1/4) · (2p − r)(1 + α) / ((2p − r)(1 + α) − p).

This ratio is monotone decreasing in α, so for the worst case we substitute α = 1 and obtain

w(S,T)/opt ≥ (2p − r)/(2(3p − 2r)).

This expression is monotone increasing in r, and it is minimized when r = 0, in which case we obtain the ratio 1/3.

begin
Sort V in non-increasing order of w(i,V).
(For simplicity, suppose that w(1,V) ≥ ··· ≥ w(n,V).)
for i = 1,...,p
    Assign 2i − 1 to S with probability 1/2.
    Assign 2i to S otherwise.
end for

Fig. 3. Modified Max_Cut.

Suppose now that α > 1. We use the inequalities

w(S,T) ≥ s/2

(since the edges defining s are in P × P) and

opt ≤ (p − r)k + s ≤ ((2p − r)/2) k + s = s/(2α) + s ≤ (3/2) s,

so that

w(S,T)/opt ≥ 1/3. ✷

The bound of Theorem 1 is asymptotically achievable: Let G consist of a star with 2p vertices, p − 1 vertex-disjoint edges, and isolated vertices. The optimal solution is to choose the center of the star and one end of each edge to form a set of p vertices. Thus, opt = 3p − 2. We could form P from the 2p vertices of the star, and then w(S,T) = p − 1/2. The corresponding ratio is asymptotically 1/3.

Algorithm Max_Cut is fast and simple, but for our results in the next section we will use a variation of it, which is also easier to derandomize (as we describe in the next section). In this variation we change the main step of the algorithm as described in Fig. 3.

Theorem 2. Let w(S,T) be the expected weight of the partition returned by Max_Cut with the modification given in Fig. 3. Then,

w(S,T) ≥ opt/3.

Proof. The proof is as in Theorem 1, with P = {1,...,2p}. Note that the probability for an edge in (P, V \ P) to be separated by (S,T) is 1/2, as in Theorem 1. The probability for an edge {2i − 1, 2i}, for some i = 1,...,p, to be separated by (S,T) is 1, and for other edges in P × P this probability is 1/2. ✷

3. GENERALIZED MAXIMUM LINEAR ARRANGEMENT

We start by presenting an alternative way to compute the weight of a solution π to the GMLAP (a similar representation is used in [8]): For p = 1,...,n − 1 let

C_p = ∑_{i=1}^{p} ∑_{j=p+1}^{n} w_{π_i,π_j}.

Note that the problem of maximizing C_p over all permutations π of {1,...,n} is the max cut problem with sizes of parts p and n − p. Now we observe that

∑_{i,j} w_{π_i,π_j} |x_j − x_i| = ∑_{p=1}^{n−1} C_p |x_{p+1} − x_p|.   (1)

In other words, the contribution of the interval [x_p, x_{p+1}] to the weight of the solution is C_p |x_{p+1} − x_p|. Our algorithm Randomized_GMLA (Fig. 4) approximates simultaneously all of these cut problems with factor 1/3 each, and consequently the same bound applies to the GMLAP instance as well. We note that the permutation defining the approximate solution is independent of the vector (x_1,...,x_n).
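Identity (1) is what lets the per-cut guarantees be combined. The following small Python check (illustrative only; the names and the random instance are ours) computes both sides of (1); up to floating-point error they coincide.

import random

def both_sides_of_identity(n=6, seed=0):
    rng = random.Random(seed)
    # Symmetric non-negative weights with zero diagonal, and sorted points x.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.random()
    x = sorted(rng.random() for _ in range(n))
    pi = list(range(n))
    rng.shuffle(pi)                      # an arbitrary permutation
    # Left-hand side: sum over unordered pairs of w_{pi_i,pi_j} |x_j - x_i|.
    lhs = sum(w[pi[i]][pi[j]] * abs(x[j] - x[i])
              for i in range(n) for j in range(i + 1, n))
    # Right-hand side: sum over p of C_p * (x_{p+1} - x_p).
    rhs = 0.0
    for p in range(1, n):
        C_p = sum(w[pi[i]][pi[j]] for i in range(p) for j in range(p, n))
        rhs += C_p * (x[p] - x[p - 1])
    return lhs, rhs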

Theorem 3. Let π be the permutation returned by Randomized_GMLA.

(1) Let S_p = {π_1,...,π_p}, T_p = {π_{p+1},...,π_n}. Then (S_p,T_p) is a randomized 1/3-approximation for the max cut problem with sizes of parts p and n − p.

(2) π is a randomized 1/3-approximation for the GMLAP.

Proof. By Theorem 2, for p = 1,...,n − 1, the value of C_p in the output of the algorithm is a randomized 1/3-approximation for the respective max cut problem. The proof of the second part of the theorem now follows from (1). ✷

Derandomizing the algorithm is particularly simple. We apply the 'method of conditional expectations' (see [3]). In the first iteration we assign π_1 := 1 and π_n := 2. Consider the ith iteration for i > 1. Let S_{i−1} and T_{i−1} be the partial permutation that has already been set in the first i − 1 iterations. We should set π_i to either 2i − 1 or 2i, and π_{n−i+1} to the other value. This is done so that the expected value of the solution is maximized given S_i and T_i, assuming that the following assignments will be done according to Randomized_GMLA. We observe that the expected weight gained by the latter assignments is independent of the current decision. Therefore, the current assignment should be done in a way that maximizes the weight of the edges connecting the vertices 2i − 1 and 2i to the previously assigned vertices.

Randomized_GMLA
input A non-negative symmetric matrix W = (w_{i,j}, i,j = 1,...,n).
returns A permutation π_1,...,π_n of V = {1,...,n}.
begin
Sort V in non-increasing order of w(i,V).
(For simplicity, suppose that w(1,V) ≥ ··· ≥ w(n,V).)
for i = 1,...,⌊n/2⌋
    Set π_i := 2i − 1 and π_{n−i+1} := 2i with probability 1/2.
    Set π_i := 2i and π_{n−i+1} := 2i − 1 otherwise.
    If n is odd, set π_{(n+1)/2} := n.
end for
return π.
end Randomized_GMLA

Fig. 4. Algorithm Randomized_GMLA.


It is easy to see that under this rule the algorithm sets π_i := 2i − 1 and π_{n−i+1} := 2i if

w(2i − 1, S_{i−1}) + w(2i, T_{i−1}) ≥ w(2i, S_{i−1}) + w(2i − 1, T_{i−1}),

and sets π_i := 2i and π_{n−i+1} := 2i − 1 otherwise. We call the resulting algorithm GMLA.
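For illustration, a Python sketch of the derandomized procedure following the rule above; the matrix representation and function name are ours, and a zero diagonal is assumed. Replacing the deterministic choice by a fair coin flip gives the randomized variant of Fig. 4.

def gmla(W):
    """Sketch of Algorithm GMLA. W is an n x n symmetric list-of-lists of
    non-negative weights (zero diagonal); pi[k] is the element at position k."""
    n = len(W)
    # Sort elements in non-increasing order of w(i, V).
    order = sorted(range(n), key=lambda i: -sum(W[i]))
    pi = [None] * n
    S, T = [], []        # elements already placed at left / right positions
    for i in range(n // 2):
        a, b = order[2 * i], order[2 * i + 1]
        # Rule from the text: keep (a left, b right) if
        # w(a, S) + w(b, T) >= w(b, S) + w(a, T); otherwise swap.
        gain_keep = sum(W[a][u] for u in S) + sum(W[b][u] for u in T)
        gain_swap = sum(W[b][u] for u in S) + sum(W[a][u] for u in T)
        if gain_keep < gain_swap:
            a, b = b, a
        pi[i], pi[n - 1 - i] = a, b
        S.append(a)
        T.append(b)
    if n % 2 == 1:
        pi[n // 2] = order[-1]   # odd n: the remaining element goes to the middle
    return pi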

Theorem 4. Let m = |{{i,j}: w_{i,j} > 0}|. Then Algorithm GMLA computes a 1/3-approximation for the GMLAP and for MAX CUT WITH GIVEN SIZES OF PARTS for every p = 1,...,⌊n/2⌋ in time O(m + n log n).

4. MAXIMUM LINEAR ARRANGEMENT

Recall that the MAXIMUM LINEAR ARRANGEMENT PROBLEM is the special case of the GMLAP where x_i = i, i = 1,...,n. There are simple approximation algorithms with constant performance guarantees for this problem. The simplest one relies on the observation that the average value of |x_j − x_i| is n/3. Therefore, the expected weight of a random permutation is (n/3) ∑_{i,j} w_{π_i,π_j}, which is at least (1/3) opt.
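As an illustrative sanity check of the n/3 figure (our own remark, approximating positions by independent uniform draws from [0,n]):

\[
  \mathbb{E}\,|U - V| \;=\; \frac{1}{n^2}\int_0^n\!\!\int_0^n |u - v|\,du\,dv \;=\; \frac{n}{3},
\]

and in the discrete case the expected value of |i − j| over distinct i, j drawn uniformly from {1,...,n} is (n + 1)/3, which matches n/3 up to lower-order terms.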

A better bound can be obtained as follows. Let (S,T) be a partition which maximizes

mc = ∑_{{i,j} ∈ (S,T)} w_{i,j}.

Let π_1,...,π_{|S|} be a random permutation of S and let π_{|S|+1},...,π_n be a random permutation of T. (Alternatively, order the elements i ∈ S in decreasing order of ∑_{j∈T} w_{i,j} and order the elements j ∈ T in increasing order of ∑_{i∈S} w_{i,j}.) Let apx be the weight of the resulting solution. opt is equal to a sum of the values of a series of n cuts, each bounded by mc. Thus opt ≤ n · mc. The average 'distance' between an element in S and an element in T is n/2. Thus,

apx = (n/2) mc ≥ (1/2) opt.

We can use the 0.878-approximation algorithm for MAX CUT [7] to obtain a 0.439-approximation for our problem.

We now present a randomized algorithm with a better performance guarantee (see Fig. 5).

Lemma 5. Consider an optimal solution. The total contribution to the solution's weight which comes from pairs of elements that were placed within distance at most n^0.8 from each other is less than (1/n^0.1) opt.

Proof. In a random solution, the expected distance between any pair of elements is n/3. If the total contribution to the solution's weight which comes from pairs of elements that were placed within distance at most n^0.8 from each other were greater than (1/n^0.1) opt, then just the expected contribution of these pairs to the weight of a random solution would be at least

(n/3)/n^0.8 · (1/n^0.1) opt > opt,

a contradiction. ✷

Max_LA
input A non-negative symmetric matrix W = (w_{i,j}, i,j = 1,...,n).
returns A permutation π of V = {1,...,n}.
begin
Partition V into subsets S, T by independently assigning each i ∈ V with probability 1/2 to S or T.
Sort S in non-increasing order of W_i = ∑_{j∈T} w_{i,j} and let π_1,...,π_{|S|} be the corresponding order.
Sort T in non-decreasing order of W'_i = ∑_{j∈S} w_{i,j} and let π_{|S|+1},...,π_n be the corresponding order.
return π.
end Max_LA

Fig. 5. Algorithm Max_LA.
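A minimal Python sketch of Max_LA; the adjacency-matrix representation and the function name are our own illustrative choices.

import random

def max_la(W, seed=None):
    """Sketch of Max_LA. W is an n x n symmetric list-of-lists of non-negative
    weights; returns a permutation pi where pi[k] is the element at position k."""
    rng = random.Random(seed)
    n = len(W)
    # Random partition: each element goes to S or T independently with prob. 1/2.
    S = [i for i in range(n) if rng.random() < 0.5]
    in_S = set(S)
    T = [i for i in range(n) if i not in in_S]
    # Cross weights: from each element of S to T, and from each element of T to S.
    to_T = {i: sum(W[i][j] for j in T) for i in S}
    to_S = {i: sum(W[i][j] for j in S) for i in T}
    # S takes the first |S| positions in non-increasing order of to_T;
    # T takes the remaining positions in non-decreasing order of to_S.
    return sorted(S, key=lambda i: -to_T[i]) + sorted(T, key=lambda i: to_S[i])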


Observation 6. Let S, T be a random partition of {1,...,n}, as in Max_LA (Fig. 5). The expected contribution to opt coming from pairs of elements that belong to different parts of the partition is opt/2.

Observation 7. The permutation π computed by Max_LA maximizes the contribution to the total weight coming from pairs of elements that belong to different parts of the partition S, T, under the constraint that the elements of S are assigned to 1,...,|S| and those of T to |S| + 1,...,n.

Lemma 8. With probability that exponentially approaches 1 as a function of n, it is possible to assign S to 1,...,|S| and T to |S| + 1,...,n, so that the distance of each element from its relevant end (1 for S or n for T) differs from its distance in opt from the nearest end by at most n^0.6.

Proof. Index the elements according to their distance in opt from their nearest end, breaking ties arbitrarily. Assign S to 1,...,|S| and T to n, n − 1,...,|S| + 1 in increasing order of the indices. Consider an element with index k. Its distance from the nearest end in opt is ⌊(k − 1)/2⌋. Its distance under the above assignment depends on the number of lower-index elements that were assigned to its part (S or T). Thus, this distance is a random variable with binomial distribution B(k − 1, 1/2). Its mean is (k − 1)/2 and its standard deviation is (1/2)√(k − 1). Therefore, using the Normal approximation of the Binomial distribution, the probability of a deviation of at least n^0.6 from the mean is of order e^{−n^{1.2}/k}, which is less than e^{−n^{0.2}} since k ≤ n. The probability that such a deviation will be obtained for any of the n elements is bounded by n e^{−n^{0.2}}. ✷

Theorem 9. Algorithm Max_LA (Fig. 5) produces, with probability that approaches 1 as a function of n, a solution of weight (1 − o(1)) times the contribution to opt which comes from elements that belong to different parts of the partition (S,T).

Proof. By Lemma 8, the distance between any two points in the solution produced by Max_LA is at least their distance in the optimal solution minus O(n^0.6). By Lemma 5, the relative contribution of the pairs that are closer than n^0.8 is asymptotically 0. For elements whose distance in opt is at least n^0.8, a relative deviation of n^0.6/n^0.8 is asymptotically zero. Therefore, by Observation 7 and Lemma 8, the algorithm guarantees almost all of the weight that the optimal solution has between S and T. ✷

Theorem 10. The expected weight of the solution produced by Max_LA is asymptotically (1/2) opt, with probability that approaches 1 as a function of n.

Proof. Follows from Theorem 9 and Observation 6. ✷

5. MAX k-CUT WITH GIVEN SIZES OF PARTS

Given a graph G = (V,E) with edge weights w and integers p_1,...,p_k such that ∑ p_i = n, the MAX k-CUT WITH GIVEN SIZES OF PARTS is to compute a k-cut, that is, a partition S_1,...,S_k of V such that |S_i| = p_i, i = 1,...,k, maximizing the weight of the edges whose ends are in different parts of the partition.

A vertex v ∈ V is said to cover the weight of the edges {{u,v} ∈ E}. A subset V' ⊂ V covers the weight of the union of edges which have at least one end in it. Bar-Yehuda [5] developed an O(n^2) 2-approximation algorithm for the following problem: Given w, compute a vertex set of minimum size that covers edge weight of at least w.

One can obtain from this result, in a straightforward way, a solution to the following problem: Given p ≤ (1/2)n, find a set S' of 2p vertices that covers edge weight of at least w(p), where w(p) is the maximum edge weight that can be covered by p vertices. To achieve this goal we apply binary search over [0, w(E)], where w(E) is the total weight of E. For each test value w we apply Bar-Yehuda's algorithm, and we stop with the highest value for which the algorithm returns a set S' with at most 2p vertices. If the set contains less than 2p vertices, we extend it by adding arbitrary vertices. The complexity of this procedure is O(n^2 log w(E)).
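A Python sketch of this binary-search wrapper, written under the assumption of integer edge weights so that the search terminates exactly; partial_cover stands in as a black box for Bar-Yehuda's 2-approximation algorithm, and all names and interfaces here are illustrative, not from the paper.

def find_S_prime(vertices, weights, p, partial_cover):
    """Find a set S' of 2p vertices covering edge weight >= w(p).
    partial_cover(target) is assumed to return a vertex set of size at most
    twice the minimum needed to cover edge weight >= target; weights maps
    edges to non-negative integers."""
    lo, hi = 0, sum(weights.values())          # search range [0, w(E)]
    best = set()
    while lo <= hi:
        mid = (lo + hi) // 2
        cover = partial_cover(mid)
        if len(cover) <= 2 * p:                # feasible: remember it, go higher
            best, lo = set(cover), mid + 1
        else:                                  # too many vertices needed: go lower
            hi = mid - 1
    # Pad with arbitrary vertices if fewer than 2p were returned.
    for v in vertices:
        if len(best) >= 2 * p:
            break
        best.add(v)
    return best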

Our algorithm for the case of k = 2 (MAX CUT WITH GIVEN SIZES OF PARTS) proceeds as follows: Randomly select p vertices from S' and move them to the other side of the cut. Let the resulting set be S_A. We claim that the expected weight of the cut (S_A, V \ S_A) is a 1/2-approximation for the problem. The argument is that the weight of the edges covered by S' is an upper bound on the optimal solution value, and each of these edges will be in the cut with probability at least 1/2. The algorithm can be derandomized by applying the 'method of conditional expectations' (see [3]). However, the rounding method of Ageev and Sviridenko can also be used to obtain a 1/2-approximation in deterministic linear time, once S' is given [1].
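A sketch of this step (illustrative names only; S_prime is the 2p-vertex set obtained by the procedure above):

import random

def random_half_cut(vertices, S_prime, p, rng=random):
    """Move p randomly chosen vertices of the 2p-vertex set S_prime to the
    other side; the p vertices that remain form S_A."""
    moved = set(rng.sample(sorted(S_prime), p))
    S_A = set(S_prime) - moved
    return S_A, set(vertices) - S_A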

Ageev and Sviridenko [2] also applied their method to obtain a 1/2-approximation for the MAX k-CUT PROBLEM WITH GIVEN SIZES OF PARTS.

Our algorithm can be modified for the max k-cut problem as well: We first observe that if p_i ≤ (1/2)n for every i = 1,...,k, then each edge is in the cut with probability at least 1/2, so that a random solution has expected weight of at least half the total weight of the graph. Thus a random solution suffices to obtain a 1/2-approximation.

Assume now that p_1 > n/2. We compute, as above, sets S' and S_A with p = n − p_1. We set P_1 = V \ S_A and arbitrarily partition S_A to form P_2,...,P_k. The resulting k-cut has the property that the expected weight of the edges between P_1 and the other parts is already half the weight of the edges covered by S', which is itself an upper bound on the optimal solution. Thus the k-cut we constructed is a 1/2-approximation for the problem. Again, the algorithm can be derandomized.
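Finally, a sketch of how the k-cut is assembled in the case p_1 > n/2 (again with our own illustrative names; S_A comes from the procedure above and sizes = (p_1,...,p_k)):

def k_cut_from_SA(vertices, S_A, sizes):
    """Build a k-cut when sizes[0] > n/2: P_1 = V \\ S_A, and S_A (of size
    n - sizes[0]) is split arbitrarily into parts of sizes sizes[1:]."""
    V = list(vertices)
    P1 = [v for v in V if v not in S_A]
    parts = [P1]
    rest = sorted(S_A)                 # arbitrary but deterministic order
    idx = 0
    for size in sizes[1:]:
        parts.append(rest[idx:idx + size])
        idx += size
    return parts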

References

[1] A.A. Ageev, Personal communication.
[2] A.A. Ageev, M.I. Sviridenko, Approximation algorithms for maximum coverage and max cut with given sizes of parts, in: Proceedings of IPCO'99, Lecture Notes in Computer Science, Vol. 1610, Springer, Berlin, 1999, pp. 17–30.
[3] N. Alon, J.H. Spencer, The Probabilistic Method, John Wiley and Sons, New York, 1992.
[4] E. Arkin, R. Hassin, M.I. Sviridenko, Approximating the maximum quadratic assignment problem, Inform. Process. Lett. 77 (2001) 13–16. A preliminary version appeared in: Proc. 11th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2000), 2000, pp. 889–890.
[5] R. Bar-Yehuda, Using homogeneous weights for approximating the partial cover problem, J. Algorithms 39 (2001) 137–144. A preliminary version appeared in: Proc. 10th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 99), 1999, pp. 71–75.
[6] G. Even, J. Naor, S. Rao, B. Schieber, Divide-and-conquer approximation algorithms via spreading metrics, J. ACM 47 (2000) 585–616.
[7] M.X. Goemans, D.P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM 42 (1995) 1115–1145.
[8] J.-C. Picard, H.D. Ratliff, A cut approach to the rectilinear distance facility location problem, Oper. Res. 26 (1978) 422–433.
[9] S. Rao, A.W. Richa, New approximation techniques for some ordering problems, in: Proc. 9th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 98), 1998, pp. 211–218.
[10] A. Tamir, Obnoxious facility location on graphs, SIAM J. Discrete Math. 4 (1991) 550–567.