Applied Algorithm Design, Lecture 5
Pietro Michiardi
Eurecom
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86
Approximation Algorithms
Introduction
In the first lectures, we discussed NP-completeness and the idea of computational intractability in general. How should we design algorithms for problems where polynomial time is probably an unattainable goal?

Approximation algorithms:
- Run in polynomial time
- Find solutions that are guaranteed to be close to optimal
Introduction
Since we will not be seeking optimal solutions, polynomial running time becomes feasible. We are interested in proving that our algorithms find solutions that are guaranteed to be close to the optimum.

But how can we compare to, and reason about, an optimal solution that is computationally very hard to find?
Approximation techniques
1. Greedy algorithms: simple and fast algorithms that require finding a greedy rule leading to solutions provably close to optimal
2. Pricing method: motivated by an economic perspective, these algorithms consider a price one has to pay to enforce each constraint of the problem. A.k.a. the primal-dual technique
3. Linear programming and rounding: these algorithms exploit the relationship between the computational feasibility of linear programming and the expressive power of integer programming
4. Dynamic programming and rounding: a complex technique that achieves extremely good approximations
The Greedy Approach
Greedy Algorithms and Bounds on the Optimum
To illustrate this first technique we consider a fundamental problem: Load Balancing.

Load Balancing:
- This is a problem with many facets
- A simple instance of this problem arises when multiple servers need to process a set of jobs or requests
- We look at the case in which all servers are identical and each can be used to serve any of the requests

This problem is useful for learning how to compare an approximate solution with an optimum solution that we cannot compute efficiently.
Load Balancing: The Problem
Definition:
- Given a set M = {M1, M2, ..., Mm} of m machines
- Given a set J = {1, 2, ..., n} of n jobs, with job j having a processing time tj

The goal is to assign each job to one of the machines so that the loads placed on all machines are as "balanced" as possible
Load Balancing: Some Useful Definitions
Assignment:
Let A(i) denote the set of jobs assigned to machine Mi

Load:
Under the assignment defined above, machine Mi needs to work for a total time of:

    Ti = Σ_{j ∈ A(i)} tj

Makespan:
We denote the maximum load on any machine as a quantity known as the makespan:

    T = max_i Ti
Our Goal, Revisited
Our goal is thus to minimize the makespan T.

Although we will not prove it, the scheduling problem of finding an assignment of minimum makespan is NP-hard.
Designing the algorithm
Consider a simple greedy algorithm, which makes one pass through the jobs in any order. When it comes to job j, it assigns j to the machine whose load is smallest so far.

Algorithm 1: Greedy-Balance
    Start with no jobs assigned
    Set Ti = 0 and A(i) = ∅ for all machines Mi
    for j = 1 ... n do
        i ← argmin_k Tk
        A(i) ← A(i) ∪ {j}
        Ti ← Ti + tj
    end

Question: running time? Implementation? (With a priority queue keyed on machine loads, the loop runs in O(n log m).)
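The loop above translates directly into code. A minimal Python sketch (the function name and return format are mine, not from the slides), using a heap so that each argmin_k Tk costs O(log m):

```python
import heapq

def greedy_balance(times, m):
    """Greedy-Balance: assign each job, in the given order, to the
    machine whose current load is smallest (O(n log m) with a heap)."""
    heap = [(0, i) for i in range(m)]      # (load, machine) pairs
    heapq.heapify(heap)
    loads = [0] * m
    assignment = [[] for _ in range(m)]    # A(i): jobs on machine i
    for j, t in enumerate(times):
        load, i = heapq.heappop(heap)      # i = argmin_k T_k
        loads[i] = load + t                # T_i <- T_i + t_j
        assignment[i].append(j)
        heapq.heappush(heap, (loads[i], i))
    return loads, assignment

loads, A = greedy_balance([2, 3, 4, 6, 2, 2], 3)
print(max(loads))   # makespan T = 8
```

On this small instance the greedy makespan is 8, while the optimum is 7 (e.g. {6}, {4, 3}, {2, 2, 2}), comfortably within the factor of 2 analyzed next.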
Analyzing the algorithm
Theorem [Graham, 1966]
The greedy algorithm is a 2-approximation.

- This was the first worst-case analysis of an approximation algorithm
- We need to compare the resulting solution with the optimal makespan T*, which is unknown
- We need a lower bound on the optimum: a quantity such that, no matter how good the optimum is, it cannot be less than this bound
Analyzing the algorithm
Lemma 1
The optimal makespan satisfies T* ≥ max_j tj

Proof.
Some machine must process the most time-consuming job
Lemma 2
The optimal makespan satisfies T* ≥ (1/m) Σj tj

Proof.
- The total processing time is Σj tj
- One of the m machines must do at least a 1/m fraction of the total work
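Both lower bounds are cheap to evaluate, which is what makes them usable as a sanity check against any schedule; a one-line sketch (the function name is hypothetical):

```python
def makespan_lower_bound(times, m):
    """Combine Lemma 1 (T* >= max_j t_j) and Lemma 2 (T* >= (1/m) sum_j t_j)."""
    return max(max(times), sum(times) / m)

# One giant job dominates on the left; total volume dominates on the right:
print(makespan_lower_bound([9, 1, 1], 2))     # Lemma 1 binds: 9
print(makespan_lower_bound([4, 4, 4, 4], 2))  # Lemma 2 binds: 8.0
```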
Analyzing the algorithm
Theorem
The greedy algorithm is a 2-approximation, that is, it produces an assignment of jobs to machines with makespan T ≤ 2T*

Proof.
- Consider the load Ti of the "bottleneck" machine Mi (the machine with maximum load)
- Let j be the last job scheduled on machine Mi
- When job j was assigned to machine Mi, Mi had the smallest load; its load before the assignment was Ti − tj
- ⇒ Ti − tj ≤ Tk for all k ∈ {1, ..., m}
Analyzing the algorithm
Proof.
Sum the inequalities over all k and divide by m:

    Ti − tj ≤ (1/m) Σk Tk = (1/m) Σj tj ≤ T*    (from Lemma 2)

Now:

    Ti = (Ti − tj) + tj ≤ T* + T* = 2T*    (tj ≤ T* from Lemma 1)
Example
Suppose we have m machines and n = m(m − 1) + 1 jobs:
- The first m(m − 1) = n − 1 jobs each require tj = 1
- The last job requires tn = m

What does our greedy algorithm do?
- It evenly balances the first n − 1 jobs, giving each machine a load of m − 1
- It then adds the giant job n to one of them

→ The resulting makespan is T = (m − 1) + m = 2m − 1
Example: greedy solution
Q. Is our analysis tight? A. Essentially yes, as this example shows.

[Figure: the instance above with m = 10; list-scheduling makespan = 19, with machines 2–10 idle while the giant job finishes on machine 1.]
Example
What would the optimal solution look like?
- It assigns the giant job n to one machine, say M1
- It spreads the remaining jobs evenly over the other m − 1 machines

→ The resulting makespan is T* = m

As a consequence, the ratio between the greedy and the optimal solution is:

    (2m − 1)/m = 2 − 1/m → 2 when m is large
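The arithmetic above is easy to verify numerically. A self-contained sketch (the helper name is mine) that replays the bad instance:

```python
import heapq

def greedy_balance_makespan(times, m):
    """Makespan of Greedy-Balance: each job goes to the least-loaded machine."""
    heap = [0] * m                         # machine loads; all-zero list is a valid heap
    for t in times:
        heapq.heappush(heap, heapq.heappop(heap) + t)
    return max(heap)

m = 10
times = [1] * (m * (m - 1)) + [m]          # n - 1 unit jobs, then one giant job
T = greedy_balance_makespan(times, m)
print(T)        # greedy makespan: 2m - 1 = 19
print(T / m)    # ratio vs. optimal T* = m: 2 - 1/m = 1.9
```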
Example: optimal solution
[Figure: the same instance with m = 10; the optimal solution gives the giant job its own machine and spreads the unit jobs over the rest: optimal makespan = 10.]
An improved approximation algorithm
Can we do better, i.e., guarantee that we are always within a factor strictly less than 2 of the optimum? Let's think about the previous example:
- We spread everything evenly
- A last giant job arrived and we had to compromise

Intuitively, it looks like it would help to arrange the largest jobs nicely first. Smaller jobs can be arranged later; in any case they do not hurt much.
An improved approximation algorithm
Algorithm 2: Sorted-Balance
    Start with no jobs assigned
    Set Ti = 0 and A(i) = ∅ for all machines Mi
    Sort the jobs in decreasing order of processing time tj
    Assume that t1 ≥ t2 ≥ ... ≥ tn
    for j = 1 ... n do
        i ← argmin_k Tk
        A(i) ← A(i) ∪ {j}
        Ti ← Ti + tj
    end
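Sorted-Balance only adds a sort in front of the same greedy loop; a sketch (the function name is mine), replayed on the instance that was bad for plain greedy:

```python
import heapq

def sorted_balance_makespan(times, m):
    """Sorted-Balance (LPT rule): sort jobs in decreasing t_j,
    then run the same greedy least-loaded-machine loop."""
    heap = [0] * m                          # machine loads, all start at zero
    for t in sorted(times, reverse=True):   # t_1 >= t_2 >= ... >= t_n
        heapq.heappush(heap, heapq.heappop(heap) + t)
    return max(heap)

# On the bad instance for plain greedy, sorting fixes everything:
m = 10
times = [1] * (m * (m - 1)) + [m]
print(sorted_balance_makespan(times, m))    # 10: the giant job is placed first
```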
Analyzing the improved algorithm
Observation:
If we have at most m jobs, then everything gets arranged nicely, one job per machine, and the greedy solution is optimal

Lemma 3:
If we have more than m jobs, then T* ≥ 2 t_{m+1}

Proof.
- Consider the first m + 1 jobs t1, ..., t_{m+1}
- Since the ti's are in descending order, each takes at least t_{m+1} time
- There are m + 1 jobs and m machines, so by the pigeonhole principle, at least one machine gets two of them
Analyzing the improved algorithm

Theorem
The Sorted-Balance algorithm is a 3/2-approximation, that is, it produces an assignment of jobs to machines with makespan T ≤ (3/2) T*

Proof.
- By the observation above, we can assume there are more than m jobs, and that the bottleneck machine Mi has at least two jobs
- Let tj be the last job assigned to that machine
- Note that j ≥ m + 1, since the algorithm assigns the first m jobs to m distinct machines; hence tj ≤ t_{m+1}

    Ti = (Ti − tj) + tj ≤ T* + (1/2) T* = (3/2) T*

since (Ti − tj) ≤ T* as in the previous analysis, and tj ≤ t_{m+1} ≤ (1/2) T* by Lemma 3
The center selection problem
The Center Selection Problem
- As usual, let's start informally
- The center selection problem can also be related to the general task of allocating work across multiple servers
- The issue here, however, is where it is best to place the servers
- We keep the formulation simple, and do not also incorporate the notion of load balancing into the problem
- As we will see, a simple greedy algorithm can be arbitrarily far from the optimal solution, while a simple modification leads to results that are always near optimal
The Center Selection Problem
The problem
- We have a set S = {s1, s2, ..., sn} of n sites to serve
- We have a set C = {c1, c2, ..., ck} of k centers to place

Select a placement for the k centers C so that the maximum distance from a site to the nearest center is minimized
The Center Selection Problem
Definition: distance
- We consider instances of the problem where the sites are points in the plane
- We define the distance as the Euclidean distance between points
- Any point in the plane is a potential location for a center

Note that the algorithm we develop can be applied to broader notions of distance, including:
- latency
- number of hops
- cost
The Center Selection Problem
Metric Space:
We allow any distance function that satisfies the following properties:
- Identity: dist(s, s) = 0 for all s ∈ S
- Symmetry: dist(s, z) = dist(z, s) for all s, z ∈ S
- Triangle inequality: dist(s, z) + dist(z, h) ≥ dist(s, h)
The Center Selection Problem
Let's put down some assumptions.

Service points are served by the closest center:

    dist(si, C) = min_{c ∈ C} dist(si, c)

Covering radius:

    r(C) = max_i dist(si, C)

C forms a cover if dist(si, C) ≤ r(C) for all si ∈ S
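The definitions above translate directly into code; a sketch with hypothetical helper names, using Euclidean distance in the plane:

```python
import math

def dist(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dist_to_centers(s, C):
    """dist(s, C): distance from site s to its closest center."""
    return min(dist(s, c) for c in C)

def covering_radius(sites, C):
    """r(C) = max_i dist(s_i, C)."""
    return max(dist_to_centers(s, C) for s in sites)

sites = [(0, 0), (4, 0), (0, 3)]
print(covering_radius(sites, [(0, 0)]))   # farthest site is (4, 0): radius 4.0
```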
Example
[Figure: an instance of the center selection problem; n sites in the plane and k = 4 centers, with covering radius r(C).]

(The embedded slide scan here also carried a note on the LPT rule: the 3/2 analysis above is not tight. Graham [1969] showed, via a more sophisticated analysis of the same algorithm, that the LPT rule is a 4/3-approximation, and that bound is essentially tight: m machines, n = 2m + 1 jobs, two jobs of each of the lengths m+1, m+2, ..., 2m−1, and one job of length m.)
Designing the Algorithm
Let's start with a simple greedy rule:
- Consider an algorithm that selects centers one by one in a myopic fashion, without considering what happens to the other centers
- Put the first center at the best possible location (for a single center)
- Keep adding centers so as to reduce the covering radius by as much as possible each time
- Stop once all k centers have been placed
Example 1
An example where bad things can happen:
- Consider only two sites s, z and k = 2
- Let d = dist(s, z)
- The algorithm puts the first center exactly half-way: r({c1}) = d/2
- Now we are stuck: no matter what we do with the second center, the covering radius will always be d/2

Where is the optimal location to place the two centers?
Example 2
Here’s another similar example, with clusters of sites
[Figure: two clusters of sites; the myopic greedy rule wastes its first center between the clusters and ends up arbitrarily bad for k = 2.]
Designing the Algorithm
Suppose for a minute that someone told us the optimal covering radius r:
- That is, suppose we know there is a set of k centers C* with r(C*) ≤ r
- Would this information help?
- Our job would then be to find some set of k centers whose covering radius is not much more than r
- It turns out that finding such a set with r(C) ≤ 2r is easy
Designing the Algorithm
The idea is the following:
- Consider any site s ∈ S
- There must be a center c* ∈ C* that covers site s, at distance at most r
- Take s as a center in our solution instead of c* (we don't know where c* is)
- We would like our center to cover all the sites that c* covers in the unknown solution C*
- This is accomplished by expanding the radius from r to 2r
  - Note that this works because of the triangle inequality
  - All the sites that were at distance at most r from c* are at distance at most 2r from s
Designing the Algorithm
Algorithm 3: Greedy-Center placement
    S′ is the set of sites that still need to be covered
    Initialize S′ = S
    Let C = ∅
    while S′ ≠ ∅ do
        Select any site s ∈ S′ and add s to C
        Delete all sites from S′ that are at distance at most 2r from s
    end
    if |C| ≤ k then
        Return C as the selected set of centers
    else
        Claim that there is no set of k centers with covering radius at most r
    end
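A sketch of Algorithm 3 (the function name and the None-on-failure convention are mine):

```python
import math

def greedy_with_radius(sites, k, r):
    """Greedy-Center placement for a guessed radius r: pick any uncovered
    site as a center, discard everything within 2r of it, and repeat.
    Returns at most k centers with covering radius <= 2r, or None to
    signal that no k centers can achieve covering radius <= r."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    remaining = list(sites)          # S': sites still to be covered
    C = []
    while remaining:
        s = remaining[0]             # select any site s in S'
        C.append(s)
        remaining = [z for z in remaining if dist(z, s) > 2 * r]
    return C if len(C) <= k else None

sites = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(greedy_with_radius(sites, 2, 1.0))   # [(0, 0), (10, 0)]: a feasible cover
print(greedy_with_radius(sites, 2, 0.4))   # None: the optimal radius here is 0.5
```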
Designing the Algorithm
Proposition
Any set of centers C returned by the greedy algorithm above has covering radius r(C) ≤ 2r

Proposition
Suppose the greedy algorithm above selects more than k centers. Then, for any set C* of size at most k, the covering radius is r(C*) > r

Question
What about designing an algorithm for the center selection problem without knowing the optimal covering radius in advance?
Discussion
Methodology 1
- Assume that you know the value achieved by an optimal solution
- Design your algorithm under this assumption, and then convert it into one that achieves a comparable performance guarantee
- Basically, try out a range of "guesses" as to what the optimal solution value might be
- Over the course of the algorithm, this sequence of guesses gets more and more accurate, until an approximate solution is reached
Discussion
Methodology 1: application to the center selection problem

We know that the optimal radius is larger than 0 and at most rmax = max dist(si, sj) over all si, sj ∈ S. Let the initial guess be r = rmax/2. Two things can happen:
1. The greedy algorithm above returns a set of k centers with covering radius at most 2r
2. The greedy algorithm above terminates with a negative answer
- In case 1) we can lower our guess of the optimal radius
- In case 2) we have to raise our guess

→ We can do "binary search" on the radius, and stop when our estimate gets close enough
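The binary-search methodology can be sketched end to end; the function names and the stopping tolerance eps are assumptions, and greedy_with_radius is the Algorithm 3 sketch repeated here so the block stands alone:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_with_radius(sites, k, r):
    """Algorithm 3 sketch: <= k centers with radius <= 2r, else None."""
    remaining, C = list(sites), []
    while remaining:
        s = remaining[0]
        C.append(s)
        remaining = [z for z in remaining if dist(z, s) > 2 * r]
    return C if len(C) <= k else None

def center_select_binary_search(sites, k, eps=1e-3):
    """Binary-search the guessed radius r; the returned centers have
    covering radius at most roughly twice the optimal radius."""
    lo = 0.0
    hi = max(dist(s, z) for s in sites for z in sites) / 2   # r = rmax / 2
    best = greedy_with_radius(sites, k, hi)   # always succeeds at r = rmax/2
    while hi - lo > eps:
        mid = (lo + hi) / 2
        C = greedy_with_radius(sites, k, mid)
        if C is not None:
            best, hi = C, mid    # case 1: guess feasible, lower it
        else:
            lo = mid             # case 2: guess too small, raise it
    return best

sites = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(center_select_binary_search(sites, 2))   # [(0, 0), (10, 0)]
```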
Discussion
We saw a general technique that can be used when dropping the assumption of a known optimal solution.

Next, we look at a simple greedy algorithm that approximates the optimal solution well, without requiring it to be known in advance and without using the previous general methodology.
Designing the Algorithm
A greedy algorithm that works:
Repeatedly choose the next center to be the site farthest from any existing center

Algorithm 4: Greedy-Farthest
    Assume k ≤ |S| (else define C = S)
    Select any site s and let C = {s}
    while |C| < k do
        Select a site si ∈ S that maximizes dist(si, C)
        Add si to C
    end
    Return C as the selected set of centers
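A sketch of Algorithm 4 (farthest-first traversal; the function name is mine):

```python
import math

def greedy_farthest(sites, k):
    """Farthest-first traversal: repeatedly pick the site farthest
    from the centers chosen so far (a 2-approximation)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    if k >= len(sites):
        return list(sites)
    C = [sites[0]]                       # any site starts the set
    while len(C) < k:
        s = max(sites, key=lambda z: min(dist(z, c) for c in C))
        C.append(s)                      # farthest site from current centers
    return C

print(greedy_farthest([(0, 0), (1, 0), (10, 0), (11, 0)], 2))  # [(0, 0), (11, 0)]
```

On this instance the greedy radius is 1.0, while centers placed at arbitrary points (0.5, 0) and (10.5, 0) would achieve 0.5, exactly the factor of 2 in the theorem below.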
Analyzing the Algorithm
Observation:
Upon termination, all centers in C are pairwise at least r(C) apart (by construction of the algorithm)

Theorem:
Let C* be an optimal set of centers. Then the covering radius achieved by the greedy algorithm satisfies r(C) ≤ 2r(C*)
Analyzing the Algorithm
Proof.
- By contradiction, assume r(C*) < (1/2) r(C)
- For each center ci ∈ C, consider a ball of radius (1/2) r(C) around it
- There is exactly one ci* ∈ C* in each such ball; let ci be the center paired with ci*
- Consider any site s and its closest center ci* in C*:

    dist(s, C) ≤ dist(s, ci) ≤ dist(s, ci*) + dist(ci*, ci) ≤ 2r(C*)

- The second inequality is the triangle inequality
- Both terms are ≤ r(C*): ci* is the optimal center closest to s, and ci* covers the site ci in the optimal solution
- Thus r(C) ≤ 2r(C*) < r(C), a contradiction
Analyzing the Algorithm
The proof in images
[Figure: around each greedy center ci there is a ball of radius (1/2) r(C) containing exactly one optimal center ci*; any site s reaches C within dist(s, ci*) + dist(ci*, ci) ≤ 2r(C*).]
Analyzing the Algorithm
Theorem
The greedy algorithm is a 2-approximation for the center selection problem

Remark
The greedy algorithm always places centers at sites, but is still within a factor of 2 of the best solution that is allowed to place centers anywhere (e.g., at arbitrary points in the plane)

Question
Is there hope of a 3/2-approximation? 4/3?

Theorem
Unless P = NP, there is no α-approximation for the center selection problem for any α < 2
The pricing method
The Pricing Method
We now turn to our second technique for designing approximation algorithms: the pricing method. We already outlined the Vertex Cover Problem, and its related parent, the Set Cover Problem.

Note:
We begin the section with a general discussion on how to use reductions in the design of approximation algorithms. What are reductions?
The Vertex Cover Problem
The Vertex Cover Problem
- You are given a graph G = (V, E)
- A set S ⊆ V is a vertex cover if each edge e ∈ E has at least one end in S

The Weighted Vertex Cover Problem
- You are given a graph G = (V, E)
- Each vertex i ∈ V has a weight wi ≥ 0
- The weight of a set S of vertices is w(S) = Σ_{i ∈ S} wi
- A set S ⊆ V is a vertex cover if each edge e ∈ E has at least one end in S
- Find a vertex cover S of minimum weight w(S)
Discussion
Unweighted Vertex Cover
When all weights in the weighted problem are equal to 1, deciding whether there is a vertex cover of weight at most k is the standard decision version of Vertex Cover

Vertex Cover is easily reducible to Set Cover
There is an approximation algorithm for Set Cover (not seen in class)

→ What does this imply about the approximability of Vertex Cover?
Discussion
Polynomial-time Reductions
We will outline some of the subtle ways in which approximation results interact with this technique for relating hard problems to one another.

- Consider the unweighted vertex cover problem: we look for a vertex cover of minimum size
- We can show that Set Cover is NP-complete using a reduction from the decision version of unweighted Vertex Cover:

    Vertex Cover ≤P Set Cover
Discussion
Polynomial-time Reduction: Vertex Cover ≤P Set Cover

If we had a polynomial-time algorithm that solves the Set Cover Problem, then we could use it to solve the Vertex Cover Problem in polynomial time.

→ Since Vertex Cover is NP-complete, no polynomial-time algorithm for Set Cover can exist either (unless P = NP)
Discussion
Now, we know that a polynomial-time approximation algorithm for Set Cover exists.

Does this imply that we can use it to formulate an approximation algorithm for Vertex Cover?

Proposition:
One can use the Set Cover approximation algorithm to give an approximation algorithm for the weighted Vertex Cover Problem
Discussion
Cautionary example:
It is possible to use the Independent Set Problem to prove that the Vertex Cover Problem is NP-complete:

    Independent Set ≤P Vertex Cover

If we had a polynomial-time algorithm that solves the Vertex Cover Problem, then we could use it to solve the Independent Set Problem in polynomial time.

Can we use this reduction to turn an approximation algorithm for the minimum-size vertex cover into a comparably good approximation for the maximum-size independent set?

→ NO!
Example of Vertex Cover Problem
Recall: the weighted vertex cover problem. Given a graph G with vertex weights, find a vertex cover of minimum weight.

[Figure: a four-node example with vertex weights 4, 9, 2, 2; one cover has weight 2 + 2 + 4 = 8, another has weight 9.]
The Pricing Method
- The pricing method is also known as the primal-dual method
- It is motivated by an economic perspective
- For the case of the Vertex Cover problem:
  - We will think of the weights on the nodes as costs
  - We will think of each edge as having to pay for its "share" of the cost of the vertex cover we find
- An edge is seen as an independent "agent" who is willing to "pay" something to the node that covers it
- Our algorithm will find a vertex cover S and determine prices pe ≥ 0 for each edge e ∈ E, so that if each edge pays pe, the total approximately covers the cost of S
The Pricing Method
Pricing method: each edge must be covered by some vertex. Edge e = (i, j) pays a price pe ≥ 0 to use vertices i and j.

Fairness: the edges incident to vertex i should pay at most wi in total, that is:

    Σ_{e=(i,j)} pe ≤ wi
The Pricing Method

Lemma ("The Fairness Lemma"):
For any vertex cover S and any fair prices pe, we have:

∑_{e∈E} pe ≤ w(S)

Proof.
Since each edge e ∈ E is covered by at least one node in S:

∑_{e∈E} pe ≤ ∑_{i∈S} ∑_{e=(i,j)} pe

Now, if we sum the fairness inequalities for each node in S, we have:

∑_{i∈S} ∑_{e=(i,j)} pe ≤ ∑_{i∈S} wi = w(S)

→ We have our claim: ∑_{e∈E} pe ≤ w(S)
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 58 / 86
The Approximation Algorithm
Definition: tightness
A node i is tight (or "paid for") if ∑_{e=(i,j)} pe = wi
Algorithm 5: Vertex-Cover Approx (G,w)
Set pe = 0 ∀e ∈ E
while ∃ e = (i, j) such that neither i nor j is tight do
    Select such an edge e
    Increase pe as much as possible without violating fairness (i.e., until i or j becomes tight)
end
S ← set of all tight nodes
return S
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 59 / 86
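As a concrete illustration, Algorithm 5 can be rendered as a short runnable sketch (a minimal Python rendering, not part of the original slides; the example graph and weights are hypothetical). A single pass over the edges is one valid way to implement the while loop, since after an edge is processed at least one of its endpoints is tight:

```python
def vertex_cover_pricing(n, edges, w):
    """Pricing (primal-dual) method for weighted Vertex Cover.

    n: number of vertices; edges: list of pairs (i, j); w: vertex weights.
    Returns (S, price), where S is the set of tight nodes.
    """
    price = {e: 0.0 for e in edges}
    paid = [0.0] * n  # total price currently paid to node i

    def tight(i):
        return paid[i] >= w[i] - 1e-9

    for (i, j) in edges:
        if not tight(i) and not tight(j):
            # Raise p_e as much as possible without violating fairness:
            # stop exactly when i or j becomes tight.
            inc = min(w[i] - paid[i], w[j] - paid[j])
            price[(i, j)] += inc
            paid[i] += inc
            paid[j] += inc
    return {i for i in range(n) if tight(i)}, price

# Hypothetical example graph: a triangle {0, 1, 2} plus a pendant edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
w = [2.0, 3.0, 4.0, 1.0]
S, price = vertex_cover_pricing(4, edges, w)
```

On this instance the sketch returns S = {0, 1, 3} with weight 6, which happens to equal the optimum; the theorem below guarantees it is never more than twice the optimum.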
The Approximation Algorithm

[Figure 11.8: a sample execution of the pricing method, showing the weight of each vertex and the price paid by each edge (e.g., the price of edge a-b).]
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 60 / 86
Analyzing the Algorithm
TheoremThe Pricing method for the weighted Vertex Cover Problem is a 2-approximation
Proof.
The algorithm terminates, since at least one new node becomes tight after each iteration of the while loop
Let S = set of all tight nodes upon termination of the algorithm
S is a vertex cover:

I If some edge e = (i, j) were uncovered, then neither i nor j would be tight
I But then the while loop would not have terminated
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 61 / 86
Analyzing the Algorithm
Proof.
Let S∗ be an optimal vertex cover
We show that w(S) ≤ 2w(S∗). Indeed:

I since all nodes in S are tight:

w(S) = ∑_{i∈S} wi = ∑_{i∈S} ∑_{e=(i,j)} pe

I since S ⊆ V and pe ≥ 0:

∑_{i∈S} ∑_{e=(i,j)} pe ≤ ∑_{i∈V} ∑_{e=(i,j)} pe

I since each edge is counted twice (once for each endpoint):

∑_{i∈V} ∑_{e=(i,j)} pe = 2 ∑_{e∈E} pe
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 62 / 86
Analyzing the Algorithm
Proof.
Since each edge is counted twice:

∑_{i∈V} ∑_{e=(i,j)} pe = 2 ∑_{e∈E} pe

and, from the Fairness Lemma:

2 ∑_{e∈E} pe ≤ 2w(S∗)

Chaining the inequalities yields w(S) ≤ 2w(S∗)
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 63 / 86
Linear Programming and Rounding
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 64 / 86
Linear Programming and Rounding
We now look at a third technique used to design approximation algorithms
This method derives from operations research: linear programming (LP)
Linear programming is the subject of entire courses; here we'll just give a crash introduction to the subject
Our goal is to show how LP can be used to approximate NP-hard optimization problems
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 65 / 86
LP as a General Technique
Recall, from linear algebra, the problem of solving a system of equations
Using matrix-vector notation, we have a vector x of unknown real numbers, a given matrix A, and a given vector b
The goal is to solve:

Ax = b

Gaussian elimination is a well-known efficient algorithm for this problem
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 66 / 86
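For instance (a NumPy-based sketch with a hypothetical 2×2 system; `np.linalg.solve` performs an LU factorization, i.e., Gaussian elimination with partial pivoting, under the hood):

```python
import numpy as np

# Solve the system  2*x1 + 1*x2 = 5,  1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)  # Gaussian elimination (LU with partial pivoting)
# x is (1, 3): check 2*1 + 3 = 5 and 1 + 3*3 = 10
```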
LP as a General Technique
The basic LP problem can be viewed as a more complex version of this, with inequalities in place of equations
The goal is to determine a vector x that satisfies:

Ax ≥ b

Each coordinate of the vector Ax should be greater than or equal to the corresponding coordinate of the vector b
Such systems of inequalities define regions in space
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 67 / 86
LP as a General Technique
Example
Suppose x = (x1, x2)
Our system of inequalities is:

x1 ≥ 0 , x2 ≥ 0
x1 + 2x2 ≥ 6
2x1 + x2 ≥ 6
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 68 / 86
LP as a General Technique
The example tells us that the set of solutions lies in the region of the plane shown below
[Figure: the LP feasible region in two dimensions, bounded by the lines x1 = 0, x2 = 0, x1 + 2x2 = 6, and 2x1 + x2 = 6.]
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 69 / 86
LP as a General Technique
Given a region defined by Ax ≥ b, LP seeks to minimize a linear combination of the coordinates of x, over all x belonging to the region defined by the set of inequalities
Such a linear combination, called the objective function, can be written as c^t x, where c is a vector of coefficients, and c^t x denotes the inner product of the two vectors

LP in Standard Form
Given an m × n matrix A, and vectors b ∈ ℝ^m and c ∈ ℝ^n, find a vector x ∈ ℝ^n to solve the following optimization problem:

min c^t x
s.t. x ≥ 0
     Ax ≥ b
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 70 / 86
LP as a General Technique
Example: continued...
Assume we have c = (1.5, 1)
The objective function is 1.5x1 + x2
This function should be minimized over the region defined by Ax ≥ b
The solution is to choose the point x = (2, 2), where the two slanted lines cross, which yields a value of c^t x = 5
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 71 / 86
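The example can be checked numerically with an off-the-shelf LP solver (a sketch assuming SciPy is available; since `linprog` works with ≤-constraints, the ≥-constraints x1 + 2x2 ≥ 6 and 2x1 + x2 ≥ 6 from the feasible-region figure are negated):

```python
from scipy.optimize import linprog

c = [1.5, 1.0]               # objective: minimize 1.5*x1 + x2
A_ub = [[-1.0, -2.0],        # -(x1 + 2*x2) <= -6
        [-2.0, -1.0]]        # -(2*x1 + x2) <= -6
b_ub = [-6.0, -6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
# optimum at x = (2, 2), objective value 5
```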
LP as a General Technique
LP as a Decision Problem
Given a matrix A, vectors b and c, and a bound γ, does there exist x such that x ≥ 0, Ax ≥ b, and c^t x ≤ γ?
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 72 / 86
Computational complexity of LP
The decision version of LP is in NP
Historically, several methods to solve such problems have been developed:

I Interior-point methods: practical poly-time algorithms
I The Simplex method [Dantzig 1947]: a practical method that competes with poly-time algorithms, despite its exponential worst-case running time
I The Ellipsoid method [Khachian 1979]: the first poly-time algorithm for LP

How can linear programming help us when we want to solve combinatorial problems such as Vertex Cover?
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 73 / 86
Vertex Cover as an Integer Program
The Weighted Vertex Cover Problem
You are given a graph G = (V, E)
Each i ∈ V has a weight wi ≥ 0
The weight of a set S of vertices is denoted w(S) = ∑_{i∈S} wi
A set S ⊆ V is a vertex cover if each edge e ∈ E has at least one end in S
Find a vertex cover S of minimum weight w(S)
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 74 / 86
Vertex Cover as an Integer Program
We now formulate a linear program that is in close correspondence with the Vertex Cover problem
LP is based on the use of vectors of variables
We use a decision variable xi for each node i ∈ V to model the choice of whether to include node i in the vertex cover: xi = 0 ⇒ i ∉ S and xi = 1 ⇒ i ∈ S
We can now create an n-dimensional vector x of decision variables
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 75 / 86
Vertex Cover as an Integer Program
How do we proceed now?
We use linear inequalities to encode the requirement that the selected nodes form a vertex cover
We use the objective function to encode the goal of minimizing the total weight
Each edge (i, j) ∈ E must have at least one end in the vertex cover ⇒ xi + xj ≥ 1 ("either one end, the other, or both ends in the cover are OK")
We write the set of node weights as an n-dimensional vector w, and seek to minimize w^t x
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 76 / 86
Vertex Cover as an Integer Program
(VC.IP)

min ∑_{i∈V} wi xi
s.t. xi + xj ≥ 1   ∀(i, j) ∈ E
     xi ∈ {0, 1}   ∀i ∈ V
Proposition:
S is a vertex cover in G if and only if the vector x, defined by xi = 1 for i ∈ S and xi = 0 for i ∉ S, satisfies the constraints in (VC.IP)
Furthermore, we have w(S) = w^t x
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 77 / 86
Vertex Cover as an Integer Program
We can now put this system into the matrix form discussed before
Define a matrix A whose columns correspond to the nodes in V and whose rows correspond to the edges in E: A[e, i] = 1 if node i is an end of edge e, and A[e, i] = 0 otherwise (each row has exactly two non-zero entries)
The system of inequalities can then be rewritten as:

Ax ≥ 1
0 ≤ x ≤ 1
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 78 / 86
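The construction of A can be sketched directly (a hypothetical helper, using NumPy; the triangle graph below is just an illustration):

```python
import numpy as np

def edge_incidence(n, edges):
    """Rows = edges, columns = nodes; A[e, i] = 1 iff node i is an end of edge e."""
    A = np.zeros((len(edges), n))
    for row, (i, j) in enumerate(edges):
        A[row, i] = A[row, j] = 1.0
    return A

# Triangle on 3 nodes
A = edge_incidence(3, [(0, 1), (0, 2), (1, 2)])
x = np.array([1.0, 1.0, 0.0])  # indicator vector of the cover S = {0, 1}
```

With this A, the vertex-cover constraint is exactly the component-wise inequality Ax ≥ 1.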
Vertex Cover as an Integer Program
Keep in mind that we have crucially required that all coordinates in the solution be either 0 or 1

→ This is an instance of an Integer Program
In linear programs, by contrast, the coordinates can be arbitrary real numbers

Integer Programming
Integer Programming (IP) is considerably harder than LP
Our discussion really constitutes a reduction from Vertex Cover to the decision version of IP:

Vertex Cover ≤P Integer Programming
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 79 / 86
Using Linear Programming for Vertex Cover
Trying to solve the IP problem (VC.IP) optimally is clearly not the right way to go, as this is NP-hard
We thus exploit the fact that LP is not as hard as IP

LP version of (VC.IP)
We drop the requirement that xi ∈ {0, 1} and instead allow xi ∈ [0, 1]
This gives us an instance of a problem we call (VC.LP), which we can solve in polynomial time
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 80 / 86
Using Linear Programming for Vertex Cover
(VC.LP)
Find a set of real values {x∗i} ⊆ [0, 1]
Subject to x∗i + x∗j ≥ 1 ∀e = (i, j) ∈ E
The goal is to minimize ∑_i wi x∗i

Let x∗ denote the solution vector
Let wLP = w^t x∗ denote its value
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 81 / 86
Using Linear Programming for Vertex Cover
Proposition
Let S∗ denote a vertex cover of minimum weight. Then wLP ≤ w(S∗)

Proof.
Vertex covers of G correspond to the integer solutions of (VC.IP)
So the minimum of (min w^t x : 0 ≤ x ≤ 1, Ax ≥ 1) over all integer vectors x is exactly the minimum-weight vertex cover
To get the minimum of the linear program (VC.LP), we allow x to take arbitrary real values, and so the minimum of (VC.LP) is no larger than that of (VC.IP)
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 82 / 86
Using Linear Programming for Vertex Cover
Note:
The previous proposition provides one of the crucial ingredients we need for an approximation algorithm: a good lower bound on the optimum, in the form of the efficiently computable quantity wLP

Note that wLP can definitely be smaller than w(S∗)

Example
If the graph G is a triangle and all weights are 1, then the minimum vertex cover has weight 2
But in an LP solution we can set xi = 1/2 for all three vertices, and so get a (fractional) solution of weight only 3/2
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 83 / 86
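The triangle example is easy to verify numerically (again a sketch assuming SciPy; the ≥-constraints are negated for `linprog`):

```python
from scipy.optimize import linprog

# Triangle K3, unit weights: min x0 + x1 + x2
# subject to xi + xj >= 1 for each edge, 0 <= xi <= 1
c = [1.0, 1.0, 1.0]
A_ub = [[-1, -1, 0],
        [-1, 0, -1],
        [0, -1, -1]]
b_ub = [-1, -1, -1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
# LP optimum 3/2, at x = (1/2, 1/2, 1/2) -- strictly below the integral optimum 2
```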
Using Linear Programming for Vertex Cover
QuestionHow can solving the LP help us actually find a near-optimal vertexcover?
The idea is to work with the values x∗i and to infer a vertex cover S from them
If the values x∗i are integral, there is no problem: x∗i = 0 implies that node i is not in the cover, and x∗i = 1 implies that node i is in the cover
What do we do with the fractional values in between? The natural approach is to round

Rounding
Given a fractional solution {x∗i}, we define S = {i ∈ V : x∗i ≥ 1/2}
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 84 / 86
Using Linear Programming for Vertex Cover
Proposition:
The set S defined as S = {i ∈ V : x∗i ≥ 1/2} (rounding) is a vertexcover, and w(S) ≤ 2wLP
S is a vertex cover.Consider and edge e = (i , j) ∈ ESince x∗i + x∗j ≥ 1 and x∗i ≥ 1/2 or x∗j ≥ 1/2
⇒ (i , j) is covered
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 85 / 86
Using Linear Programming for Vertex Cover

Proposition:
The set S = {i ∈ V : x∗i ≥ 1/2} obtained by rounding is a vertex cover, and w(S) ≤ 2wLP

Proof (S has the desired cost).
Let S∗ be the optimal vertex cover
Since LP is a relaxation (the indicator vector of S∗ is feasible for VC.LP):

∑_{i∈S∗} wi ≥ ∑_{i∈V} wi x∗i ≥ ∑_{i∈S} wi x∗i

Since x∗i ≥ 1/2 for all i ∈ S:

∑_{i∈S} wi x∗i ≥ (1/2) ∑_{i∈S} wi

Chaining the inequalities gives w(S) ≤ 2w(S∗), and likewise w(S) ≤ 2wLP
Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 86 / 86
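Putting the whole pipeline together — solve (VC.LP), then round at 1/2 — can be sketched as follows (assuming SciPy; the 5-cycle is a hypothetical example whose unique LP optimum is fractional, with x∗i = 1/2 everywhere):

```python
from scipy.optimize import linprog

def vc_lp_round(n, edges, w):
    """Solve the LP relaxation (VC.LP) and round: S = {i : x*_i >= 1/2}."""
    A_ub, b_ub = [], []
    for (i, j) in edges:          # x_i + x_j >= 1  ->  -x_i - x_j <= -1
        row = [0.0] * n
        row[i] = row[j] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    S = {i for i in range(n) if res.x[i] >= 0.5 - 1e-9}
    return S, res.fun             # rounded cover and w_LP

# 5-cycle with unit weights: w_LP = 5/2, and rounding selects every vertex,
# so w(S) = 5 = 2 * w_LP -- the factor-2 bound is tight here
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
S, w_lp = vc_lp_round(5, edges, [1.0] * 5)
```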