
Information Processing Letters 99 (2006) 187–191

www.elsevier.com/locate/ipl

Real time scheduling with a budget: Parametric-search is better than binary search

Asaf Levin

Department of Statistics, The Hebrew University, Jerusalem 91905, Israel
E-mail address: [email protected] (A. Levin)

Received 7 November 2005; received in revised form 16 March 2006; accepted 7 April 2006

Available online 9 June 2006

Communicated by K. Iwama

Abstract

We are given a set of jobs, each of which has a processing time, a non-negative weight, a set of possible time intervals in which it can be processed, and a cost. The goal is to schedule a feasible subset S of the jobs on a single machine such that the total weight of S is maximized and the total cost of S is within a given budget. Using Megiddo's parametric method we improve an earlier algorithm that is based on applying binary search.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Parametric-search method; Approximation algorithms

1. Introduction

Naor et al. [6] studied the following BUDGETED REAL-TIME SCHEDULING PROBLEM (BRS): We are given a set of n jobs, where each job J_j has a non-negative processing time p_j, a non-negative weight w_j, a set of time intervals in which it can be processed (given either as a window with a release time and a due date or as a discrete set of possible processing intervals), and a positive cost c_j. Additionally, we are given a budget B. A feasible solution consists of a subset of the jobs S

whose total cost is at most B and a schedule of the jobs in S such that each job is processed during one of its time intervals (i.e., an assignment φ of starting times for the jobs in S, such that for all j ∈ S, φ(j) belongs to one of the time intervals of j), and no two jobs are processed at the same time (i.e., a single machine

schedule). The goal is to find a maximum (total) weight feasible solution.
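For concreteness, the following is a minimal sketch of a feasibility check for the discrete model of BRS described below; the Job representation, the encoding of a schedule as a map from selected jobs to chosen starting times, and the function names are illustrative assumptions, not notation from the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Job:
    p: float                               # processing time p_j
    w: float                               # weight w_j
    c: float                               # cost c_j
    intervals: List[Tuple[float, float]]   # allowed time intervals (discrete model)

def is_feasible_brs(jobs: Dict[int, Job], schedule: Dict[int, float], B: float) -> bool:
    """Check a BRS solution: schedule maps each selected job j to its starting time phi(j)."""
    # Budget constraint: the total cost of the selected subset S is at most B.
    if sum(jobs[j].c for j in schedule) > B:
        return False
    # Each selected job must be processed entirely inside one of its time intervals.
    for j, start in schedule.items():
        job = jobs[j]
        if not any(s <= start and start + job.p <= e for s, e in job.intervals):
            return False
    # Single-machine constraint: no two selected jobs may overlap in time.
    runs = sorted((start, start + jobs[j].p) for j, start in schedule.items())
    for (_, end_prev), (start_next, _) in zip(runs, runs[1:]):
        if start_next < end_prev:
            return False
    return True
```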

They also studied the BUDGETED REAL-TIME SCHEDULING WITH OVERLAPS PROBLEM (BRSO), in which the jobs are scheduled on a single non-bottleneck machine, which can process several jobs simultaneously. That is, a feasible solution in BRSO consists of a subset S of the jobs with total cost at most B and an assignment φ, where for each j ∈ S, φ(j) denotes the starting time of job j; in order to be a feasible solution, φ(j) has to be in one of the time intervals of job j (note that in this problem we do not require that the processing intervals of j and j′ be disjoint for j ≠ j′). The goal is to maximize, over all feasible solutions, the overall time in which the machine is utilized.

For each of these problems, [6] studied both a discrete model and a continuous model. In the discrete model each job J_j can be scheduled in a given set of n_j time


intervals I_{ℓ,j} (ℓ = 1, 2, ..., n_j), whereas in the continuous model each job J_j has a release date r_j, a due date d_j (and a processing time p_j), and each time interval of p_j time units that starts after the release date and ends before the due date is a feasible time interval for job j. In this paper we consider only the discrete model. All of these variants were shown to be strongly NP-hard (see [6] and the references cited there).

A polynomial time algorithm is called a ρ-approximation algorithm if it always returns a feasible solution whose value is at least ρ times the value of the optimal solution. For a constant a, we let a^+ and a^- denote the symbols a + δ and a − δ for an infinitesimally small δ > 0. Throughout this paper we let ε > 0 be a fixed positive number. We call a feasible solution for each of these problems a schedule.

For the discrete model of BRS, [6] presented a 1/(2+ε)-approximation algorithm, and for the discrete model of BRSO they presented a 1/(3+ε)-approximation algorithm.

Their method is based on the Lagrangian relaxation technique of Jain and Vazirani [2] and on a binary search to find a 'correct' value of the Lagrangian multiplier. Each value of the Lagrangian multiplier results in an instance of the problem without a budget (i.e., B = ∞) that is approximated using an approximation algorithm. The ε in the approximation ratios of [6] arises because of the binary search. The resulting algorithms have approximation ratios that are close to the approximation ratios of the Lagrangian relaxation problems (they differ by ε). In this paper we show that by applying an approximated version of Megiddo's parametric search method we can get rid of this ε, and thus obtain approximation ratios that equal the approximation ratios of the Lagrangian relaxation problems. If we denote by T(n) the time complexity of the approximation algorithm used for the problem without a budget, then the method in [6] takes O(log(W/ε) · T(n)) time (where W = max_j w_j),¹ whereas our method takes O([T(n)]²) time.

In Section 2 we review the method of approximation via Lagrangian relaxation and the results of Naor, Shachnai and Tamir. In Section 3 we present our modification of Megiddo's parametric-search method. A similar modification was used in [3] to improve the results of [4]. In Section 4 we obtain our improved approximation algorithms for BRS and BRSO.

¹ The paper [6] improves this time complexity to O((log² n / ε³) · T(n)).

2. Approximation via Lagrangian relaxation for subset selection problems

In this section we follow [6] and describe a general framework that is used to approximate both BRS and BRSO. This framework applies to subset selection problems. The input consists of a set of elements A = {a_1, a_2, ..., a_n}, where each element a_j is associated with a weight w_j and a cost c_j ≥ 1, and we are given a budget B ≥ 1. The goal is to find a subset A′ ⊆ A that satisfies a set of constraints (including the budget constraint) and whose total weight is maximized. We assume that if A′ satisfies the set of constraints and A′′ ⊆ A′, then A′′ satisfies the set of constraints (i.e., the family of feasible subsets is monotone). Note that BRS and BRSO are special cases of the subset selection problem, as in both BRS and BRSO one has to select a subset S of the jobs (such that a feasible assignment of starting times φ exists, and this is a set of constraints) such that the total cost of S is at most the given budget and the total weight is maximized. We note that the feasible subsets for BRS and BRSO are monotone: given a solution S with a schedule of S denoted by φ, any subset S′ of S with the assignment of starting times that is the restriction of φ to S′ is a feasible solution. Hence the results obtained for the subset selection problem can be applied to the BRS and BRSO problems.

Denote by x_j ∈ {0,1} the indicator variable for the selection of a_j. Then the integer program for our subset selection problem has the following form:

(P) = max ∑_{j=1}^{n} w_j x_j
s.t. Constraints C_1, C_2, ..., C_r,
     ∑_{j=1}^{n} c_j x_j ≤ B,    (1)
     x_j ∈ {0,1}  for all j = 1, 2, ..., n.

We treat constraint (1) as a complicated constraint, and we construct the Lagrangian relaxation by relaxing this constraint and penalizing, in the objective function, solutions that do not satisfy (1). We obtain the following Lagrangian relaxation:

(LR(λ)) = λ·B + max ∑_{j=1}^{n} (w_j − λc_j) x_j
s.t. Constraints C_1, C_2, ..., C_r,
     x_j ∈ {0,1}  for all j = 1, 2, ..., n.
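The following is a minimal sketch of how a fixed multiplier λ turns the budgeted problem into a budget-free instance with reduced weights w_j − λc_j; the callable approx_solve_relaxed is a hypothetical stand-in for a black-box approximation algorithm that handles only the constraints C_1, ..., C_r (the role played by the algorithm A below), not an interface from the paper.

```python
from typing import Callable, List, Sequence, Tuple

def lagrangian_value(lam: float, B: float, weights: Sequence[float],
                     costs: Sequence[float], x: Sequence[int]) -> float:
    """Objective value of (LR(lambda)) for an integral solution x in {0,1}^n."""
    return lam * B + sum((w - lam * c) * xj for w, c, xj in zip(weights, costs, x))

def solve_lr(lam: float, B: float, weights: Sequence[float], costs: Sequence[float],
             approx_solve_relaxed: Callable[[Sequence[float]], List[int]]
             ) -> Tuple[List[int], float, float]:
    """Form the relaxed instance (reduced weights, no budget) and solve it approximately.

    Returns the 0/1 solution, the budget it actually uses (this quantity is denoted
    B(lambda) in Section 3), and its (LR(lambda)) objective value.
    """
    reduced = [w - lam * c for w, c in zip(weights, costs)]
    x = approx_solve_relaxed(reduced)                   # obeys only C_1, ..., C_r
    used = sum(c * xj for c, xj in zip(costs, x))       # cost of the selected subset
    return x, used, lagrangian_value(lam, B, weights, costs, x)
```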

Assume that A is a ρ-approximation algorithm for (LR(λ)) for any value of λ > 0. Thus, one can find


values λ_1 < λ_2 such that A finds integral approximate solutions x^(1) and x^(2) to (LR(λ_1)) and (LR(λ_2)), respectively, and the budgets used in these solutions are B_1 and B_2 with B_2 ≤ B ≤ B_1. Let W_1 and W_2 be the total weights of the solutions x^(1) and x^(2), respectively, i.e., W_i = λ_i·B + ∑_{j=1}^{n} (w_j − λ_i c_j) x_j^(i) for i = 1, 2. Without loss of generality, we assume that W_1, W_2 ≥ 1.

We assume that A has the following property: let α = (B − B_2)/(B_1 − B_2); then the convex combination x = αx^(1) + (1 − α)x^(2) is a (fractional) ρ-approximate solution that uses the budget B (exactly). This is the case if A is an approximation algorithm that is based on the primal–dual scheme, as the analysis of such approximation algorithms is based on comparing the value of the resulting integral solution with the value of the linear programming relaxation of the problem.
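As a small numeric illustration of this property (the numbers are made up): with B_1 = 10, B_2 = 4 and B = 7 we get α = (7 − 4)/(10 − 4) = 1/2, and by linearity the combined solution spends αB_1 + (1 − α)B_2 = B exactly. A minimal sketch:

```python
from typing import List, Sequence, Tuple

def interpolate(x1: Sequence[float], x2: Sequence[float],
                B1: float, B2: float, B: float) -> Tuple[float, List[float]]:
    """Convex combination alpha*x1 + (1-alpha)*x2 whose cost is exactly B (by linearity)."""
    alpha = (B - B2) / (B1 - B2)
    return alpha, [alpha * a + (1 - alpha) * b for a, b in zip(x1, x2)]

# Toy 0/1 solutions assumed to use budgets B1 = 10 and B2 = 4 (illustrative values only).
alpha, x = interpolate([1, 1, 0], [0, 1, 1], B1=10.0, B2=4.0, B=7.0)
print(alpha, x)   # 0.5 [0.5, 1.0, 0.5]
```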

Denote by c the total cost of all the elements in A. The following is an approximation algorithm for the subset selection problem:

Algorithm AL(ε)

1. Initialization: Let ε′ = ε/c and W = max_{j=1,...,n} w_j / c.
2. Binary search: Using binary search over the initial interval [0, W] (sketched below), find values λ_1 < λ_2 as described above such that λ_2 − λ_1 ≤ ε′.
3. Return the (feasible) integral solution found by A when applied to (LR(λ_2)).
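Below is a minimal sketch of the binary-search step, assuming a helper budget_used(lam) that runs A on (LR(λ)) and reports the total cost of the returned solution; this helper and the stopping logic are illustrative, not an interface from [6].

```python
from typing import Callable, Tuple

def al_binary_search(budget_used: Callable[[float], float], B: float,
                     W: float, eps_prime: float) -> Tuple[float, float]:
    """Shrink [lam1, lam2] until lam2 - lam1 <= eps_prime, keeping the invariant
    that the solution at lam1 overspends the budget while the one at lam2 does not."""
    lam1, lam2 = 0.0, W                      # initial search interval [0, W]
    while lam2 - lam1 > eps_prime:
        mid = (lam1 + lam2) / 2.0
        if budget_used(mid) > B:             # over budget: the multiplier must grow
            lam1 = mid
        else:                                # within budget: the multiplier can shrink
            lam2 = mid
    return lam1, lam2                        # step 3 of AL(eps) then applies A to (LR(lam2))
```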

The following lemma was proved in [6] (see Theorem 1 in [6]):

Lemma 1. For any ε′ > 0 and λ_1, λ_2 resulting from the AL(ε) algorithm, W_2 ≥ W_1 − ε′·c.

Using Lemma 1, Naor, Shachnai and Tamir proved that AL(ε) is a (ρ − ε)-approximation algorithm for problem (P). This is so because x is a ρ-approximation and W_2 ≥ (W_1 − ε′·c)α + W_2(1 − α) ≥ (W_1·α + W_2(1 − α)) − ε′·c ≥ (W_1·α + W_2(1 − α))(1 − ε). That is, they established the following theorem:

Theorem 2. Algorithm AL(ε) is a (ρ − ε)-approximation algorithm for the subset selection problem.

3. An approximated variant of Megiddo's parametric search method

In this section we describe an approximated variant of Megiddo's parametric search method. This variant can be used to find an approximated zero of a parametric function defined as follows: for each λ > 0, we define B(λ) to be the total cost of the solution constructed by A when applied to (LR(λ)). We note that B is a function of λ that is not continuous at some points (at which the subset returned by A changes). We use the parametric search method to find an approximated zero of the function B(λ) − B, where an approximated zero of a function F(λ) is a value λ* such that F(λ*^+) ≤ 0 and F(λ*^-) ≥ 0. Note that Megiddo [5] described his parametric search method for finding a zero of a continuous parametric function F(λ). When F(λ) is a continuous function, an approximated zero of F is a zero of F, and the variant described in the sequel is equivalent to the original method of Megiddo.

The parametric search method executes A to approximate (LR(λ*)) without knowing λ* in advance. In each phase we maintain an interval I = (λ_low, λ_hi) such that

B(λ_low^+) − B ≥ 0 and B(λ_hi^-) − B ≤ 0.    (2)

We note that we can initialize I using I = (0, ρ·∑_j w_j). This is so because for λ = 0^+ we do not penalize for not satisfying the constraint (1), and without loss of generality the solution does not satisfy this constraint (otherwise, it is a ρ-approximate solution for the subset selection problem). For λ = (ρ·∑_j w_j)^-, the empty set ∅ is a feasible solution to (P) that has weight (as a solution to (LR(λ))) slightly less than ρ·(∑_j w_j)·B, and a solution that does not satisfy the constraint has weight at most ∑_j w_j − B. Therefore, the returned solution for λ = (ρ·∑_j w_j)^- satisfies the constraint (1).

While executing A, we need to compare pairs of linear functions of λ according to their relative order at λ = λ*. To do so, we first compute the break-point λ_br of the two linear functions. If there is no such break-point, then the comparison is independent of λ (the two functions differ by a constant), and we can resolve it in constant time. Next, we check whether λ_br ∈ I. If not, then the comparison is independent of the value of λ ∈ I, and we can resolve it in constant time. Assume that λ_br ∈ I. We execute A with λ = λ_br^+ and with λ = λ_br^-, and distinguish three cases (a short sketch of this case analysis follows the list):

• If B(λ_br^+) − B ≥ 0, then we set λ_low = λ_br, and now the comparison is independent of λ for λ ∈ I, and therefore we can resolve it in constant time.
• Otherwise, if B(λ_br^-) − B ≤ 0, then we set λ_hi = λ_br, and now the comparison is independent of λ for λ ∈ I, and therefore we can resolve it in constant time.
• Otherwise, i.e., B(λ_br^+) − B < 0 and B(λ_br^-) − B > 0: in this case λ* = λ_br, and we are done.
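A minimal sketch of how a single comparison is resolved at its break-point, following the three cases above; budget_used(lam) again stands in for running A on (LR(λ)) and reading off B(λ), and DELTA is a finite stand-in for the infinitesimal perturbations λ_br^+ and λ_br^- (in the actual method these are handled symbolically).

```python
from typing import Callable, Optional, Tuple

DELTA = 1e-9   # finite stand-in for the infinitesimal perturbation of lambda_br

def resolve_comparison(lam_br: float, I: Tuple[float, float], B: float,
                       budget_used: Callable[[float], float]
                       ) -> Tuple[Tuple[float, float], Optional[float]]:
    """Resolve one comparison whose outcome flips at lam_br.

    Returns the (possibly shrunk) interval I = (lam_low, lam_hi) and, if the third
    case occurs, the approximated zero lam_star (otherwise None)."""
    lam_low, lam_hi = I
    if not (lam_low < lam_br < lam_hi):
        return (lam_low, lam_hi), None              # outcome is fixed for all lambda in I
    if budget_used(lam_br + DELTA) - B >= 0:        # case 1: move lam_low up to lam_br
        return (lam_br, lam_hi), None
    if budget_used(lam_br - DELTA) - B <= 0:        # case 2: move lam_hi down to lam_br
        return (lam_low, lam_br), None
    # case 3: B(lam_br^+) - B < 0 and B(lam_br^-) - B > 0, so lam_br is an approximated zero.
    return (lam_low, lam_hi), lam_br
```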


We return the solution HEU that is found by A when it is applied to approximate (LR((λ*)^+)).

We denote by A_p the resulting approximation algorithm for the subset selection problem, i.e., A_p is the modification of AL in which the binary search is replaced by the parametric search method.

4. Approximating BRS and BRSO using the parametric search method

4.1. Approximating the subset selection problem

We return our focus to the subset selection problem, and we establish the following theorem:

Theorem 3. Algorithm A_p is a ρ-approximation algorithm for the subset selection problem.

Proof. Assume that we find an interval [λ_1, λ_2] of length ε′ such that B(λ_1) − B ≥ 0 and B(λ_2) − B ≤ 0; then by Theorem 2 the solution returned by A when applied to (LR(λ_2)) is a (ρ − c·ε′)-approximation. Using the parametric search method we can find, for each (infinitesimally) small value δ > 0, an interval [λ* − δ, λ* + δ] that satisfies this property, and conclude that for each δ > 0 the solution that we return is a (ρ − 2cδ)-approximation. Since δ > 0 can be arbitrarily small (note that the solution returned by the algorithm does not depend on the value of δ), the claim follows. □

As explained in Section 2, the subset selection problem generalizes the BRS and BRSO problems. This fact, together with Theorem 3, is used in the following in order to obtain the improved bounds for BRS and BRSO.

4.2. Approximating BRS

In [6], in order to obtain an approximation algorithm for BRS, the authors formulated the discrete model of the BRS problem as an integer program, and then noted that its Lagrangian relaxation is an instance of the throughput maximization problem. The THROUGHPUT MAXIMIZATION problem is defined as follows (see [1]). The input consists of a set of activities, each requiring the utilization of a given, limited resource. The amount of resource available is fixed over time; we normalize it to unit size for convenience. The activities are specified as a collection of sets A_1, ..., A_m. Each set represents a single activity: it consists of all possible instances of that activity. An instance I ∈ A_i is defined by the following parameters: (1) a half-open time interval [s(I), e(I)) during which the activity will be executed; (2) the amount of resource required for the activity, called the width of the instance; (3) the profit p(I) > 0 gained by scheduling this instance of the activity. A schedule is a collection of instances. It is feasible if (1) it contains at most one instance of every activity, and (2) for all time instants t, the total width of the instances in the schedule whose time interval contains t does not exceed 1. The goal is to find a feasible schedule that maximizes the total profit accrued by instances in the schedule.
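A minimal sketch of the objects in a throughput-maximization instance and of the two feasibility conditions; the class and function names are illustrative and not taken from [1].

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Instance:
    activity: int    # index of the activity this instance belongs to
    start: float     # s(I): start of the half-open execution interval [s(I), e(I))
    end: float       # e(I)
    width: float     # amount of the (unit-size) resource the instance requires
    profit: float    # p(I) > 0

def is_feasible(schedule: List[Instance]) -> bool:
    """Feasibility: at most one instance per activity, and total width at most 1 at all times."""
    activities = [inst.activity for inst in schedule]
    if len(activities) != len(set(activities)):
        return False
    # The total width only increases at start points, so checking each start point suffices.
    for inst in schedule:
        t = inst.start
        if sum(o.width for o in schedule if o.start <= t < o.end) > 1:
            return False
    return True

def total_profit(schedule: List[Instance]) -> float:
    return sum(inst.profit for inst in schedule)
```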

In the Lagrangian relaxation of BRS, the profit from a job j is w_j − λc_j, and we can remove all jobs with negative profit. Then, the instances that correspond to a job j are all feasible time intervals in which one can process j, and the width of all these instances is 1 (so no pair of jobs will be processed simultaneously). This shows that the Lagrangian relaxation of BRS is an instance of the throughput maximization problem. Bar-Noy et al. [1] presented a 1/2-approximation algorithm for the throughput maximization problem that is based on the local-ratio technique. As noted in [6], this algorithm has a primal–dual interpretation, and therefore, since BRS is a special case of the subset selection problem, we can apply Theorem 3 with ρ = 1/2 (i.e., the algorithm of [1] is the required A for BRS with ρ = 1/2). Therefore, we conclude the following result (which improves the 1/(2+ε)-approximation algorithm of [6]):

Theorem 4. Algorithm A_p yields a 1/2-approximation algorithm for the discrete model of BRS.

The result of [6] for the continuous model of BRS is obtained by discretizing the set of feasible intervals. Such a discretization does not hurt the performance guarantee of their algorithm; however, it does degrade the approximation ratio of our new algorithm to 1/(2+ε), and therefore we do not obtain an improvement in this case.

4.3. Approximating BRSO

In [6], in order to obtain a 1/(3+ε)-approximation algorithm for BRSO, the authors designed a primal–dual 1/3-approximation algorithm for the corresponding problem without a budget constraint, so this algorithm can serve as the required A with ρ = 1/3. Since BRSO is a special case of the subset selection problem, we can use Theorem 3 with ρ = 1/3 and obtain the following result:

Theorem 5. Algorithm A_p yields a 1/3-approximation algorithm for the discrete model of BRSO.


References

[1] A. Bar-Noy, R. Bar-Yehuda, A. Freund, J. Naor, B. Schieber, A unified approach to approximating resource allocation and scheduling, Journal of the ACM 48 (2001) 1069–1090.
[2] K. Jain, V.V. Vazirani, Approximation algorithms for metric facility location and k-median problems using the primal–dual schema and Lagrangian relaxation, Journal of the ACM 48 (2001) 274–296.
[3] A. Levin, Strongly polynomial-time approximation for a class of bicriteria problems, Operations Research Letters 32 (2004) 530–534.
[4] M.V. Marathe, R. Ravi, R. Sundaram, S.S. Ravi, D.J. Rosenkrantz, H.B. Hunt III, Bicriteria network design problems, Journal of Algorithms 28 (1998) 142–171.
[5] N. Megiddo, Combinatorial optimization with rational objective functions, Mathematics of Operations Research 4 (1979) 414–424.
[6] J. Naor, H. Shachnai, T. Tamir, Real-time scheduling with a budget, in: Proceedings of the 30th International Colloquium on Automata, Languages and Programming (ICALP 2003), Lecture Notes in Computer Science, vol. 2719, Springer, Berlin, 2003, pp. 1123–1137.