Algorithms for multiprocessor scheduling with machine release times

HANS KELLERER

Institut für Statistik, Ökonometrie und Operations Research, Universität Graz, Universitätsstraße 15, A-8010 Graz, Austria. E-mail: [email protected]

In this paper we present algorithms for the problem of scheduling n independent jobs on m identical machines. As a generalization of the classical multiprocessor scheduling problem, each machine is available only at a machine-dependent release time. Two objective functions are considered. To minimize the makespan, we develop a dual approximation algorithm with a worst-case bound of 5/4. For the problem of maximizing the minimum completion time, we develop an algorithm such that the minimum completion time in the schedule produced by this algorithm is at least 2/3 times the minimum completion time in the optimum schedule. The paper closes with some numerical results.

1. Introduction

1.1. Problem statement

Consider $n$ independent tasks $J_1, \ldots, J_n$ with positive job lengths $\ell(J_1), \ldots, \ell(J_n)$, which we will also briefly denote as processing times $p_1, \ldots, p_n$. These jobs have to be assigned to $m$ identical processors $M_1, \ldots, M_m$. Assume that the jobs are sorted in nonincreasing order, i.e., $p_1 \ge p_2 \ge \cdots \ge p_n$, and are simultaneously available for execution at time $t = 0$. Some machines may not be ready at the start; therefore let $r_i$ denote the earliest time that machine $M_i$ can start to process the jobs $J_1, \ldots, J_n$, where $i = 1, \ldots, m$. We distinguish two optimality criteria: minimizing the overall finish time (briefly $P, r_j \,\|\, C_{\max}$) and maximizing the minimum completion time (briefly $P, r_j \,\|\, C_{\min}$). The corresponding problems in which all release times are equal to zero are denoted by $P \,\|\, C_{\max}$ and $P \,\|\, C_{\min}$, respectively. All these problems are NP-hard, and it appears quite unlikely that there is any polynomial algorithm able to generate optimal schedules. Hence, heuristic algorithms are considered in the hope of providing near-optimal solutions.

1.2. Previous work and related results

$P \,\|\, C_{\max}$, i.e., minimizing the makespan, is one of the most studied problems in scheduling theory. Several heuristic algorithms have been suggested for the solution of this classical multiprocessor scheduling problem. One possible approach is list scheduling: a priority list of jobs is constructed and, whenever a processor becomes available for assignment, the first unexecuted job is taken from the list and assigned to that processor. The so-called Longest Processing Time (LPT) heuristic uses a special priority list: the jobs are sorted in nonincreasing order of processing time. Graham [1] has shown that an LPT schedule has a tight worst-case performance bound of $4/3 - 1/(3m)$. A completely different algorithm, called MULTIFIT, has been developed by Coffman et al. [2]. It uses bin packing methods, combined with a binary search over the bin capacity (sketched below), to find the minimum capacity such that all jobs fit into $m$ bins. In [2] it was shown that MULTIFIT has a makespan of at most $1.2 + 2^{-k}$ times the optimal solution [3], where $k$ denotes the number of iterations used in the binary search. (This bound was improved to 13/11 in Yue [4].) Hochbaum and Shmoys [5] have replaced the approximation algorithm in the binary search with a dual approximation algorithm. Refining this approach enabled them to obtain algorithms with a worst-case performance of 6/5 and 7/6, respectively.
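For orientation, the overall shape of MULTIFIT, first-fit decreasing (FFD) packing inside a binary search over the bin capacity, can be sketched as follows. This is only an illustration of the idea under standard initial bounds, not the exact routine or analysis of [2]; the function names are ours.

```python
def ffd_fits(p, m, cap):
    """First-fit decreasing: can all jobs be packed into m bins of size cap?"""
    bins = []
    for job in sorted(p, reverse=True):
        for i, load in enumerate(bins):
            if load + job <= cap:          # first open bin where the job fits
                bins[i] += job
                break
        else:
            if len(bins) == m:             # would need an (m+1)-th bin
                return False
            bins.append(job)
    return True

def multifit(p, m, iterations=7):
    """Binary search over the capacity; returns a capacity (an upper bound on
    the optimum makespan) for which FFD packs all jobs into m machines."""
    lower = max(sum(p) / m, max(p))        # no smaller capacity can work
    upper = 2 * lower                      # FFD is known to succeed at this capacity
    for _ in range(iterations):
        cap = (lower + upper) / 2
        if ffd_fits(p, m, cap):
            upper = cap
        else:
            lower = cap
    return upper
```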

A much less studied problem is $P \,\|\, C_{\min}$. It has applications in the sequencing of maintenance actions for modular gas turbine aircraft engines [6]. The behavior of the LPT algorithm was first examined in Deuermayer et al. [7]. Csirik et al. [8] have shown that the minimum completion time in the schedule produced by LPT always remains within a tight factor of $(3m-1)/(4m-2)$ of the optimum solution.

Release times on the machines appear occasionally in flexible manufacturing systems [9] and job batch scheduling. A polynomial algorithm has been developed by Schmidt [10] to address the problem of constructing a preemptive schedule to minimize the makespan, when each machine has a specific interval of availability.


The non-preemptive case $P, r_j \,\|\, C_{\max}$ has been studied by Lee [11]. He has shown that LPT yields a worst-case performance of $3/2 - 1/(2m)$. He has introduced a modified algorithm, MLPT, which provides a makespan bounded by 4/3 times the optimum solution. Both bounds are tight. For MLPT the machine release times are treated as processing times and merged with $p_1, \ldots, p_n$ to form a new set $S$, whose elements are sorted in nonincreasing order. These elements of $S$ are then assigned to machines by LPT, with an exchange step of jobs and release times, if necessary, such that each machine receives exactly one release time. To the author's knowledge, no study of $P, r_j \,\|\, C_{\min}$ has previously been reported in the literature. Such a study is of interest since a good approximation for $P, r_j \,\|\, C_{\min}$ can help to find better solutions for a special problem of inventory control [12].
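For concreteness, the LPT rule in the presence of machine release times can be read as list scheduling in which machine $M_i$ only becomes available at $r_i$. The following is a minimal sketch of that reading (not Lee's exact MLPT); the names are of our own choosing.

```python
import heapq

def lpt_release_times(p, r):
    """LPT list scheduling for P, r_j || C_max: machine i is busy until r[i];
    the longest unscheduled job goes to the machine that frees up earliest."""
    heap = [(ri, i) for i, ri in enumerate(r)]   # (current completion time, machine)
    heapq.heapify(heap)
    assignment = [[] for _ in r]
    for job in sorted(p, reverse=True):          # longest processing time first
        finish, i = heapq.heappop(heap)
        assignment[i].append(job)
        heapq.heappush(heap, (finish + job, i))
    makespan = max(t for t, _ in heap)
    return makespan, assignment
```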

1.3. Definitions and conventions

The length $\ell(T)$ of a set $T$ of jobs is defined in the natural way as $\ell(T) = \sum_{x \in T} \ell(x)$. A subschedule of a heuristic is a pair $(M_i, S_i)$ which consists of a machine $M_i$ plus the set of jobs $S_i$ assigned to machine $M_i$ by the heuristic (or $(M_i^*, S_i^*)$ for the optimum schedule). We will identify a machine with its corresponding subschedule if the meaning is clear from the context. Machines and subschedules with superscript $*$ usually refer to the optimum schedule; expressions with the superscript $H$ or without a superscript refer to the heuristic schedule. The length of a subschedule is defined as the length of the jobs plus the corresponding release time $r_i$ and is abbreviated as $\ell(M_i^*)$ for the optimum schedule or as $\ell(M_i)$ for a heuristic schedule.

For the makespan problem, we define $C^* = \max_{i=1,\ldots,m} \ell(M_i^*)$ as the maximum completion time of the optimum schedule and $C^H = \max_{i=1,\ldots,m} \ell(M_i)$ as the maximum completion time of the heuristic schedule. For maximizing the minimum completion time the definitions are analogous.

As abbreviations we use $J := \{J_1, \ldots, J_n\}$, $P := \{p_1, \ldots, p_n\}$, $M := \{M_1, \ldots, M_m\}$ and $R := \{r_1, \ldots, r_m\}$. The pair consisting of the machines and their corresponding release times is denoted by $(M, R)$.

The rest of this paper is organized as follows: in Section 2, we present a dual approximation algorithm for $P, r_j \,\|\, C_{\max}$ with a worst-case performance of 5/4. A heuristic with a worst-case factor of 2/3 for $P, r_j \,\|\, C_{\min}$ is analysed in Section 3. Finally, in Section 4 we present some numerical results.

2. A 5/4-algorithm to minimize the makespan

We will now present an algorithm for $P, r_j \,\|\, C_{\max}$ with a worst-case guarantee of 5/4. Assume for the remainder of this section that $r_1 \ge r_2 \ge \cdots \ge r_m$.

Our algorithm will be based on the notion of a dual approximation algorithm as introduced by Hochbaum and Shmoys [5]. Given a capacity $C$ and a set of jobs to pack, our $q$-dual approximation algorithm ($q > 1$) produces a packing that uses at most $m$ bins with capacities $qC - r_1, \ldots, qC - r_m$. Using binary search on the capacity, one then obtains an approximation algorithm for the original problem; for this, the following notion of a $q$-relaxed decision procedure will be very useful.

Definition. A $q$-relaxed decision procedure $P_q(C)$ is defined as follows. As input we have the processing times $P$, release times $R$ and a given capacity $C$. As output either the answer NO or the answer ALMOST is produced.

(i) If the output is ALMOST, $P_q(C)$ produces a schedule with a makespan of at most $qC$.

(ii) If the output is NO, there is no schedule with a makespan of at most $C$.

$qC$ is called the extended capacity and denoted by $\tilde{C}$.

The following lemma is an adaptation of the corresponding results in Hochbaum and Shmoys [5].

Lemma 1. For each polynomial $q$-relaxed decision procedure there is a polynomial $q$-approximation algorithm $A_q$ using binary search. For $k$ iterations the resulting solution has a makespan of at most $q(1 + 2^{-k})C^*$.

Proof. It is straightforward to see that
$$S(m) := \max\left\{\frac{1}{m}\left(\sum_{i=1}^{n} p_i + \sum_{j=1}^{m} r_j\right),\; p_1,\; r_1\right\}$$
holds as a lower bound for the optimum makespan. It is easily seen that the makespan of any list schedule is at most $2S(m)$ [13]. These bounds serve to initialize a binary search. If $u$ and $\ell$ are the current upper and lower bounds, respectively, set $C := \lfloor (u + \ell)/2 \rfloor$ and apply the $q$-relaxed decision procedure $P_q(C)$ to $C$. If the answer is ALMOST then reset $u$ to $C$, otherwise reset $\ell$ to $C$. After a given number of loops $k$, the current value of $u$ serves as an estimate for the optimum makespan $C^*$. It is easily seen that this procedure has the appropriate performance guarantee: from (ii) we conclude that $\ell$ is always a lower bound on the optimum makespan, and obviously $u - \ell \le 2^{-k}C^*$ holds; thus by (i)
$$C^H \le qu = q\bigl((u - \ell) + \ell\bigr) \le q\bigl(2^{-k}C^* + C^*\bigr) = q(1 + 2^{-k})C^*. \qquad \Box$$
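The binary search of Lemma 1 only needs the bounds $S(m)$ and $2S(m)$ and a relaxed decision procedure as a black box. A minimal sketch of this wrapper, assuming a callable `decide(p, r, C)` that returns a schedule in the ALMOST case and `None` in the NO case (the names and the fixed iteration count are our assumptions):

```python
def dual_approximation(p, r, decide, iterations=20):
    """Lemma 1 as code: turn a q-relaxed decision procedure `decide` into an
    approximation algorithm for the makespan via binary search on C."""
    m = len(r)
    lower = max((sum(p) + sum(r)) / m, max(p), max(r))   # S(m), a lower bound on C*
    upper = 2 * lower                                     # makespan of any list schedule
    best = None
    for _ in range(iterations):
        C = (lower + upper) / 2
        schedule = decide(p, r, C)
        if schedule is not None:   # ALMOST: a schedule with makespan <= q*C exists
            upper, best = C, schedule
        else:                      # NO: the optimum makespan exceeds C
            lower = C
    return upper, best
```

With integer data, as in Section 4, one can instead iterate until the difference between the bounds is less than one.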

Let us now provide the tools for the construction of our 5/4-relaxed decision procedure $P_{5/4}(C)$. We consider the release times as jobs, which means that all machines are represented by $m$ bins with capacity $C$. For short, we will call the elements of the set $R$ the red jobs and the elements of $P$ the black jobs. Consider $P$ as a disjoint union of two sets $P_1, P_2$, i.e., $P = P_1 \cup P_2$, where $P_1$ is the set of all black jobs greater than $\frac{1}{4}C$ and $P_2$ contains the remaining jobs. (Usually, we identify jobs with their processing times.) For convenience, normalize the sizes of the jobs in such a way that $C = 4$. W.l.o.g. assume that all release times are smaller than five, because it is not reasonable to assign jobs to machines with higher release times.

Before we start the description of $P_{5/4}(C)$, we list some further useful notation. Call $r(j)$ the $j$th-biggest of all nonassigned red jobs and let $p(j)$ be defined in an analogous way. Please note that each time Step 3.1 is executed, the dynamic quantities $r(j)$ and $p(j)$ take the values they had when Step 2 was completed. Additionally, define $P_1(c)$ to be the set of all nonassigned black jobs in $P_1$ which are less than or equal to a constant $c$, and define $\bar{P}_1(c)$ to be the set of all nonassigned jobs in $P_1$ which are greater than $c$. The meanings of $R(c)$ and $\bar{R}(c)$ are analogous to $P_1(c)$ and $\bar{P}_1(c)$, respectively. When a decision is made to pack a set of elements of $P_1$ together, they are placed in the chosen bin and no other element of $P_1$ is added. Therefore, let us call a bin an $s$-item bin if exactly $s$ elements of $P_1$ are put in that bin.

The algorithm is split into two parts. First, the jobs of $P_1 \cup R$ are assigned to the bins; then, in Step 3.5.4, the elements of $P_2$. Loop 3.1 is a ``guessing'' loop: we guess the number $k$ of 1-item bins in the optimal solution. In principle, the algorithm runs independently for each $0 \le k \le \min\{|P_1|, |R|\}$. Exactly one of these numbers $k$ corresponds to the number of 1-item bins in the optimal solution. The algorithm $P_{5/4}(C)$ may not produce the same number of 1-item bins as the optimal solution; however, we can say that $P_{5/4}(C)$ has considered the number of 1-item bins that occur in the optimal solution. Analogously, Step 3.5.1 guesses the number $j$ of 2-item bins in the optimal solution.

Procedure P5/4(C).
Step 0. $k := 0$.
Step 1. While $\bar{R}(2) \ne \emptyset$: pair $r(1) \in \bar{R}(2)$ with the maximum job $p \in P_1$ such that $r(1) + p \le 4$. If no such $p$ exists, put $r(1)$ alone in a bin.
Step 2. While $\bar{P}_1(3) \ne \emptyset$: pair $p(1) \in \bar{P}_1(3)$ with the maximum red job $r$ such that $p(1) + r \le 4$. If no such job exists, goto 4.
Step 3. While $k \le \min\{|P_1(3)|, |R(2)|\}$ begin
  3.0 $j := 0$.
  3.1 Pair $r(1)$ with $p(1)$, ..., $r(k)$ with $p(k)$.
  3.2 While $\bar{P}_1(2.5) \ne \emptyset$: if one of the sets $P_1(1.5)$ or $R(0.5)$ is empty, goto 3.6; else pack $p(1) \in \bar{P}_1(2.5)$ together with $\max P_1(1.5)$ and $\max R(0.5)$ in a bin.
  3.3 While $\bar{R}(1) \ne \emptyset$: if $P_1(3 - r(1)) = \emptyset$ or $P_1(2 - r(1)/2) = \emptyset$ or their union contains at most one job, put $r(1)$ alone in a bin; else pack $r(1) \in \bar{R}(1)$ together with $d_1 := \max P_1(3 - r(1))$ and $\max\,(P_1(2 - r(1)/2) \setminus \{d_1\})$ in a bin.
  3.4 While $\bar{P}_1(2) \ne \emptyset$: if $P_1(4 - p(1)) = \emptyset$ or $R = \emptyset$, goto 3.6; else pack $p(1) \in \bar{P}_1(2)$ together with $\max P_1(4 - p(1))$ and $r(1)$ in a bin.
  3.5 While $j \le \min\{|R(1)|, |P_1(2)|/2\}$ begin
    3.5.1 Put each of the triples $\{r(1), p(1), p(2)\}$, $\{r(2), p(3), p(4)\}$, ..., $\{r(j), p(2j-1), p(2j)\}$ in one bin.
    3.5.2 While $R \ne \emptyset$: if one of the sets $P_1(2 - r(1))$, $P_1(3/2 - r(1)/2)$ or $P_1(4/3 - r(1)/3)$ is empty, or their union contains at most two jobs, put $r(1)$ alone in a bin; else pack $r(1)$ together with $d_2 := \max P_1(2 - r(1))$, $d_3 := \max\,(P_1(3/2 - r(1)/2) \setminus \{d_2\})$ and $\max\,(P_1(4/3 - r(1)/3) \setminus \{d_2, d_3\})$ in a bin.
    3.5.3 If $P_1 \ne \emptyset$ goto 3.5.6.
    3.5.4 {Assignment of elements of $P_2$} While $P_2 \ne \emptyset$: if there is no bin that currently contains a load $\le 4$, goto 3.5.6; else take any element of $P_2$ and put it in such a bin.
    3.5.5 Output ALMOST. STOP.
    3.5.6 $j := j + 1$; end.
  3.6 $k := k + 1$; end.
Step 4. Output NO.

In order to prove that $P_{5/4}(C)$ is a 5/4-relaxed decision procedure, we have to show that either $P_{5/4}(C)$ produces a schedule with a makespan of at most 5 or there is no assignment with a makespan of at most 4. The proof will be split into two parts: first, we restrict ourselves to jobs from $P_1 \cup R$; afterwards, we extend the proof to arbitrary job sets. Let us assume in the following that $P$ contains only jobs greater than 1. Because the first part of the proof will be done by contradiction, the notion of a minimal counterexample is introduced.

Definition. We define a counterexample to be a set of jobs $J$ and a pair $(M, R)$ such that $P_{5/4}(C)$ produces the output NO whereas an assignment of $J$ to the machines with makespan less than or equal to 4 exists.

A minimal counterexample consists of $J$ and $(M, R)$ such that all of the following holds:

(i) $J$ and $(M, R)$ form a counterexample;
(ii) for all $m'$ with $1 \le m' < m$, no counterexample with $m'$ machines exists.

Of course, the existence of a counterexample implies the existence of a minimal counterexample. To exploit the ``minimality'', the concept of domination considerably facilitates the discussion. A subschedule $M_i$ is said to dominate a subschedule $M_j^*$ of the optimum solution if we can find for each job $p^* \in M_j^*$ a unique job $p \in M_i$ with $\ell(p) \ge \ell(p^*)$ and if additionally $r_j \le r_i$ holds.

Definition. A subschedule $M_j^*$ is dominated by a subschedule $M_i$ if there is an injective mapping $f : S_j^* \to S_i$ such that $\ell(p) \le \ell(f(p))$ for any job $p \in S_j^*$ and such that $r_j \le r_i$ holds.

Lemma 2 (domination lemma). Let $M_j^*$ be any subschedule in an optimum schedule. Then no subschedule $M_i$ formed by procedure $P_{5/4}(C)$ of a minimal counterexample will dominate $M_j^*$.

Proof. Let us recall that $P$ contains only jobs greater than 1. Suppose $M_i$ dominates $M_j^*$ and let $f : S_j^* \to S_i$ be the mapping involved. Consider the job set $J' = J \setminus S_i$ and the set of release times $R' = R \setminus \{r_i\}$ obtained by deleting the items of $M_i$ and the release time $r_i$, respectively. It is easily seen that the heuristic packing of $J'$ will be identical to that for $J$ except that machine $M_i$ is missing. Of course, the output will be NO again. Now, construct a new packing from the optimum assignment. First, remove all black jobs from $M_j^*$ and put all black jobs from $M_i^*$ on machine $M_j^*$. This is possible, since $r_j \le r_i$ holds. Then interchange each item $p$ of the removed elements with its image $f(p)$. Since $\ell(p) \le \ell(f(p))$ holds, there will be no trouble with the capacity restrictions of the bins. Finally, remove all remaining jobs of $S_i$ from the packing. Obviously, the machines $M \setminus M_i^*$ contain only items of $J'$. Thus, we obtain a packing of $J'$ into $m - 1$ machines with release times $R'$. This contradicts the presumed minimality of $m$. $\Box$

Please note that by Lemma 2 an undominated schedule can contain no zero-item bins.

As we go through the steps of the algorithm, we will show each time, by applying the domination lemma, that in a minimal counterexample no jobs with the specified properties can exist, taking advantage of the fact that all black jobs have length greater than 1.

Step 1: In an optimum schedule, red jobs of length greater than 2 can only be packed in a 1-item bin. We pack $r(1)$ with the largest piece that fits together with $r(1)$. The domination lemma can be applied and we conclude that in our minimal counterexample all red jobs have length less than or equal to 2.

Step 2: Of course, each black job greater than 3 has to be packed in a 1-item bin. Analogously to Step 1, we deduce that in a minimal counterexample all black jobs are less than or equal to 3.

Step 3.1: In this step, we determine the number of remaining 1-item bins. Let $h_1$ be the number of 1-item bins in the optimum solution. Because we execute the algorithm for all possible values of $k$, we may from now on focus on the case when $k$ is set to $h_1$ in 3.1. Consequently, there is the same number of 1-item bins in the schedule of our procedure as in the optimum schedule. We claim that $h_1$ must be zero: it is feasible to pair $r(1)$ and $p(1)$ because $r(1) + p(1) \le 2 + 3 = 5$, and this pair dominates any 1-item bin in the optimum solution. It follows from the domination lemma that both the heuristic solution and the optimum solution contain no 1-item bins.

Step 3.2: Let $q$ be the black job and $r$ the red job packed together with $p(1)$ in a bin. Our packing is feasible since $p(1) + q + r \le 3 + 1.5 + 0.5 = 5$. In the optimum solution, $p(1)$ is also in a 2-item bin; let $\tilde{q}$ and $\tilde{r}$ be the jobs packed together with $p(1)$ in that bin. $p(1) > 2.5$ implies $\tilde{q} < 1.5$, and $p(1) + \tilde{q} > 3.5$ implies $\tilde{r} < 0.5$. As a result, $q \ge \tilde{q}$ and $r \ge \tilde{r}$. Domination forces all black jobs to be less than or equal to 2.5.

Step 3.3: Let $r(1)$ be packed with $p$ and $q$ ($p \ge q$) in the heuristic. Then $r(1) + p + q \le r(1) + (3 - r(1)) + (2 - r(1)/2) \le 5$, which guarantees feasibility. The black jobs $\tilde{p}$ and $\tilde{q}$ ($\tilde{p} \ge \tilde{q}$) packed together with $r(1)$ in the optimum solution have the following upper bounds on their lengths: $\tilde{p} \le 3 - r(1)$ and $\tilde{q} \le (4 - r(1))/2$. Domination occurs, thus all red jobs are at most 1 in a minimal counterexample.

Step 3.4: The argument and conclusion are quite similar to 3.2 and 3.3.

Step 3.5.1: So far, we have obtained that all red jobs are less than or equal to 1 and all black jobs are less than or equal to 2. Now each combination of two black jobs and one red job fits in a bin with a capacity of 5, and it is permitted to pack the pieces as chosen in 3.5.1. Let $h_2$ be the number of 2-item bins in the optimum solution and consider the case where $j$ is equal to $h_2$, as in Step 3.1. Domination implies $h_2 = 0$. Hence, in the optimum schedule there are only 0-item bins and 3-item bins, and in the heuristic schedule there are only 0-item bins, 3-item bins and 4-item bins.

Step 3.5.2: Let $q_1, q_2$ and $q_3$ ($q_1 \ge q_2 \ge q_3$) be the black jobs packed together with $r(1)$ by the procedure. $r(1) + q_1 + q_2 + q_3 \le r(1) + (2 - r(1)) + (3/2 - r(1)/2) + (4/3 - r(1)/3) < 5$ implies feasibility. Let $r(1)$ be packed with $\tilde{q}_1, \tilde{q}_2$ and $\tilde{q}_3$ ($\tilde{q}_1 \ge \tilde{q}_2 \ge \tilde{q}_3$) in the optimum solution. It is straightforward to see that $\tilde{q}_1 < 2 - r(1)$, $\tilde{q}_2 < 3/2 - r(1)/2$ and $\tilde{q}_3 < 4/3 - r(1)/3$ hold.

Altogether, the effect of domination is that in a minimal counterexample there is no machine in the optimum schedule that is not dominated by a subschedule produced by the heuristic, which contradicts the domination lemma. Thus, we have shown that $P_{5/4}(C)$ is a 5/4-relaxed decision procedure for instances in which all black jobs are greater than 1. It remains to extend this result to arbitrary job sets.

Suppose that an optimum solution with a makespan of at most 4 exists. If we run procedure $P_{5/4}(C)$, there are two values $\tilde{k}$ and $\tilde{j}$ for $k$ and $j$, respectively, such that all jobs greater than 1 are placed into bins with a capacity of 5, and we are about to execute Step 3.5.4. Because elements from $P_2$ are less than or equal to 1, Step 3.5.4 never packs any bin with more than 5 units. Moreover, Step 3.5.4 can only fail if each bin is filled with more than the capacity of 4; but then no schedule with makespan at most 4 could exist, which is a contradiction. This finishes the proof that $P_{5/4}(C)$ is a 5/4-relaxed decision procedure.

We can summarize our results in the following theorem:

Theorem 3. The approximation algorithm $A_{5/4}$ for $P, r_j \,\|\, C_{\max}$ using the 5/4-relaxed decision procedure $P_{5/4}(C)$ has a worst-case bound of 5/4.

The reader might suspect that a more detailed analysis of the algorithm would yield even better bounds than 5/4. This is not the case: the example below shows that the extended capacity $\tilde{C}$ cannot be smaller than $\frac{5}{4}C$.

Take $m = 5$ and choose the release times as $r_1 = r_2 = 6$, $r_3 = 1$, $r_4 = r_5 = 0$. The processing times are $p_1 = p_2 = 9$, $p_3 = p_4 = 7$, $p_5 = p_6 = \cdots = p_{11} = 5$. We get $C^* = 16$ (an optimum schedule puts two jobs of length 5 on each of $M_1$ and $M_2$, three jobs of length 5 on $M_3$, and one job of length 9 and one of length 7 on each of $M_4$ and $M_5$), but for $\tilde{C} < 20$ our procedure produces the output NO. Of course, this example can be extended to an arbitrary number of machines: just give all extra machines a release time of zero and add three jobs of processing time 5 for each extra machine.

A straightforward, simple algorithm to be used within a binary search would be BEST FIT. For a given capacity $\tilde{C}$, this algorithm takes the greatest job from the list and puts it in the bin with the greatest load where it fits. Procedure $P_{5/4}(C)$ has a better worst-case performance than BEST FIT. For example, take six machines with release times $r_1 = r_2 = r_3 = 7$, $r_4 = 2$, $r_5 = r_6 = 0$ and jobs with processing times $p_1 = p_2 = p_3 = 11$, $p_4 = p_5 = 8$, $p_6 = p_7 = \cdots = p_{12} = 6$. For the optimum makespan we get $C^* = 19$, whereas for any $\tilde{C}$ less than 24 BEST FIT produces no valid assignment.
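Read literally, this BEST FIT rule admits the following sketch (our rendering; the tie-breaking among equally loaded bins is an assumption):

```python
def best_fit(p, r, cap):
    """BEST FIT for a given (extended) capacity cap: take the jobs in
    nonincreasing order and put each into the bin with the greatest load
    among those where it still fits; bin i starts with load r[i].
    Returns the final loads, or None if some job cannot be placed."""
    loads = list(r)
    for job in sorted(p, reverse=True):
        candidates = [i for i in range(len(loads)) if loads[i] + job <= cap]
        if not candidates:
            return None                                   # no valid assignment
        i = max(candidates, key=lambda i: loads[i])       # fullest bin that still fits
        loads[i] += job
    return loads

# The example above: with cap = 23 this returns None, although C* = 19.
# best_fit([11, 11, 11, 8, 8, 6, 6, 6, 6, 6, 6, 6], [7, 7, 7, 2, 0, 0], 23)
```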

3. A 2/3-algorithm to maximize the minimum completion time

In this section we will present a 2/3-approximation algorithm for $P, r_j \,\|\, C_{\min}$. A heuristic for this problem can be employed as a subprocedure for an inventory control problem. Thus, finding better approximation algorithms for $P, r_j \,\|\, C_{\min}$ would help to obtain better algorithms for the following problem [12]: given $n$ processes, each consuming or producing a certain job-dependent quantity of a resource, and one stock, we want to find a schedule of the processes such that there is always a sufficient quantity in the stock and the maximum load of the stock is minimized.

Again, our algorithm will be a dual approximation algorithm, and the definition of a $q$-relaxed decision procedure ($q < 1$) is analogous to Section 2. In the following, assume the machines to be ordered such that $r_1 \le r_2 \le \cdots \le r_m$. W.l.o.g. let machine $M_1$ be ready for execution at time zero, i.e., $r_1 = 0$.

The objective of maximizing the minimum completion time forces us to occupy each machine for as long as possible. In this context, the release time $r$ of a machine should not be interpreted as the time when the machine starts to work but as a time limit $r$ until which the machine is already busy. Thus it makes no sense to exclude machines from consideration solely on the grounds of high release times.

Definition. A $q$-relaxed decision procedure $P_q(C)$ is defined as follows. As input we have the processing times $P$, release times $R$ and a given demand $C$. As output either the answer NO or the answer ALMOST is produced.

(i) If the output is ALMOST, $P_q(C)$ produces a schedule with minimum completion time at least $qC$.

(ii) If the output is NO, there is no schedule with minimum completion time at least $C$.

Lemma 4. For each polynomial $q$-relaxed decision procedure there is a polynomial $q$-approximation algorithm $A_q$ using binary search. For $k$ iterations the resulting solution has minimum completion time at least $q(1 - 2^{-k})C^*$.

Proof. The argument is similar to that presented in Lemma 1. Define $T(m)$ in the following way:
$$T(m) := \frac{1}{m}\left(\sum_{i=1}^{n} p_i + \sum_{j=1}^{m} r_j\right).$$
Obviously, $T(m)$ holds as an upper bound for $C^*$, and zero is a trivial lower bound for $C^*$. These bounds serve to initialize the binary search. The upper bound $u$ and the lower bound $l$ interchange their roles: $l$ serves as an estimate for $C^H$ and $u$ is an upper bound for the optimum solution. After $k$ loops we get
$$C^H \ge ql = q\bigl((l - u) + u\bigr) \ge (1 - 2^{-k})qC^*,$$
which yields the desired worst-case guarantee. $\Box$

Define $P_1 := \{p_i \mid p_i > \frac{2}{3}C\}$ and $P_2 := \{p_i \mid \frac{2}{3}C \ge p_i > \frac{1}{3}C\}$. Let $n_1$ and $n_2$ denote the cardinalities of $P_1$ and $P_2$, respectively. We now have enough tools to introduce our procedure $P_{2/3}(C)$.

Procedure P2/3(C).
Step 1. Put the $n_1$ elements of $P_1$ on machines $M_1, \ldots, M_{n_1}$. If $n_1 \ge m$ goto 7.
Step 2. If $n_2 \ge 2(m - n_1)$ goto 7, else set $k := \max\{0, n_2 + n_1 - m\}$.
Step 3. Put the $2k$ smallest elements of $P_2$ on the machines $M_{n_1+1}, \ldots, M_{n_1+k}$ such that each of these machines contains 2 jobs.
Step 4. Assign the $n_2 - 2k$ remaining elements of $P_2$ to the machines with indices $n_1+k+1, \ldots, n_1+n_2-k$ (the largest element to the machine with the smallest release time, the second largest element to the machine with the second smallest release time, and so on).
Step 5. {LPT algorithm} Assign the biggest remaining job to the machine with the smallest load. Repeat this step until all jobs are put on a machine.
Step 6. If the minimum completion time is at least $\frac{2}{3}C$ goto 7. Otherwise set $k := k + 1$; if $k \le n_2/2$ goto 3, else output NO. STOP.
Step 7. Output ALMOST.
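The following is a compact sketch of the whole procedure under our reading of Steps 1 to 7 (the data handling, the names and the treatment of degenerate values of $k$ are our assumptions; the schedule built in an accepting iteration realizes the $\frac{2}{3}C$ bound):

```python
import heapq

def p23_decision(p, r, C):
    """Sketch of P_2/3(C): True means ALMOST (minimum completion time >= (2/3)C
    reached), False means NO.  r must be sorted nondecreasingly with r[0] == 0."""
    m = len(r)
    P1 = [x for x in p if x > 2 * C / 3]
    P2 = sorted((x for x in p if C / 3 < x <= 2 * C / 3), reverse=True)
    rest = sorted((x for x in p if x <= C / 3), reverse=True)
    n1, n2 = len(P1), len(P2)
    if n1 >= m or n2 >= 2 * (m - n1):           # Steps 1 and 2: trivially ALMOST
        return True
    for k in range(max(0, n1 + n2 - m), n2 // 2 + 1):   # Steps 3-6 for each k
        load = list(r)                           # machine i is "busy" until r[i]
        for i, x in enumerate(P1):               # Step 1: one huge job per machine
            load[i] += x
        small = P2[n2 - 2 * k:]                  # Step 3: 2k smallest of P2, in pairs
        for i in range(k):
            load[n1 + i] += small[2 * i] + small[2 * i + 1]
        for i, x in enumerate(P2[:n2 - 2 * k]):  # Step 4: largest remaining elements,
            load[n1 + k + i] += x                # matched to the smallest release times
        heap = [(load[i], i) for i in range(m)]  # Step 5: LPT for the remaining jobs
        heapq.heapify(heap)
        for x in rest:
            l, i = heapq.heappop(heap)
            heapq.heappush(heap, (l + x, i))
        if min(l for l, _ in heap) >= 2 * C / 3: # Step 6: accept this k?
            return True                          # Step 7: ALMOST
    return False                                 # output NO
```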

That $P_{2/3}(C)$ produces ALMOST correctly is trivial. Let the subschedules in an assignment $B^*$ with a minimum completion time greater than or equal to $C$ be denoted by $B^*_i$ ($i = 1, \ldots, m$) and let $h$ be the number of $B^*_i$ containing at least two elements of $P_2$. We claim that if $k$ is set to $h$ in Step 6, then Steps 3 to 5 produce a schedule with a minimum completion time of at least $\frac{2}{3}C$. Assume that this is not true and consider a fixed counterexample where the number of machines $m$ is minimal over all counterexamples. The sets $A_i$ used in the following are defined as the subschedules constructed by the algorithm for $k = h$.

First, we observe that $n_1$ must be zero. Otherwise, we derive a smaller counterexample by removing machine $M_1$ and $p_1$. The behavior of our algorithm does not change and it is easy to see that the counterexample still has a minimum completion time greater than or equal to $C$.

Next, we claim that $h$ must be zero in the smallest counterexample. Indeed, if $h > 0$ holds, we remove machine $M_1$ and the two smallest elements of $P_2$ from the counterexample. Let $B^*_j$ contain two elements $p$ and $q$ of $P_2$. We modify $B^*$ in the following way: $B^*_j$ is replaced by $B^*_1$, the subschedule of machine $M_1$, and in the sets $B^*_i$ the two smallest elements $\tilde{p}$ and $\tilde{q}$ of $P_2$ are replaced by $p$ and $q$. It can easily be checked that the resulting schedule for $M \setminus M_1$ and $P \setminus \{\tilde{p}, \tilde{q}\}$ has a minimum completion time greater than or equal to $C$ and has exactly $h - 1$ subschedules containing two or more elements of $P_2$. Moreover, the behavior of our procedure for $k = h - 1$ does not change, and we end up with a smaller counterexample.

Observe that $\ell(A_i) \le C$ cannot be true for all $i$, $1 \le i \le m$, due to the assumption of our counterexample. Now let $r \le m$ be the smallest index with $\ell(A_r) > C$ and let $\tilde{a}$ denote the smallest element of $A_r$. By the construction of $A_r$, we know that $\ell(A_r) - \tilde{a} < \frac{2}{3}C$. This yields $\tilde{a} > \frac{1}{3}C$ and, because of $h = 0$, $A_r$ cannot contain any other element. Consequently, $A_r = \{\tilde{a}\}$ and $\frac{2}{3}C \ge \tilde{a} > C - r_r$. Next, we distinguish two cases.

Case 1: There exists $\tilde{b} \ge \tilde{a}$ in some $B^*_s$ with $s \ge r$. In this case remove machine $M_r$ and $\tilde{a}$ from the counterexample. For $k = 0$ our procedure behaves just as before: instead of placing the non-existing $\tilde{a}$ on machine $M_r$, it immediately proceeds to machine $M_{r+1}$, and again it will produce the output NO. In the assignment $B^*$ we first exchange the sets $B^*_s$ and $B^*_r$. The minimum completion time will not decrease since $\tilde{b} \ge \tilde{a} > C - r_r$ holds. Next, we exchange the positions of $\tilde{a}$ and $\tilde{b}$. But now $\tilde{a} \in B^*_r$. If we delete $\tilde{a}$ and machine $M_r$, we have found a smaller counterexample and derived a contradiction.

Case 2: All $\tilde{b} \ge \tilde{a}$ are in sets $B^*_i$ with $i \le r - 1$. Step 4 assigns to each of the machines $M_1, \ldots, M_r$ exactly one element greater than or equal to $\tilde{a}$. Consequently, one of the subschedules $B^*_i$ with $i \le r - 1$ has to contain two elements of $P_2$, contrary to $h = 0$. This is the final contradiction and we can formulate the main result of this section.

Theorem 5. The approximation algorithm $A_{2/3}$ for $P, r_j \,\|\, C_{\min}$ using the 2/3-relaxed decision procedure $P_{2/3}(C)$ has a worst-case bound of 2/3.

Remark. It would be interesting to know:
(a) whether the 2/3 bound for $A_{2/3}$ is tight;
(b) how good the worst-case behavior of the LPT algorithm is.

We believe that both LPT and $A_{2/3}$ are somewhat better than 2/3, and $A_{2/3}$ should yield a better worst-case performance than LPT since $A_{2/3}$ can be considered as an extended version of LPT. To find a tight bound seems to be very hard for both algorithms. Recall that even the proof of the $(3m-1)/(4m-2)$ bound for LPT for maximizing the minimum completion time (without release times!) was very complicated [7,8].

4. Computational results

In order to assess the performance of the algorithms for minimizing the makespan under release time constraints, we have compared the average behavior of four algorithms: the presented 5/4-approximation algorithm $A_{5/4}$, LPT, MLPT, and BEST FIT embedded in a binary search over the capacity (the test being whether at most $m$ machines are used). The latter algorithm is denoted as BEST. For the testing we used a slight modification of $P_{5/4}(C)$: instead of jumping directly to 3.5.6 or 3.6, respectively, we first test whether the remaining jobs can be assigned to the $m$ bins using LPT; if this is not possible, then we raise the index $j$ or $k$, respectively, by one. Of course, these modifications do not influence the worst-case behavior. We have taken the processing times and release times to be integers. Thus, we are able to run the binary search until the difference between the current upper and lower bounds is less than one.

The parameters that we vary for computational testing are the number of machines $m$, the number of jobs $n$ and the ratio $r = p_1/p_n$ of the longest to the smallest processing time. For $m$ we used the values 2, 3, 5, 7 and 9. The number of jobs is expressed by AJN, the average job number per machine. AJN $= k$ means that the number of jobs generated is randomly selected from $\{km - \lfloor m/2 \rfloor, \ldots, km + \lfloor m/2 \rfloor\}$. We tested six different values for AJN, namely 1.5, 2, 3, 5, 7 and 9. For higher values of AJN the results of the heuristic solutions were similar to the optimum values. For $r$ we chose four typical values, namely 1.1, 1.8, 2.5 and 10. Since the results for 2.5 are very similar to those for 1.8, we omit the corresponding tables. Processing times are generated as uniformly distributed integers between 100 and $100r$. We generate the release times by assigning the processing times randomly onto the $m$ machines; in this procedure we ensure that at least one job is put onto each machine.

Table 1. r = 1.1

m  AJN  LPT    MLPT   A5/4   BEST     AJN  LPT    MLPT   A5/4   BEST
2  1.5  0.370  0      0      0.125    2    0.532  0.691  0.211  0.830
2  3    0.466  0.573  0.182  0.737    5    0.486  0.535  0.486  1.094
2  7    0.584  0.603  0.584  1.124    9    0.421  0.449  0.421  1.31
3  1.5  0.572  0      0.116  0.236    2    0.586  0.722  0.337  0.819
3  3    0.673  0.663  0.391  1.014    5    0.750  0.747  0.750  1.184
3  7    0.740  0.716  0.740  1.504    9    0.764  0.765  0.764  1.763
5  1.5  1.023  1.199  0.552  0.928    2    1.212  0.923  0.579  0.982
5  3    0.969  1.039  0.836  1.327    5    1.151  1.105  1.151  1.309
5  7    1.048  1.013  1.048  1.674    9    1.002  0.858  1.002  1.769
7  1.5  1.253  1.555  0.749  0.912    2    1.635  1.508  1.005  1.063
7  3    1.157  1.090  1.053  1.176    5    1.295  1.220  1.295  1.446
7  7    1.188  1.093  1.158  1.672    9    1.090  1.018  1.090  1.904
9  1.5  1.492  1.652  1.017  1.084    2    1.544  1.542  1.112  1.166
9  3    1.467  1.385  1.343  1.329    5    1.377  1.272  1.377  1.434
9  7    1.372  1.249  1.341  1.738    9    1.369  1.257  1.369  1.776

Table 2. r = 1.8

m  AJN  LPT    MLPT   A5/4   BEST     AJN  LPT    MLPT   A5/4   BEST
2  1.5  3.016  0      0      0        2    1.219  1.640  1.274  2.400
2  3    2.983  3.490  0.848  3.582    5    1.870  2.209  1.841  3.293
2  7    2.109  2.296  2.109  1.658    9    1.895  2.048  1.895  1.573
3  1.5  3.913  0      0.299  0.004    2    5.031  4.628  1.305  3.628
3  3    4.161  3.953  2.112  4.346    5    3.611  3.506  3.569  2.689
3  7    3.089  2.942  3.089  2.309    9    2.321  2.254  2.321  1.626
5  1.5  6.267  6.692  2.698  3.697    2    5.892  6.346  3.334  3.862
5  3    5.276  5.039  3.623  3.658    5    4.186  3.704  4.022  2.816
5  7    3.007  2.775  3.006  2.030    9    2.326  2.411  2.325  1.696
7  1.5  6.356  8.027  2.682  4.050    2    6.601  6.689  3.539  3.781
7  3    5.404  5.533  4.335  3.743    5    4.043  3.753  4.016  2.975
7  7    3.304  3.296  3.304  2.030    9    2.303  2.231  2.302  1.510
9  1.5  7.401  8.754  4.120  4.123    2    6.896  7.901  4.497  3.851
9  3    6.652  6.470  5.404  3.644    5    4.124  3.992  4.124  2.641
9  7    2.846  2.679  2.846  2.057    9    2.247  2.160  2.247  1.708

All machines are then ``filled up'' with release times such that all subschedules have the same length. W.l.o.g. we assume that $r_m$ is equal to zero. Moreover, this gives an easy way to construct optimal schedules without significant wastage of computation time.
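A sketch of this instance generator, under the description above (the names and the particular random-number calls are our assumptions):

```python
import random

def generate_instance(m, ajn, ratio, seed=None):
    """Integer processing times in [100, 100*ratio], spread randomly over m
    machines (each machine gets at least one job); release times then fill
    every machine up to the common length, so the optimum makespan is known."""
    rng = random.Random(seed)
    n = rng.randint(int(ajn * m) - m // 2, int(ajn * m) + m // 2)   # AJN rule
    p = [rng.randint(100, int(100 * ratio)) for _ in range(n)]
    machine_of = list(range(m)) + [rng.randrange(m) for _ in range(n - m)]
    rng.shuffle(machine_of)                 # at least one job per machine
    loads = [0] * m
    for job, i in zip(p, machine_of):
        loads[i] += job
    opt = max(loads)                        # common subschedule length = C*
    release = [opt - load for load in loads]
    return p, release, opt
```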

The results in the tables show the average percentage errors (averaged over 50 runs) in comparison to the optimum solution.

We see from Tables 1 to 3 that the average percentage error for $r = 1.8$ and $r = 2.5$ is generally higher than for $r = 1.1$ and for $r = 10$. The reason for this behavior is that for an $r$ value of 1.1 the processing times are very similar, so an assignment of the jobs which is not optimal does not produce great deviations. For $r = 10$ (and for higher values of $r$) the existence of very small jobs raises the probability of a good approximation of the optimum solution value. The higher the number of jobs and machines, the better is the behavior of BEST. $A_{5/4}$ is superior for the case of a smaller number of jobs and machines. Only for $r = 1.1$ and large $m$ and $n$ (see Table 1) is MLPT the best of the four algorithms. As expected, LPT is inferior to MLPT. The results suggest that it is reasonable to attack the problem of minimizing the makespan with machine release times using BEST and $A_{5/4}$ complementarily.

Unsurprisingly, the two dual heuristics $A_{5/4}$ and BEST need much more computation time than the primal algorithms. Since the maximal CPU time we needed was only 2.4 seconds, this should play no role in the decision of which algorithm to use.

Acknowledgement

I want to thank Prof. Jochen Hülsmann for helpful discussions on the topic and Gottfried Amtmann for performing the numerical tests.

References

[1] Graham, R.L. (1969) Bounds on multiprocessing timing anomalies. SIAM Journal of Applied Mathematics, 17, 263–269.

[2] Coffman Jr, E.G., Garey, M.R. and Johnson, D.S. (1978) An application of bin-packing to multiprocessor scheduling. SIAM Journal of Computing, 7, 1–17.

[3] Friesen, D.K. (1984) Tighter bounds for the multifit processor scheduling algorithm. SIAM Journal of Computing, 13, 35–59.

[4] Yue, M. (1990) On the exact upper bound for the MULTIFIT processor scheduling algorithm, in Operations Research in China, Yue, M. (ed.), Vol. 24 of Annals of Operations Research, Baltzer, Basel, Switzerland, pp. 233–259.

[5] Hochbaum, D.S. and Shmoys, D. (1987) Using dual approximation algorithms for scheduling problems: theoretical and practical results. Journal of the Association for Computing Machinery, 34, 144–162.

[6] Friesen, D.K. and Deuermayer, B.L. (1981) Analysis of greedy solutions for a replacement part sequencing problem. Mathematics of Operations Research, 6, 74–87.

[7] Deuermayer, B.L., Friesen, D.K. and Langston, M.A. (1982) Scheduling to maximize the minimum processor finish time in a multiprocessor system. SIAM Journal of Algebraic and Discrete Methods, 3, 190–196.

[8] Csirik, J., Kellerer, H. and Woeginger, G. (1992) The exact LPT-bound for maximizing the minimum completion time. Operations Research Letters, 11, 281–287.

[9] Stecke, K.E. and Suri, R. (eds) (1985) Flexible Manufacturing Systems: Operations Research Models and Applications, Vol. 3 of Annals of Operations Research, Baltzer, Basel, Switzerland.

[10] Schmidt, G. (1984) Scheduling on semi-identical processors. Zeitschrift für Operations Research, 28, 153–162.

[11] Lee, C.Y. (1991) Parallel machines scheduling with nonsimultaneous machine available time. Discrete Applied Mathematics, 30, 53–61.

[12] Kellerer, H., Kotov, V., Rendl, F. and Woeginger, G. (1998) The stock size problem. Operations Research, 46 (suppl.), 1–13.

[13] Graham, R.L. (1966) Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45, 1563–1581.

Table 3. r = 10

m  AJN  LPT    MLPT   A5/4   BEST     AJN  LPT    MLPT   A5/4   BEST
2  1.5  0.525  0      0      0        2    1.441  1.739  0.003  1.813
2  3    1.941  1.699  0.639  1.509    5    1.121  1.332  0.747  0.947
2  7    0.993  1.105  0.890  0.490    9    0.632  0.658  0.632  0.353
3  1.5  3.388  0.008  0.001  0.006    2    2.935  2.704  0.668  1.649
3  3    2.537  2.716  0.988  1.875    5    1.161  1.146  0.907  1.077
3  7    0.905  0.855  0.858  0.530    9    0.675  0.649  0.670  0.377
5  1.5  5.688  4.959  2.404  2.978    2    4.616  4.233  2.439  2.161
5  3    2.496  2.454  1.157  1.628    5    1.421  1.503  0.968  0.748
5  7    0.975  0.917  0.961  0.456    9    0.789  0.802  0.789  0.344
7  1.5  6.957  6.823  3.749  2.675    2    4.811  4.444  2.857  2.662
7  3    2.678  2.618  1.516  1.539    5    1.358  1.313  1.190  0.592
7  7    1.062  1.046  1.057  0.371    9    0.611  0.618  0.611  0.224
9  1.5  5.386  5.807  3.437  2.349    2    5.002  4.842  3.101  1.883
9  3    2.519  2.508  1.468  1.289    5    1.407  1.271  1.101  0.569
9  7    0.917  0.979  0.917  0.330    9    0.694  0.680  0.684  0.233


Biography

Hans Kellerer is Associate Professor of Operations Research in the Department of Business Administration at the University of Graz, Austria. He obtained his Ph.D. in Mathematics from the Technical University of Graz. His research interests include scheduling, knapsack problems, approximation algorithms and combinatorial optimization.
