
Scheduling Jobs with Piecewise Linear Decreasing Processing Times

T.C. Edwin Cheng,1 Qing Ding,1 Mikhail Y. Kovalyov,2

Aleksander Bachman,3 Adam Janiak3

1 Department of Management, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, People's Republic of China

2 Institute of Engineering Cybernetics, National Academy of Sciences of Belarus, Surganova 6, 220012 Minsk, Belarus

3 Institute of Engineering Cybernetics, Wroclaw University of Technology, Janiszewskiego 11/17, 50-372 Wroclaw, Poland

Received 9 January 2001; revised 11 April 2002; accepted 25 November 2002

10.1002/nav.10073

Abstract: We study the problems of scheduling a set of nonpreemptive jobs on a single or multiple machines without idle times, where the processing time of a job is a piecewise linear nonincreasing function of its start time. The objectives are the minimization of makespan and minimization of total job completion time. The single machine problems are proved to be NP-hard, and some properties of their optimal solutions are established. A pseudopolynomial time algorithm is constructed for makespan minimization. Several heuristics are derived for both total completion time and makespan minimization. Computational experiments are conducted to evaluate their efficiency. NP-hardness proofs and polynomial time algorithms are presented for some special cases of the parallel machine problems. © 2003 Wiley Periodicals, Inc. Naval Research Logistics 50: 531–554, 2003.

Keywords: machine scheduling; start time dependent processing times; computational complexity

1. INTRODUCTION

The following scheduling problem with start time dependent job processing times is studied. There are n independent nonpreemptive jobs, which are simultaneously available, to be scheduled for processing on m parallel machines. Each job can be completely processed by any machine. Each machine can handle at most one job at a time and cannot stand idle until the last

Correspondence to: T.C.E. Cheng

© 2003 Wiley Periodicals, Inc.


job assigned to it has finished processing. A schedule is characterized by the sequences of jobs arranged in order of processing on the machines. The processing time of job $j$ scheduled on machine $l$ depends on its start time $t$ in the following way:

$$p_{lj} = \begin{cases} a_{lj}, & \text{if } t \le y, \\ a_{lj} - b_{lj}(t - y), & \text{if } y < t \le Y, \\ a_{lj} - b_{lj}(Y - y), & \text{if } t > Y. \end{cases}$$

On machine $l$, each job $j$ thus has a normal processing time $a_{lj}$, a common initial decreasing date $y$, after which the processing time starts to decrease linearly with a decreasing rate $b_{lj}$, and a common final decreasing date $Y$ ($Y > y$), after which it decreases no further. It is assumed that $0 < b_{lj} < 1$ and $a_{lj} > b_{lj}(\min\{\sum_{i=1}^{n} a_{li} - a_{lj}, Y\} - y)$ hold for each job $j$ and machine $l$. The first condition ensures that the decrease of each job processing time is less than one unit per unit of delay in its starting moment. The second condition guarantees positive job processing times. These conditions are natural for real-life applications (see Ho, Leung, and Wei [14]). Given a schedule, the job completion times $C_j$, $j = 1, \ldots, n$, are easily determined. Two classical scheduling criteria are considered: makespan, $C_{max} = \max\{C_j \mid j = 1, \ldots, n\}$, and total completion time, $\sum_{j=1}^{n} C_j$. These criteria are related to the throughput time and the total work-in-process inventories of a production system, respectively.
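For illustration, the definition above can be evaluated directly. The following sketch (with hypothetical data, not taken from the paper) computes the piecewise linear processing times and the resulting completion times of a job sequence on a single machine:

```python
def processing_time(a, b, t, y, Y):
    """Piecewise linear nonincreasing processing time of a job started at time t."""
    if t <= y:
        return a                    # before the initial decreasing date: normal time
    if t <= Y:
        return a - b * (t - y)      # linear decrease between y and Y
    return a - b * (Y - y)          # no further decrease after the final date Y

def completion_times(seq, a, b, y, Y):
    """Completion times of jobs processed in the order `seq` without idle times."""
    t, C = 0.0, []
    for j in seq:
        t += processing_time(a[j], b[j], t, y, Y)
        C.append(t)
    return C

# hypothetical instance: n = 3 jobs, y = 2, Y = 10
a = [4, 5, 3]
b = [0.5, 0.25, 0.1]
C = completion_times([0, 1, 2], a, b, y=2, Y=10)
print(C)                 # completion times; C[-1] is the makespan
print(max(C), sum(C))    # makespan and total completion time
```

Since there are no idle times, the start time of each job equals the completion time of its predecessor, which is exactly what `completion_times` accumulates.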

Two variants of the above problem are distinguished. If it is known that all jobs can be completed by $Y$ in any schedule, then the corresponding problem is called unbounded; otherwise, it is called bounded. It is assumed that all $a_{lj}$, $y$, and $Y$ are integral and all $b_{lj}$ are rational, so that $b_{lj} = v_{lj}/L$, where the $v_{lj}$ are integers and $L$ is a natural number, $j = 1, \ldots, n$, $l = 1, \ldots, m$. For the case of a single machine or identical machines, the index $l$ is omitted in the corresponding notation.

We adopt the three-field notation of Graham et al. [12] for describing traditional scheduling problems, $\alpha/\beta/\gamma$, to denote our problem and its special cases. We add the description $p_{lj} = a_{lj} - b_{lj}(t - y)$ in the second field to indicate the presence of start time dependent job processing times and the description $Y = \infty$ to signify the unbounded problems. Some other descriptions, which are easily understood, are also added to the second field.

Scheduling models with start time dependent job processing times began to attract the attention of the research community only in the last two decades. Most fundamental results have been obtained for the situation where there is a single machine, the job processing time is a linear increasing function of the job start time, i.e., $p_j = a_j + b_j(t - y)$ for each $j$, $y \ge 0$, $Y = \infty$, and the objective is to minimize the makespan.

Assuming $y = 0$ and $p_j = a_j + b_j t$, Gupta and Gupta [13] prove that the unbounded problem is solved by sequencing the jobs in nonincreasing order of $b_j/a_j$. Browne and Yechiali [4] consider a stochastic version of this problem. They show that the expected makespan is minimized when the jobs are sequenced in nondecreasing order of $E(a_j)/b_j$, where $E(a_j)$ is the mean of $a_j$. Glazebrook [10, 11] extends the latter model by incorporating precedence relations between the jobs and allowing preemptions. Kunnathur and Gupta [20] address the unbounded problem with individual initial increasing dates $y_j$ for the jobs. They develop a branch-and-bound algorithm and a dynamic programming algorithm for this problem.

Mosheiov [23] addresses the unbounded problem of minimizing total completion time where $y = 0$ and $p_j = 1 + b_j t$ for all $j$. He proves that the optimal job sequences are V-shaped: the jobs appearing before the job with the smallest $b_j$ are sequenced in nonincreasing order of $b_j$, and the remaining jobs are sequenced in nondecreasing order of $b_j$. This property provides


a basis for an $O(n2^{n-3})$ enumerative algorithm and some heuristics. Mosheiov [24] also derives polynomial algorithms for the unbounded problems of minimizing the makespan, total completion time, and number of tardy jobs if $y = 0$ and $p_j = b_j t$ for all $j$, where it is assumed that the processing of the first job starts at a given small positive time.
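Mosheiov's V-shape property can be illustrated by brute force on a toy instance. The sketch below (with hypothetical rates, assuming the model $p_j = 1 + b_j t$ and $y = 0$) finds an optimal sequence by enumeration and checks that it is V-shaped:

```python
from itertools import permutations

def total_completion_time(seq, b):
    """Total completion time for p_j = 1 + b_j * t (y = 0, unbounded)."""
    t = total = 0.0
    for j in seq:
        t += 1 + b[j] * t          # processing time grows with the start time t
        total += t
    return total

def is_v_shaped(seq, b):
    """Nonincreasing b before the smallest-b job, nondecreasing b after it."""
    k = min(range(len(seq)), key=lambda i: b[seq[i]])
    head = [b[j] for j in seq[:k + 1]]
    tail = [b[j] for j in seq[k:]]
    return all(x >= z for x, z in zip(head, head[1:])) and \
           all(x <= z for x, z in zip(tail, tail[1:]))

b = [0.30, 0.05, 0.20, 0.12]       # hypothetical decreasing... here growth rates
best = min(permutations(range(len(b))), key=lambda s: total_completion_time(s, b))
print(best, is_v_shaped(best, b))  # an optimal sequence, and whether it is V-shaped
```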

Kubiak and van de Velde [19] prove that the unbounded problem of minimizing makespan is NP-hard in the ordinary sense if $y > 0$ and $p_j = a_j + b_j(t - y)$ for all $j$. They also provide pseudopolynomial algorithms for both the unbounded and bounded cases. Kovalyov and Kubiak [18] develop a fully polynomial time approximation scheme for the bounded case.

There are many applications of the model where the job processing time is an increasing function of the job start time. These include the control of queues and communication systems, shops with deteriorating machines and/or delayed maintenance or cleaning, fire fighting, hospital emergency wards, the scheduling of steel rolling mills, etc. ([4], [20], [23], [19]).

The single machine model with $p_j = a_j - b_j t$ has recently been suggested by Ho, Leung, and Wei [14]. They show that the sequence in nonincreasing order of $a_j/b_j$ is optimal for the unbounded problem of minimizing maximum lateness if $y = 0$ and the jobs have a common due date. This problem is motivated by military applications, where the task is to destroy an aerial threat and its execution time decreases with time: the longer the action is delayed, the closer the threat gets. Chen [6] gives an $O(n^2)$ algorithm for the problem where the objective is to minimize the number of tardy jobs.

Another application of the model with decreasing job processing times can be found in environments where actual job processing times decrease due to the learning effect. As an example, consider a machine operator who receives an order to produce a batch of engine components that are similar in processing requirements but different in sizes. During the initial production period, his productivity is low as he needs to work out the proper operational procedures and work method through trial and error to produce the components. His productivity will gradually increase as a result of learning through practice and experience gained from repeating similar operations and procedures over time.

A review of the results on scheduling with time dependent processing times has recently beenprovided by Alidaee and Womer [1]. Recent contributions to this field, not mentioned in theabove review, have been made by Bachman [2], Bachman and Janiak [3], Gawiejnowicz [9], andKononov [17].

We introduce a model more general than that suggested by Ho, Leung, and Wei [14]. This model can provide a better approximation for some real-life situations where job processing times decrease with their start times. For example, in environments where the learning effect takes place, the productivity of an operator is at a low level initially, and it gradually increases to a stable level after some time because of physical and safety limitations.

We prove that the bounded problems $1/p_j = a_j - b_j(t - y), y = 0/C_{max}$ and $1/p_j = a_j - b_j(t - y), y = 0/\sum C_j$ are both NP-hard in the ordinary sense. We further establish some properties of the optimal solutions of the single machine problems and construct a pseudopolynomial time algorithm for the problem $1/p_j = a_j - b_j(t - y)/C_{max}$. It follows that this problem is NP-hard only in the ordinary sense. The strong NP-hardness of the problem $1/p_j = a_j - b_j(t - y), y = 0/\sum C_j$ remains an open question. A pseudopolynomial time algorithm for this problem, and for its extension to an arbitrary $y$, could be derived if such an algorithm existed for the problem with $y = 0$ and $Y = \infty$. The latter problem was studied by Ng et al. [25]. The authors show that sequencing the jobs in nondecreasing order of $a_j$ is optimal for the cases $b_j = b$, $j = 1, \ldots, n$, and $b_j = ka_j$, $j = 1, \ldots, n$. They derive a pseudopolynomial time algorithm for the case $a_j = a$, $j = 1, \ldots, n$, but fail to derive such an algorithm for the problem in its general setting.


For practical purposes, we derive several heuristics to solve the NP-hard single machine problems and conduct computational experiments to evaluate their efficiency.

We show that the unbounded problems with identical machines, $P2/p_j = a_j - b_j(t - y), b_j = b, y = 0, Y = \infty/C_{max}$ and $P/p_j = a_j - b_j(t - y), b_j = b, y = 0, Y = \infty/C_{max}$, are NP-hard and strongly NP-hard, respectively, and that the unbounded problem with unrelated machines, $R/p_{lj} = a_{lj} - b_{lj}(t - y), y = 0, b_{lj} = b, Y = \infty/\sum C_j$, is solvable in $O(n^3)$ time by a transformation to a weighted bipartite matching problem. If the machines are uniform, the latter time complexity can be further reduced to $O(n \log n)$.

We would like to contrast our results with those of Kubiak and van de Velde [19]. The present study of problems with piecewise linear decreasing job processing times is a continuation of the work reported in Cheng and Kovalyov [7]. We have attempted to adapt the NP-hardness proof and pseudopolynomial algorithms provided by Kubiak and van de Velde for the $C_{max}$ problem with piecewise linear increasing job processing times. However, the effort was unsuccessful because inverting the sign in the formula for the processing times does not lead to a "mirror" problem that could be analyzed in a similar way. The reason is that the processing times and, therefore, the $C_{max}$ value depend on the job sequence. In this case, a known inversion approach like that for the classical problems $1/r_j/C_{max}$ and $1//L_{max}$ (see Lawler et al. [22]) does not work.

We have exploited the specific characteristics of our problem to construct the NP-hardness proof, which is based on ideas different from those of Kubiak and van de Velde. Moreover, they did not study the $\sum C_j$ criterion. Our dynamic programming algorithm has some similarities with that of Kubiak and van de Velde. It is based on specific properties of the problem with decreasing job processing times, which are different from those established by Kubiak and van de Velde for the problem with increasing job processing times. As the main goal of our paper is to provide a classification of computational complexities, the derivation of a pseudopolynomial algorithm is important: its existence, coupled with the NP-hardness proof, establishes that a problem is NP-hard only in the ordinary sense.

2. SINGLE MACHINE

In this section, we prove that the bounded single machine problems are NP-hard in the ordinary sense, establish some properties of their optimal solutions, and show that the makespan can be minimized in pseudopolynomial time. We further describe several heuristics for these problems and conduct computational experiments to evaluate their efficiency.

2.1. NP-Hardness Proofs

THEOREM 1: The problem $1/p_j = a_j - b_j(t - y), y = 0/C_{max}$ is NP-hard.

PROOF: We show that the decision version of the above problem is NP-complete by a transformation from the known NP-complete problem PARTITION (Garey and Johnson [8]): Given positive integers $h_1, \ldots, h_r$, is there a set $S \subseteq \{1, \ldots, r\}$ such that $\sum_{j \in S} h_j = H$, where $2H = \sum_{j=1}^{r} h_j$?

Given any instance I of PARTITION, we define $V = (r!)(2r)^{3r+6}H^2$, $A = V^4$, $\delta = 1/V^{20}$, $\varepsilon = 1/V^{22}$, and construct the following instance II for our problem. There are $2r$ jobs $(1, j)$ and $(2, j)$, $j = 1, \ldots, r$, with normal processing times

$$a_{i,j} = A + \hat a_{i,j}, \quad i = 1, 2, \quad \text{where } \hat a_{1,j} = 2^j H + h_j, \quad \hat a_{2,j} = 2^j H,$$

and decreasing rates

$$b_{i,j} = \delta a_{i,j} - \varepsilon \hat b_{i,j}, \quad i = 1, 2, \quad \text{where } \hat b_{1,j} = (2r)^j H + \frac{h_j}{r - j + 1}, \quad \hat b_{2,j} = (2r)^j H.$$

It can easily be seen that the decreasing rates can be represented in the form $v_{i,j}/L$, where the $v_{i,j}$ are natural numbers and $L = (r!)V^{22}$. The common initial decreasing date is $y = 0$ and the common final decreasing date is $Y = rA + 2^{r+1}H - H$. Define

$$E = \frac{3Y^2}{2} - \frac{rA^2}{2} - A(2^{r+1}H - H) \quad \text{and} \quad F = \sum_{j=1}^{r}\left[(r + j - 1)(2r)^{r-j+1}H + \frac{r h_j}{r - j + 1}\right] - H.$$

The threshold value for $C_{max}$ is $G = 2Y - \delta E + \varepsilon AF + 2\delta V$.

We prove that instance I of PARTITION has a solution if and only if there exists a solution for the constructed instance II of our problem. It is easy to see that the construction of II can be done in time polynomial in the length of I.

From the definition of $V$, $A$, $\delta$, and $\varepsilon$, we have

$$\varepsilon V = \frac{1}{V^{21}} < \delta = \frac{1}{V^{20}} < \delta V = \frac{1}{V^{19}} < \delta A = \frac{1}{V^{16}} < \delta A^2 = \frac{1}{V^{12}}. \quad (1)$$

Consider a job sequence which is a solution for instance II. This solution is called a canonical solution if it is of the form $((k_r, r), (k_{r-1}, r-1), \ldots, (k_1, 1), (3-k_r, r), (3-k_{r-1}, r-1), \ldots, (3-k_1, 1))$, where $k_j \in \{1, 2\}$, $j = 1, \ldots, r$. We will show that if II has a solution, then there exists a canonical solution.

Let a job sequence $\sigma$ be a solution for II. Denote by $[j]$ the job in position $j$ of this sequence and by $s_{[j]}$ the start time of this job. Define $\Delta = \sum_{j=1}^{r} a_{[j]} - Y$ and $\Lambda = \sum_{j=2}^{r}(j-1)\hat b_{[j]} + r\sum_{j=r+1}^{2r} \hat b_{[j]} - F$. The proof of the theorem is based on the following three lemmas.

LEMMA 1: If $\sigma$ is a solution for II, then $\Delta \le 0$.

PROOF: If $\Delta > 0$, then the $(r+1)$st job starts after $Y$. Then we have

$$C_{max} = \sum_{j=1}^{r}(a_{[j]} - b_{[j]}s_{[j]}) + \sum_{j=r+1}^{2r}(a_{[j]} - b_{[j]}Y)$$
$$\ge \sum_{j=1}^{2r} a_{[j]} - \left(\sum_{j=2}^{r} b_{[j]} \sum_{i=1}^{j-1} a_{[i]} + Y \sum_{j=r+1}^{2r} b_{[j]}\right)$$
$$= 2Y - \left(\sum_{j=2}^{r}\sum_{i=1}^{j-1}(\delta a_{[j]} - \varepsilon \hat b_{[j]})a_{[i]} + Y\sum_{j=r+1}^{2r}(\delta a_{[j]} - \varepsilon \hat b_{[j]})\right)$$
$$\ge 2Y - \delta\left(\sum_{j=2}^{r}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+1}^{2r} a_{[j]}\right). \quad (2)$$

Define $X_1 = \sum_{j=2}^{r}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+1}^{2r} a_{[j]}$. Since $\sum_{j=1}^{r} a_{[j]} = Y + \Delta$ and $a_{[j]} = A + \hat a_{[j]}$, we have

$$X_1 \le \frac{1}{2}\left(\sum_{j=1}^{r} a_{[j]}\right)^2 - \frac{1}{2}\sum_{j=1}^{r} a_{[j]}^2 + Y\left(2Y - \sum_{j=1}^{r} a_{[j]}\right)$$
$$= \frac{1}{2}(Y + \Delta)^2 - \frac{1}{2}\sum_{j=1}^{r}(A^2 + 2A\hat a_{[j]} + \hat a_{[j]}^2) + Y(Y - \Delta)$$
$$\le \left(\frac{1}{2}Y^2 + Y\Delta\right) - \left(\frac{1}{2}rA^2 + A(2^{r+1}H - H + \Delta)\right) + Y^2 - Y\Delta$$
$$\le E - \Delta A. \quad (3)$$

Recall that all $a_j$ are integral. Then $\Delta > 0$ implies $\Delta \ge 1$. From (1), (2), and (3), we obtain

$$C_{max} \ge G + \delta A - \varepsilon AF - 2\delta V > G,$$

a contradiction. □

LEMMA 2: If $\sigma$ is a solution for II, then $\Lambda \le 0$.

PROOF: By Lemma 1, we have $\Delta \le 0$ and hence $s_{[r+1]} = C_{[r]} \le \sum_{j=1}^{r} a_{[j]} \le Y$. Similar to (2) and (3), we obtain

$$C_{max} \ge 2Y - \delta\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r} a_{[j]}\right) + \varepsilon\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1} \hat b_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r} \hat b_{[j]}\right)$$
$$\ge 2Y - \delta\left(X_1 + a_{[r+1]}\left(\sum_{j=1}^{r} a_{[j]} - Y\right)\right) + \varepsilon\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1} \hat b_{[j]}A + rA\sum_{j=r+2}^{2r} \hat b_{[j]}\right)$$
$$\ge 2Y - \delta(E + \Delta\hat a_{[r+1]}) + \varepsilon A\left(\sum_{j=2}^{r}(j-1)\hat b_{[j]} + r\sum_{j=r+1}^{2r} \hat b_{[j]}\right)$$
$$\ge G + \varepsilon A\Lambda - 2\delta V. \quad (4)$$

If $\Lambda > 0$, then from (4) and (1) we obtain $C_{max} \ge G + \varepsilon A\Lambda - 2\delta V > G$, a contradiction. □

LEMMA 3: If instance II has a solution, then it has a canonical solution.

PROOF: Given a solution for II, if jobs $(1, r)$ and $(2, r)$ are both in the last $r$ positions, then $\Lambda \ge r\hat b_{1,r} + r\hat b_{2,r} - F \ge 2r(2r)^r H - F > 0$, contradicting Lemma 2. If they are both in the first $r$ positions, then $\Delta \ge \hat a_{1,r} + \hat a_{2,r} - 2^{r+1}H + H > 0$, contradicting Lemma 1. Thus, either $(1, r)$ or $(2, r)$ is in the first $r$ positions, but not both. Using this fact, we can prove that either $(1, j)$ or $(2, j)$ is in the first $r$ positions, but not both, for $j = r-1, \ldots, 1$. Further, an interchange technique can be used to show that the jobs $(i, j)$ in the first $r$ positions, which are completed by $Y$, are arranged in the Longest Weighted Processing Time (LWPT) order, i.e., in the order of nonincreasing values $a_{i,j}/b_{i,j}$, to deliver the minimum completion time for the job in position $r$. It follows that there exists a canonical solution for II. □

We now have the necessary tools to prove the theorem. Suppose that II has a solution. By Lemma 3, there is a canonical solution of the form $((k_r, r), (k_{r-1}, r-1), \ldots, (k_1, 1), (3-k_r, r), (3-k_{r-1}, r-1), \ldots, (3-k_1, 1))$, where $k_j \in \{1, 2\}$, $j = 1, \ldots, r$. Define a set $S$ for PARTITION so that $j \in S$ if and only if $k_j = 1$, $j = 1, \ldots, r$. We have

$$\Delta = \sum_{j=1}^{r} a_{k_j, j} - Y = \sum_{j=1}^{r} a_{2,j} - Y + \sum_{j \in S} h_j = \sum_{j \in S} h_j - H \le 0. \quad (5)$$

Hence, $\sum_{j \in S} h_j \le H$. Further,

$$\Lambda = \sum_{j=1}^{r}\left[(j-1)\hat b_{k_{r-j+1}, r-j+1} + r\hat b_{3-k_{r-j+1}, r-j+1}\right] - F$$
$$= \sum_{j \in S} \frac{(j-1)h_j}{r-j+1} + \sum_{j \notin S} \frac{r h_j}{r-j+1} - \sum_{j=1}^{r} \frac{r h_j}{r-j+1} + H$$
$$= H - \sum_{j \in S} h_j \le 0. \quad (6)$$

Hence, $\sum_{j \in S} h_j \ge H$. Thus, $S$ is a solution for instance I of PARTITION.

Suppose that the set $S$ is a solution for I. Construct a canonical solution $((k_r, r), (k_{r-1}, r-1), \ldots, (k_1, 1), (3-k_r, r), (3-k_{r-1}, r-1), \ldots, (3-k_1, 1))$, where $k_j = 1$ if $j \in S$ and $k_j = 2$ otherwise. From (5) and (6), we have $\Delta = \Lambda = 0$. Similar to (4), we derive

$$C_{max} \le \sum_{j=1}^{r+1}(a_{[j]} - b_{[j]}s_{[j]}) + \sum_{j=r+2}^{2r}(a_{[j]} - b_{[j]}Y)$$
$$\le \sum_{j=1}^{2r} a_{[j]} - \left(\sum_{j=2}^{r+1} b_{[j]} \sum_{i=1}^{j-1}(a_{[i]} - 2Yb_{[i]}) + Y\sum_{j=r+2}^{2r} b_{[j]}\right)$$
$$\le 2Y - \left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1}(\delta a_{[j]} - \varepsilon\hat b_{[j]})a_{[i]} + Y\sum_{j=r+2}^{2r}(\delta a_{[j]} - \varepsilon\hat b_{[j]})\right) + 2Y\sum_{j=2}^{r+1}\sum_{i=1}^{j-1} b_{[j]}b_{[i]}$$
$$\le 2Y - \delta\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r} a_{[j]}\right) + \varepsilon\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1}\hat b_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r}\hat b_{[j]}\right) + \varepsilon V. \quad (7)$$

Define $X_2 = \sum_{j=2}^{r+1}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r} a_{[j]}$. Since $\sum_{j=1}^{r} a_{[j]} = rA + 2^{r+1}H - H = Y$, i.e., $\Delta = \sum_{j=1}^{r} a_{[j]} - Y = 0$, and $a_{[j]} = A + \hat a_{[j]}$, we have

$$X_2 = \sum_{j=2}^{r}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + a_{[r+1]}\sum_{j=1}^{r} a_{[j]} + Y\sum_{j=r+2}^{2r} a_{[j]} = \sum_{j=2}^{r}\sum_{i=1}^{j-1} a_{[j]}a_{[i]} + Y\sum_{j=r+1}^{2r} a_{[j]}$$
$$= \frac{1}{2}\left(\sum_{j=1}^{r} a_{[j]}\right)^2 - \frac{1}{2}\sum_{j=1}^{r} a_{[j]}^2 + Y\left(2Y - \sum_{j=1}^{r} a_{[j]}\right)$$
$$= \frac{1}{2}Y^2 - \frac{1}{2}\sum_{j=1}^{r}(A^2 + 2A\hat a_{[j]} + \hat a_{[j]}^2) + Y^2$$
$$= \frac{3}{2}Y^2 - \left(\frac{1}{2}rA^2 + A(2^{r+1}H - H) + \frac{1}{2}\sum_{j=1}^{r}\hat a_{[j]}^2\right) \ge E - V. \quad (8)$$

Define $X_3 = \sum_{j=2}^{r+1}\sum_{i=1}^{j-1}\hat b_{[j]}a_{[i]} + Y\sum_{j=r+2}^{2r}\hat b_{[j]}$. The following chain of inequalities holds:

$$X_3 \le \sum_{j=2}^{r+1}\sum_{i=1}^{j-1}\hat b_{[j]}A + rA\sum_{j=r+2}^{2r}\hat b_{[j]} + (2^{r+1}H - H)\left(\sum_{j=2}^{r+1}\sum_{i=1}^{j-1}\hat b_{[j]} + \sum_{j=r+2}^{2r}\hat b_{[j]}\right)$$
$$\le A\left(\sum_{j=2}^{r}(j-1)\hat b_{[j]} + r\sum_{j=r+1}^{2r}\hat b_{[j]}\right) + V$$
$$= A\sum_{j=1}^{r}\left[(j-1)\hat b_{k_{r-j+1}, r-j+1} + r\hat b_{3-k_{r-j+1}, r-j+1}\right] + V$$
$$= A\left(F + H - \sum_{j \in S} h_j\right) + V = AF + V. \quad (9)$$

From (7), (8), and (9), we obtain

$$C_{max} \le 2Y - \delta(E - V) + \varepsilon(AF + V) + \varepsilon V = G - \delta V + 2\varepsilon V \le G.$$

Thus, the constructed canonical solution is a solution for instance II. □


THEOREM 2: The problem $1/p_j = a_j - b_j(t - y), y = 0/\sum C_j$ is NP-hard.

PROOF: Again, we use a transformation from PARTITION. Given the instance I of PARTITION stated in the previous theorem, we define $V = (2rH)^6$, $A = V^3$, and construct the following instance II of our problem. There are $2r + 1$ jobs: job 0 and jobs $(1, j)$, $(2, j)$, $j = 1, \ldots, r$, with normal processing times

$$a_0 = 4r^2A, \quad a_{1,j} = jA + \lambda_j h_j, \quad a_{2,j} = jA,$$

where $\lambda_j = (2r - 3j + 2) + \frac{1}{2}$, $j = 1, \ldots, r$, and decreasing rates

$$b_0 = 1, \quad b_{1,j} = 0, \quad b_{2,j} = \frac{h_j}{jA} \quad \text{for } j = 1, \ldots, r.$$

As in the previous theorem, the decreasing rates can be represented as $v_{i,j}/L$, where $L = (r!)A$. The common initial decreasing date is $y = 0$ and the common final decreasing date is

$$Y = \sum_{j=1}^{r}(a_{1,j} + a_{2,j}) - \sum_{j=1}^{r}(j-1)h_j - H.$$

Define

$$E = \sum_{j=1}^{r}\left[(4r - 4j + 3)jA + (2r - 2j + 1)\lambda_j h_j\right], \quad F = 2\sum_{j=1}^{r}(r - j + 1)(j - 1)h_j.$$

The threshold value for $\sum C_j$ is $G = E + a_0 - F + H/2 + 1/V$.

The construction of II can be done in time polynomial in the length of I. We now show that instance I of PARTITION has a solution if and only if there exists a solution for the constructed instance II of the problem $1/p_j = a_j - b_j(t - y), y = 0/\sum C_j$.

Assume that there is a job sequence which is a solution for instance II. Then an interchange technique can be used to show that this solution can be transformed into a canonical solution of the form $((k_1, 1), (3-k_1, 1), (k_2, 2), (3-k_2, 2), \ldots, (k_r, r), (3-k_r, r), 0)$, where $k_j \in \{1, 2\}$, $j = 1, \ldots, r$, which is a solution for II as well. It is easy to see that the first $2r$ jobs of this solution start before $Y$.

Given a canonical solution $\sigma$, define $\hat a_{[j]}$ and $e_{[j]}$ such that $a_{[j]} = e_{[j]}A + \hat a_{[j]}$. Similar to (7), we get

$$C_{[k]} \le \sum_{j=1}^{k} a_{[j]} - A\sum_{j=2}^{k}\sum_{i=1}^{j-1} b_{[j]}e_{[i]} + 1/V^2 \quad \text{for } k = 1, \ldots, 2r. \quad (10)$$

Using (10), we estimate the actual processing times of the jobs sequenced in positions $2j - 1$ and $2j$ of $\sigma$, $j = 1, \ldots, r$:

$$p_{[2j-1]} \le a_{[2j-1]} - b_{[2j-1]}A\sum_{i=1}^{j-1}(e_{1,i} + e_{2,i}) + 1/V^2 = a_{[2j-1]} - j(j-1)Ab_{[2j-1]} + 1/V^2 \quad (11)$$

and

$$p_{[2j]} \le a_{[2j]} - b_{[2j]}A\left(\sum_{i=1}^{j-1}(e_{1,i} + e_{2,i}) + e_{1,j}\right) + 1/V^2 = a_{[2j]} - j^2Ab_{[2j]} + 1/V^2. \quad (12)$$

Suppose a canonical solution $\sigma$ is a solution for II. Define a set $S$ for I so that $j \in S$ if $k_j = 1$, $j = 1, \ldots, r$. From (11), (12), the equations $\hat a_{[2j-1]} + \hat a_{[2j]} = \lambda_j h_j$ and $b_{[2j-1]} + b_{[2j]} = h_j/(jA)$, and the observation that $j \in S$ implies $\hat a_{[2j-1]} = \lambda_j h_j$ and $b_{[2j]} = h_j/(jA)$, we have

$$\sum_{j=1}^{2r} C_{[j]} = \sum_{j=1}^{r}\left[(2r - 2j + 2)p_{[2j-1]} + (2r - 2j + 1)p_{[2j]}\right]$$
$$\le \sum_{j=1}^{r}\left[(2r - 2j + 2)(jA + \hat a_{[2j-1]} - j(j-1)Ab_{[2j-1]}) + (2r - 2j + 1)(jA + \hat a_{[2j]} - j^2Ab_{[2j]})\right] + 4r^2/V^2$$
$$= E - F + \sum_{j \in S}\left(\lambda_j - (2r - 3j + 2)\right)h_j + 4r^2/V^2$$
$$= E - F + \frac{1}{2}\sum_{j \in S} h_j + 4r^2/V^2. \quad (13)$$

In the same way as for (10), (11), (12), and (13), we obtain the following inequalities:

$$C_{[k]} \ge \sum_{j=1}^{k} a_{[j]} - A\sum_{j=2}^{k}\sum_{i=1}^{j-1} b_{[j]}e_{[i]} - 1/V^2,$$
$$p_{[2j-1]} \ge a_{[2j-1]} - j(j-1)Ab_{[2j-1]} - 1/V^2,$$
$$p_{[2j]} \ge a_{[2j]} - j^2Ab_{[2j]} - 1/V^2,$$

and

$$\sum_{j=1}^{2r} C_{[j]} \ge E - F + \frac{1}{2}\sum_{j \in S} h_j - 4r^2/V^2.$$


Further, from (11) and (12) and similarly to (13),

$$C_{[2r]} = \sum_{j=1}^{r}(p_{[2j-1]} + p_{[2j]}) \ge Y + H - \sum_{j \in S} h_j - 2r/V^2$$

and

$$C_{[2r+1]} = C_{[2r]} + a_0 - \min\{Y, C_{[2r]}\} \ge a_0 + H - \sum_{j \in S} h_j - 2r/V^2.$$

Based on the latter inequality and (13), we get

$$\sum_{j=1}^{2r+1} C_{[j]} = \sum_{j=1}^{2r} C_{[j]} + C_{[2r+1]} \ge G + \frac{1}{2}\left(H - \sum_{j \in S} h_j\right) - 2/V.$$

If $\sum_{j \in S} h_j < H$, then $\sum_{j=1}^{2r+1} C_{[j]} > G$, a contradiction. On the other hand, if $\sum_{j \in S} h_j > H$, then we can similarly get $C_{[2r]} \ge Y$, $C_{[2r+1]} \ge a_0$, and again $\sum_{j=1}^{2r+1} C_{[j]} > G$. Thus, $\sum_{j \in S} h_j = H$ must hold, as required by the "if" part.

Suppose that, for instance I, there exists a set $S$ such that $\sum_{j \in S} h_j = H$. Then we construct a canonical solution of the form $((k_1, 1), (3-k_1, 1), (k_2, 2), (3-k_2, 2), \ldots, (k_r, r), (3-k_r, r), 0)$, where $k_j = 1$ if $j \in S$ and $k_j = 2$ otherwise. Making the necessary estimations, we obtain $C_{[2r]} \le Y + 2r/V^2$, $C_{[2r+1]} \le a_0 + 2r/V^2$, and, finally, $\sum_{j=1}^{2r+1} C_{[j]} \le G + (4r^2 + 2r)/V^2 - 1/V < G$, as required by the "only if" part. □

2.2. Properties of Optimal Solutions and a Dynamic Programming Algorithm for Cmax Minimization

Assume, without loss of generality, that $\sum_{j=1}^{n} a_j > y$. Otherwise, all jobs are completed by $y$ in any schedule; hence, all schedules have the same value of $C_{max}$, and $\sum C_j$ is minimized by sequencing the jobs in the Shortest Processing Time (SPT) order, i.e., in nondecreasing order of the values $a_j$.

Given a schedule, there is a unique job that starts by $y$ and completes after $y$. Using the terminology of Kubiak and van de Velde [19], we call this job pivotal. Any job that starts before $y$ and completes by $y$ is called early, any job that starts after $y$ but by $Y$ is called tardy, and any job that starts after $Y$ is called suspended.
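As a small illustration of this classification, the sketch below labels the jobs of a given sequence; it is a hypothetical helper (not from the paper), assuming $y < Y$ and the piecewise processing times defined in Section 1:

```python
def classify_jobs(seq, a, b, y, Y):
    """Label each job early/pivotal/tardy/suspended from its start and completion."""
    t = 0.0
    labels = {}
    for j in seq:
        start = t
        if t <= y:
            p = a[j]
        elif t <= Y:
            p = a[j] - b[j] * (t - y)
        else:
            p = a[j] - b[j] * (Y - y)
        t += p
        if start <= y and t <= y:
            labels[j] = "early"          # starts before y, completes by y
        elif start <= y < t:
            labels[j] = "pivotal"        # starts by y, completes after y
        elif start <= Y:
            labels[j] = "tardy"          # starts after y but by Y
        else:
            labels[j] = "suspended"      # starts after Y
    return labels

print(classify_jobs([0, 1, 2], [3, 4, 5], [0.0, 0.2, 0.2], y=4, Y=8))
```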

We claim that there exists an optimal solution for the problem $1/p_j = a_j - b_j(t - y)/C_{max}$ with the following properties.

Property Cmax-1. The sequence of the early jobs and the sequence of the suspended jobs are immaterial.

Property Cmax-2. The tardy jobs are sequenced in nonincreasing order of $a_j/b_j$.


Property Cmax-3. The pivotal job has a processing time no larger than that of any of the early jobs.

Property Cmax-4. If $a_i \ge a_j$ and $b_i \le b_j$, then job $i$ precedes job $j$.

Property Cmax-1 is trivial. Properties Cmax-2–Cmax-4 can easily be proved by a job interchange argument. Property Cmax-2 also follows from the result of Ho, Leung, and Wei [14]. Property Cmax-4 implies that the Longest Processing Time (LPT) sequence $(i_1, i_2, \ldots, i_n)$ such that $a_{i_1} \ge a_{i_2} \ge \cdots \ge a_{i_n}$ is optimal for the problem $1/p_j = a_j - b_j(t - y), b_j = b/C_{max}$ with equal job decreasing rates, and that the sequence such that $b_{i_1} \le b_{i_2} \le \cdots \le b_{i_n}$ is optimal for the problem $1/p_j = a_j - b_j(t - y), a_j = a/C_{max}$ with equal normal job processing times.

Let $P(s, k)$ denote the problem $1/p_j = a_j - b_j(t - y)/C_{max}$ in which the start time of the earliest tardy job and the pivotal job are fixed to be $s$ and $k$, respectively, where $\max\{a_k, y + 1\} \le s \le y + a_k$. It is clear that the problem $1/p_j = a_j - b_j(t - y)/C_{max}$ reduces to solving $\sum_{k=1}^{n} \min\{a_k, y + 1\}$ problems $P(s, k)$ for $s = \max\{a_k, y + 1\}, \max\{a_k, y + 1\} + 1, \ldots, y + a_k$ and $k = 1, \ldots, n$.

A dynamic programming algorithm based on Properties Cmax-1 and Cmax-2 can be constructed for the problem $P(s, k)$. Renumber the jobs, except the pivotal job $k$, in nonincreasing order of $a_j/b_j$: $a_1/b_1 \ge \cdots \ge a_{n-1}/b_{n-1}$. In the algorithm, the jobs are considered in the order $1, \ldots, n - 1$. Each successive job can be scheduled early, tardy, or suspended in some partial schedule. It is assumed that the actual processing time of a job $j$ determined to be suspended is calculated as $p_j = a_j - b_j(Y - y)$, even if in the current partial schedule its start time is less than $Y$. The makespan value is calculated accordingly.

A schedule is said to be in the state $(j, A_e, A_s, B_s)$ if it includes jobs $1, \ldots, j$, the sum of the normal processing times $a_j$ of the early jobs is equal to $A_e$, the sum of the normal processing times $a_j$ of the suspended jobs is equal to $A_s$, and the sum of the decreasing rates $b_j$ of the suspended jobs is equal to $B_s$. For such a schedule, if $f$ and $g$ are the values of the makespan and the completion time of the last tardy job, respectively, then $f = g + A_s - B_s(Y - y)$.

Consider a pair of partial schedules in the same state. The assumption $b_j < 1$, $j = 1, \ldots, n$, ensures that the schedule with the smaller completion time of the last tardy job or, equivalently, with the smaller $C_{max}$ value, will still have the smaller $C_{max}$ value after it is expanded by the unscheduled jobs in the same way as the other schedule of the pair. This observation shows that only the schedule with the minimum $C_{max}$ value among the schedules in the same state needs to be retained for further expansion.

Let $f_j(A_e, A_s, B_s)$ be the minimum $C_{max}$ value among the schedules in the state $(j, A_e, A_s, B_s)$, and let $g_j^0(A_e, A_s, B_s)$ and $g_j^*(A_e, A_s, B_s)$ be the start and completion times of the last tardy job in the corresponding schedule, respectively. A schedule with value $f_j(A_e, A_s, B_s)$ can be obtained from a schedule in some previous state by taking one of the following decisions about job $j$.

1. Schedule $j$ as an early job. In this case, the previous state is $(j - 1, A_e - a_j, A_s, B_s)$ and $f_j(A_e, A_s, B_s) = f_{j-1}(A_e - a_j, A_s, B_s)$, $g_j^0(A_e, A_s, B_s) = g_{j-1}^0(A_e - a_j, A_s, B_s)$, $g_j^*(A_e, A_s, B_s) = g_{j-1}^*(A_e - a_j, A_s, B_s)$.

2. Schedule job $j$ at the end of the sequence of tardy jobs. The previous state is $(j - 1, A_e, A_s, B_s)$. In this case, $g_{j-1}^*(A_e, A_s, B_s) \le Y$ must be satisfied. We have $f_j(A_e, A_s, B_s) = f_{j-1}(A_e, A_s, B_s) + a_j - b_j(g_{j-1}^*(A_e, A_s, B_s) - y)$, $g_j^0(A_e, A_s, B_s) = g_{j-1}^*(A_e, A_s, B_s)$, $g_j^*(A_e, A_s, B_s) = g_{j-1}^*(A_e, A_s, B_s) + a_j - b_j(g_{j-1}^*(A_e, A_s, B_s) - y)$.


3. Schedule job $j$ as a suspended job. In this case, the previous state is $(j - 1, A_e, A_s - a_j, B_s - b_j)$ and $f_j(A_e, A_s, B_s) = f_{j-1}(A_e, A_s - a_j, B_s - b_j) + a_j - b_j(Y - y)$, $g_j^0(A_e, A_s, B_s) = g_{j-1}^0(A_e, A_s - a_j, B_s - b_j)$, $g_j^*(A_e, A_s, B_s) = g_{j-1}^*(A_e, A_s - a_j, B_s - b_j)$.

The initial values are $f_0(0, 0, 0) = g_0^0(0, 0, 0) = g_0^*(0, 0, 0) = s$, and all other values $f_j(A_e, A_s, B_s)$ are set to infinity. The recursion for $j = 1, \ldots, n - 1$, $A_e = 0, 1, \ldots, s - a_k$, $A_s = 0, 1, \ldots, \sum_{i=1}^{n-1} a_i$, $B_s = 0, 1/L, 2/L, \ldots, \sum_{i=1}^{n-1} v_i/L$ is given by

$$f_j(A_e, A_s, B_s) = \min\begin{cases} \text{(i)} \ f_{j-1}(A_e - a_j, A_s, B_s), & \\ \text{(ii)} \ f_{j-1}(A_e, A_s, B_s) + a_j - b_j(g_{j-1}^*(A_e, A_s, B_s) - y), & \text{if } g_{j-1}^*(A_e, A_s, B_s) \le Y, \\ \text{(iii)} \ f_{j-1}(A_e, A_s - a_j, B_s - b_j) + a_j - b_j(Y - y), & \end{cases}$$

$$g_j^0(A_e, A_s, B_s) = \begin{cases} g_{j-1}^0(A_e - a_j, A_s, B_s), & \text{if the above minimum is reached on (i)}, \\ g_{j-1}^*(A_e, A_s, B_s), & \text{if the above minimum is reached on (ii)}, \\ g_{j-1}^0(A_e, A_s - a_j, B_s - b_j), & \text{if the above minimum is reached on (iii)}, \end{cases}$$

$$g_j^*(A_e, A_s, B_s) = \begin{cases} g_{j-1}^*(A_e - a_j, A_s, B_s), & \text{if the above minimum is reached on (i)}, \\ g_{j-1}^*(A_e, A_s, B_s) + a_j - b_j(g_{j-1}^*(A_e, A_s, B_s) - y), & \text{if the above minimum is reached on (ii)}, \\ g_{j-1}^*(A_e, A_s - a_j, B_s - b_j), & \text{if the above minimum is reached on (iii)}. \end{cases}$$

The three parts on the right-hand sides of the above equations correspond to the three possibilities of scheduling job $j$.

The minimum makespan value is equal to $\min\{f_{n-1}(s - a_k, 0, 0) \mid g_{n-1}^*(s - a_k, 0, 0) \le Y\}$ if this minimum is finite, and it is equal to $\min\{f_{n-1}(s - a_k, A_s, B_s) \mid g_{n-1}^0(s - a_k, A_s, B_s) \le Y < g_{n-1}^*(s - a_k, A_s, B_s),\ A_s = 0, 1, \ldots, \sum_{j=1}^{n-1} a_j,\ B_s = 0, 1/L, 2/L, \ldots, \sum_{j=1}^{n-1} v_j/L\}$ otherwise. The corresponding optimal job sequence can be found by backtracking.

Since $s \le y + a_k$, the time requirement of the above algorithm for solving the problem $P(s, k)$ is $O(n(y + 1)\sum_{j=1}^{n-1} a_j \sum_{j=1}^{n-1} v_j)$. Thus, the problem $1/p_j = a_j - b_j(t - y)/C_{max}$ can be solved in $O(n(y + 1)\min\{n(y + 1), \sum_{j=1}^{n} a_j\}\sum_{j=1}^{n} a_j \sum_{j=1}^{n} v_j)$ time. For $y = 0$, this time complexity reduces to $O(n^2 \sum_{j=1}^{n} a_j \sum_{j=1}^{n} v_j)$. The efficiency of the presented method can be improved by taking into account Properties Cmax-3 and Cmax-4.

Similar to Properties Cmax-1 and Cmax-2, it is easy to see that there exists an optimal solution for the problem $1/p_j = a_j - b_j(t - y)/\sum C_j$ with the following properties.

Property $\sum C_j$-1. The early jobs are sequenced in nondecreasing order of $a_j$.

Property $\sum C_j$-2. The suspended jobs are sequenced in nondecreasing order of $a_j - b_j(Y - y)$.

These properties are not sufficient to provide a basis for a pseudopolynomial dynamic programming algorithm to solve the problem $1/p_j = a_j - b_j(t - y)/\sum C_j$ because we do not know how to handle the tardy jobs. Since the processing time of each tardy job is a linear decreasing function of its start time, the problem of handling these jobs is closely related to the problem $1/p_j = a_j - b_j(t - y), y = 0, Y = \infty/\sum C_j$ studied by Ng et al. [25]. Unfortunately, the authors are able to obtain results only for special cases of this problem.


We complete this subsection by considering a special case of the problem with the objective ∑ C_j. Let the jobs be numbered in the SPT order of their normal processing times, so that a_1 ≤ a_2 ≤ · · · ≤ a_n.

LEMMA 4: The SPT sequence is optimal for the problem 1/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/∑ C_j.

PROOF: Consider an arbitrary sequence of jobs (i_1, . . . , i_n). For this sequence, calculate the completion time of job i_l:

C_{i_l} = a_{i_1} + a_{i_2} − b a_{i_1} + · · ·
        = a_{i_1}(1 − b) + a_{i_2} + a_{i_3} − b(a_{i_1}(1 − b) + a_{i_2}) + · · ·
        = a_{i_1}(1 − b)² + a_{i_2}(1 − b) + a_{i_3} + · · ·
        = a_{i_1}(1 − b)^{l−1} + a_{i_2}(1 − b)^{l−2} + · · · + a_{i_l}.

Then the total completion time for this sequence is

∑_{l=1}^n C_{i_l} = a_{i_1}(1 + (1 − b) + (1 − b)² + · · · + (1 − b)^{n−1}) + a_{i_2}(1 + (1 − b) + · · · + (1 − b)^{n−2}) + · · · + a_{i_n}
                  = ∑_{l=1}^n a_{i_l}(1 − (1 − b)^{n−l+1})/b.

Let b_l = (1 − (1 − b)^{n−l+1})/b, l = 1, . . . , n. We have b_l > b_{l+1}, l = 1, . . . , n − 1. Thus, the value of ∑ C_j is minimized when the job with the lth smallest value of a_j is sequenced lth, i.e., the SPT sequence is optimal. ∎
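The argument above can be checked numerically. The following sketch (a hypothetical instance; b is chosen small enough that processing times stay positive) confirms that SPT attains the minimum over all permutations under the recurrence C_l = C_{l−1}(1 − b) + a_{i_l} used in the proof:

```python
from itertools import permutations

def total_completion_time(seq, b):
    """Sum of completion times for p_j = a_j - b*t with y = 0, Y = infinity.

    Completion times follow the recurrence C_l = C_{l-1}*(1 - b) + a_{i_l}
    from the proof of Lemma 4.
    """
    c = total = 0.0
    for a in seq:
        c = c * (1.0 - b) + a
        total += c
    return total

a = [3.0, 1.0, 4.0, 2.0, 6.0, 5.0]   # normal processing times (illustrative)
b = 0.04                             # common rate; b*(sum(a) - a_j) < a_j holds

spt = total_completion_time(sorted(a), b)
best = min(total_completion_time(p, b) for p in permutations(a))
assert abs(spt - best) < 1e-9        # SPT attains the optimum (Lemma 4)
```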

2.3. Heuristic Algorithms

Since the problems 1/p_j = a_j − b_j(t − y)/C_max and 1/p_j = a_j − b_j(t − y)/∑ C_j are NP-hard, we construct and experimentally test several heuristic algorithms for them. The algorithms for the C_max minimization problem are denoted by C1, C2, and C3, and those for the ∑ C_j minimization problem are denoted by S1, S2, S3, and S4.

In algorithm C1, Property Cmax-2 is used. A heuristic schedule is obtained by sequencing all the jobs in nonincreasing order of a_j/b_j.

This property is also used in algorithm C2. However, it is applied only to the jobs which start their execution after y. The jobs starting their execution by y are sequenced in nondecreasing order of a_j/b_j.

Property Cmax-4 is used in algorithm C3. A heuristic solution is constructed as follows: Two job subsequences are constructed, and a final complete sequence is obtained by their concatenation. From the set of unscheduled jobs, a job with the largest value of a_j is chosen and assigned to the end of the first (earlier) subsequence of jobs. From the set of the remaining jobs, a job with the largest value of b_j is chosen and assigned to the beginning of the second (later) subsequence of jobs. This operation is repeated until all jobs are assigned.
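The piecewise completion-time update shared by all these heuristics (cf. Steps 2-4 of Algorithm C1 in the Appendix), together with the C1 ordering rule, can be sketched as follows; the job data are illustrative only:

```python
def makespan(jobs, y, Y):
    """Makespan of a sequence of (a_j, b_j) jobs processed without idle time.

    Processing time of a job started at time t:
      a_j                 if t < y       (normal),
      a_j - b_j*(t - y)   if y <= t < Y  (shortening),
      a_j - b_j*(Y - y)   if t >= Y      (fully shortened).
    """
    c = 0.0
    for a, b in jobs:
        if c < y:
            c += a
        elif c < Y:
            c += a - b * (c - y)
        else:
            c += a - b * (Y - y)
    return c

def heuristic_c1(jobs, y, Y):
    """Algorithm C1: sequence all jobs in nonincreasing order of a_j/b_j."""
    order = sorted(jobs, key=lambda ab: ab[0] / ab[1], reverse=True)
    return makespan(order, y, Y)

jobs = [(10, 0.05), (7, 0.2), (12, 0.1), (5, 0.15)]   # illustrative (a_j, b_j)
print(round(heuristic_c1(jobs, y=8.0, Y=25.0), 2))    # → 28.49
```

Algorithms C2 and C3 differ only in the job order passed to the evaluator, so the same `makespan` function can drive all three.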


In algorithm S1, a heuristic schedule is obtained by sequencing the jobs in nondecreasing order of a_j.

Properties ∑C_j-1 and ∑C_j-2 are used in algorithm S2. The jobs are assigned to the end of the current schedule in nondecreasing order of a_j until the condition C ≥ y is satisfied, where C denotes the completion time of the latest job in the current schedule. When C ≥ y is satisfied, the value of the ratio (C + a_j)/(1 + b_j) is calculated for all the remaining jobs. A job with the smallest value of this ratio is assigned to the end of the current schedule, and the value of C is updated. This operation is repeated, each time with an updated value of C, until the condition C ≥ Y is satisfied. When C ≥ Y is satisfied, the remaining jobs are sequenced in nondecreasing order of a_j − b_j(Y − y).

Algorithm S3 is a modification of algorithm C3. Namely, as long as there are unscheduled jobs, a job with the smallest value of a_j is assigned to the end of the earlier subsequence of jobs, and a job with the largest value of a_j − b_j(Y − y) is assigned to the beginning of the later subsequence of jobs. The two subsequences are then concatenated.

Algorithm S4 is the same as algorithm C2. Detailed descriptions of the above heuristic algorithms are given in the Appendix. It can easily be seen that all the above algorithms except algorithm S2 can be implemented in O(n log n) time. Algorithm S2 can be implemented in O(n²) time.

The algorithms are tested on instances of the NP-hard problems for n = 10, n = 50, and n = 100. For each n, 15 different tests with 100 randomly generated instances are performed. Problem parameters are randomly generated according to the uniform distribution. In all the tests, the values of a_j are generated from the interval (0, 100]. Set A = ∑_{j=1}^n a_j. The values of the common initial decreasing date y are generated from the intervals (0, A/3] and (0, 2A/3]. The values of the common final decreasing date Y are generated from the intervals (A/3, 2A/3], (2A/3, A], and (A, ∞) (unbounded case). Since the decreasing rates must satisfy the conditions 0 < b_j < 1 and b_j(min{∑_{i=1}^n a_i − a_j, Y} − y) < a_j, their values are generated from the intervals (0, T/3], (0, T], and (T/2, T], where T = a_min/(min{A − a_min, Y} − y) and a_min = min{a_j | j = 1, . . . , n}. Since the condition y < Y has to be satisfied, the combination y ∈ (0, 2A/3] and Y ∈ (A/3, 2A/3] is invalid. Therefore, the combination of all the intervals gives 15 different cases.
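A sketch of this instance generator under the stated design (the function and parameter names are ours, not from the paper):

```python
import random

def generate_instance(n, y_frac, Y_frac, b_frac, seed=0):
    """Draw one instance following the experimental design described above.

    y_frac, Y_frac, b_frac are (lo, hi) multipliers for y (fractions of A),
    Y (fractions of A; None means the unbounded case Y = infinity), and
    b_j (fractions of T).
    """
    rng = random.Random(seed)
    a = [rng.uniform(0.0, 100.0) for _ in range(n)]      # a_j from (0, 100]
    A = sum(a)
    y = rng.uniform(y_frac[0] * A, y_frac[1] * A)
    Y = float("inf") if Y_frac is None else rng.uniform(Y_frac[0] * A, Y_frac[1] * A)
    a_min = min(a)
    T = a_min / (min(A - a_min, Y) - y)                  # keeps every b_j feasible
    b = [rng.uniform(b_frac[0] * T, b_frac[1] * T) for _ in range(n)]
    return a, b, y, Y

# One of the 15 cases: y in (0, A/3], Y in (2A/3, A], b_j in (0, T/3].
a, b, y, Y = generate_instance(10, (0, 1/3), (2/3, 1), (0, 1/3))
A = sum(a)
# Feasibility: each b_j satisfies b_j*(min{A - a_j, Y} - y) < a_j.
assert all(bj * (min(A - aj, Y) - y) < aj for aj, bj in zip(a, b))
```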

Given a problem instance, let OPT denote the optimal objective function value, and let c1-c3 and s1-s4 denote the objective function values delivered by algorithms C1-C3 and S1-S4, respectively. OPT is found by explicit enumeration and, because of its time requirements, can be applied only for n ≤ 15. For each generated instance, we calculate a performance ratio x_i/z, where x ∈ {c, s}, i ∈ {1, 2, 3, 4}, and z ∈ {OPT, c1, s1}.

Each numerical entry of the tables given below contains the average ratio x_i/z over all 100 generated instances of a problem with a given n and a combination of the intervals for y, Y, and b_j.

For n = 10 (see Tables 1 and 2), the objective function values of the solutions constructed by the heuristic algorithms are compared with the optimal ones.

For n = 50 and n = 100 (see Tables 3 and 4), the relative performance of the algorithms is evaluated.

For the unbounded problems 1/p_j = a_j − b_j t, y = 0, Y = ∞/C_max and 1/p_j = a_j − b t, y = 0, Y = ∞/∑ C_j, heuristics C1 and S1, respectively, are optimal algorithms. Therefore, for these problems, we are able to compare the solutions provided by heuristics C2, C3 and S2, S3, S4 with the optimal solutions for larger values of n, in particular n = 500 and n = 1000 (see Tables 5 and 6).


The average behavior of the heuristics on the generated instances can be summarized as follows: For the C_max minimization problem, algorithm C1 is the best in all the performed tests. For n = 10, a big difference between the results obtained for the cases y ∈ (0, A/3] and y ∈ (0, 2A/3] is observed. The algorithms perform much better for y ∈ (0, 2A/3]. An explanation lies in the fact that, for large values of the common initial decreasing date y, the order of the jobs becomes immaterial. Another observation is that the values b_j have no significant influence on the relative performance of algorithms C1-C3. An exception is the case of n = 100, b_j ∈ (0, T], and C_max minimization (see Table 3). In this case, with a single exception, algorithms C2 and C3 perform much worse than for the other combinations of the parameters for n = 100.

In all the tests performed for the ∑ C_j minimization problem, algorithm S1 dominates the other algorithms. Algorithm S4 performs better than algorithms S2 and S3 when y ∈ (0, A/3].

Table 1. Experimental results for n = 10 and Cmax minimization.

  y           Y            b_j        c1/OPT  c2/OPT  c3/OPT
  (0, A/3]    (A/3, 2A/3]  (0, T/3]   1.053   1.194   1.268
  (0, A/3]    (A/3, 2A/3]  (0, T]     1.058   1.238   1.324
  (0, A/3]    (A/3, 2A/3]  (T/2, T]   1.097   1.228   1.319
  (0, A/3]    (2A/3, A]    (0, T/3]   1.043   1.162   1.278
  (0, A/3]    (2A/3, A]    (0, T]     1.019   1.269   1.411
  (0, A/3]    (2A/3, A]    (T/2, T]   1.028   1.219   1.345
  (0, A/3]    Y > A        (0, T/3]   1.037   1.169   1.292
  (0, A/3]    Y > A        (0, T]     1.015   1.293   1.432
  (0, A/3]    Y > A        (T/2, T]   1.021   1.225   1.346
  (0, 2A/3]   (2A/3, A]    (0, T/3]   1.005   1.037   1.040
  (0, 2A/3]   (2A/3, A]    (0, T]     1.013   1.089   1.093
  (0, 2A/3]   (2A/3, A]    (T/2, T]   1.020   1.150   1.150
  (0, 2A/3]   Y > A        (0, T/3]   1.005   1.037   1.040
  (0, 2A/3]   Y > A        (0, T]     1.007   1.115   1.150
  (0, 2A/3]   Y > A        (T/2, T]   1.009   1.109   1.091

Table 2. Experimental results for n = 10 and ∑ C_j minimization.

  y           Y            b_j        s1/OPT  s2/OPT  s3/OPT  s4/OPT
  (0, A/3]    (A/3, 2A/3]  (0, T/3]   1.024   1.310   1.346   1.180
  (0, A/3]    (A/3, 2A/3]  (0, T]     1.085   1.381   1.412   1.277
  (0, A/3]    (A/3, 2A/3]  (T/2, T]   1.061   1.321   1.354   1.208
  (0, A/3]    (2A/3, A]    (0, T/3]   1.021   1.293   1.311   1.179
  (0, A/3]    (2A/3, A]    (0, T]     1.079   1.356   1.325   1.250
  (0, A/3]    (2A/3, A]    (T/2, T]   1.054   1.288   1.310   1.192
  (0, A/3]    Y > A        (0, T/3]   1.020   1.293   1.294   1.186
  (0, A/3]    Y > A        (0, T]     1.080   1.356   1.322   1.261
  (0, A/3]    Y > A        (T/2, T]   1.053   1.288   1.295   1.194
  (0, 2A/3]   (2A/3, A]    (0, T/3]   1.004   1.035   1.343   1.156
  (0, 2A/3]   (2A/3, A]    (0, T]     1.012   1.041   1.350   1.183
  (0, 2A/3]   (2A/3, A]    (T/2, T]   1.005   1.036   1.371   1.173
  (0, 2A/3]   Y > A        (0, T/3]   1.003   1.036   1.342   1.160
  (0, 2A/3]   Y > A        (0, T]     1.012   1.041   1.342   1.204
  (0, 2A/3]   Y > A        (T/2, T]   1.005   1.037   1.387   1.189


However, for y ∈ (0, 2A/3] the situation changes, and algorithm S2 performs better than S3 and S4. This is because, for large values of y, it is optimal to sequence the jobs in nondecreasing order of a_j. The quality of the solutions constructed by all the algorithms is rather independent of the values of b_j.

In all tests performed, a solution delivered by algorithm C1 for the C_max minimization problem or by algorithm S1 for the ∑ C_j minimization problem is on average no more than 9.7% worse than an optimal solution. This value decreases to 4.8% for the other proposed algorithms when n = 500 and n = 1000.

Additionally, we evaluate the worst-case behavior of the heuristics. The best solution delivered by algorithms C1-C3 or S1-S4 is at most 2.4 times worse than the optimal one in all tests performed for n = 10 and n = 100. The relative error does not exceed 1.4% in all tests for n = 500 and n = 1000.

Table 3. Experimental results for n = 50, n = 100, and Cmax minimization.

                                        n = 50          n = 100
  y           Y            b_j        c2/c1  c3/c1    c2/c1  c3/c1
  (0, A/3]    (A/3, 2A/3]  (0, T/3]   1.029  1.046    1.008  1.022
  (0, A/3]    (A/3, 2A/3]  (0, T]     1.044  1.068    1.056  1.078
  (0, A/3]    (A/3, 2A/3]  (T/2, T]   1.063  1.071    1.039  1.046
  (0, A/3]    (2A/3, A]    (0, T/3]   1.042  1.070    1.029  1.050
  (0, A/3]    (2A/3, A]    (0, T]     1.084  1.127    1.084  1.127
  (0, A/3]    (2A/3, A]    (T/2, T]   1.075  1.094    1.079  1.097
  (0, A/3]    Y > A        (0, T/3]   1.050  1.075    1.035  1.056
  (0, A/3]    Y > A        (0, T]     1.084  1.127    1.084  1.137
  (0, A/3]    Y > A        (T/2, T]   1.079  1.100    1.074  1.096
  (0, 2A/3]   (2A/3, A]    (0, T/3]   1.043  1.030    1.014  1.016
  (0, 2A/3]   (2A/3, A]    (0, T]     1.054  1.027    1.041  1.035
  (0, 2A/3]   (2A/3, A]    (T/2, T]   1.062  1.029    1.041  1.018
  (0, 2A/3]   Y > A        (0, T/3]   1.108  1.097    1.028  1.027
  (0, 2A/3]   Y > A        (0, T]     1.099  1.074    1.043  1.060
  (0, 2A/3]   Y > A        (T/2, T]   1.074  1.049    1.051  1.035

Table 4. Experimental results for n = 50, n = 100, and ∑ C_j minimization.

                                        n = 50                 n = 100
  y           Y            b_j        s2/s1  s3/s1  s4/s1    s2/s1  s3/s1  s4/s1
  (0, A/3]    (A/3, 2A/3]  (0, T/3]   1.302  1.438  1.178    1.281  1.466  1.187
  (0, A/3]    (A/3, 2A/3]  (0, T]     1.304  1.428  1.176    1.287  1.457  1.185
  (0, A/3]    (A/3, 2A/3]  (T/2, T]   1.304  1.432  1.173    1.284  1.454  1.183
  (0, A/3]    (2A/3, A]    (0, T/3]   1.299  1.426  1.172    1.281  1.452  1.187
  (0, A/3]    (2A/3, A]    (0, T]     1.298  1.401  1.176    1.280  1.426  1.184
  (0, A/3]    (2A/3, A]    (T/2, T]   1.293  1.405  1.168    1.277  1.429  1.177
  (0, A/3]    Y > A        (0, T/3]   1.299  1.420  1.178    1.280  1.446  1.187
  (0, A/3]    Y > A        (0, T]     1.297  1.393  1.176    1.279  1.416  1.184
  (0, A/3]    Y > A        (T/2, T]   1.292  1.400  1.168    1.276  1.423  1.176
  (0, 2A/3]   (2A/3, A]    (0, T/3]   1.037  1.440  1.187    1.034  1.468  1.199
  (0, 2A/3]   (2A/3, A]    (0, T]     1.041  1.433  1.198    1.035  1.462  1.206
  (0, 2A/3]   (2A/3, A]    (T/2, T]   1.034  1.453  1.195    1.032  1.477  1.206
  (0, 2A/3]   Y > A        (0, T/3]   1.039  1.438  1.193    1.035  1.465  1.203
  (0, 2A/3]   Y > A        (0, T]     1.042  1.426  1.207    1.037  1.458  1.215
  (0, 2A/3]   Y > A        (T/2, T]   1.036  1.460  1.204    1.034  1.483  1.214


Since it is common for the input data of real-life problems to contain errors of around 10%, the proposed algorithms are acceptable solution procedures for the considered NP-hard problems.

3. PARALLEL MACHINES

In this section, we prove the NP-hardness of the problem P/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max with identical machines and show that the problem R/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j with unrelated machines is polynomially solvable by a transformation to a weighted bipartite matching problem.

THEOREM 3: The problem P2/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max is NP-hard, and the problem P/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max is strongly NP-hard.

PROOF: We show that a decision version of the problem P/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max is equivalent to a decision version of the traditional scheduling problem P//C_max if b is sufficiently small. Assume that a_j, j = 1, . . . , n, are the integer job processing times in the traditional problem. Denote the C_max value for the traditional problem by C̄_max. For the decision versions of our problem and the traditional problem, the question is to find a schedule with C_max ≤ G and C̄_max ≤ G, respectively, where G is an integer. Consider an arbitrary schedule. Denote the set of jobs assigned to machine l by X_l, l = 1, . . . , m. Denote the completion times of the last job assigned to machine l by T_l and T̄_l for our problem and the traditional problem, respectively. We have T̄_l = ∑_{j∈X_l} a_j, l = 1, . . . , m. Let jobs i_1, . . . , i_j be sequenced on machine l in that order. It can easily be shown that

T_l = a_{i_1}(1 − b)^{j−1} + a_{i_2}(1 − b)^{j−2} + · · · + a_{i_j}.

Since b < 1, we have a_{i_k}(1 − b)^{j−k} ≤ a_{i_k} for k = 1, . . . , j. Set a_max = max{a_j | j = 1, . . . , n}. Choose b so that a_{i_k}(1 − b)^{j−k} > a_{i_k} − 1/(n a_max) for all jobs. To do so, we can set b = 1 − (1 − 1/(n a_max²))^{1/n}. Then we obtain

∑_{j∈X_l} a_j − 1 < T_l ≤ ∑_{j∈X_l} a_j,   l = 1, . . . , m.

Table 5. Experimental results for n = 500, n = 1000, and the problem 1/p_j = a_j − b_j t, y = 0, Y = ∞/C_max.

  Interval        n = 500            n = 1000
  for b_j      c2/OPT  c3/OPT     c2/OPT  c3/OPT
  (0, T/3]     1.001   1.004      1.001   1.004
  (0, T]       1.002   1.011      1.003   1.011
  (T/2, T]     1.003   1.011      1.003   1.012

Table 6. Experimental results for n = 500, n = 1000, and the problem 1/p_j = a_j − b t, y = 0, Y = ∞/∑ C_j.

          n = 500                      n = 1000
  s2/OPT  s3/OPT  s4/OPT      s2/OPT  s3/OPT  s4/OPT
  1.048   1.014   1.014       1.028   1.008   1.008


Hence, C̄_max − 1 < C_max ≤ C̄_max. Due to the integrality of a_j, j = 1, . . . , n, and G, we have C_max ≤ G if and only if C̄_max ≤ G. Since P2//C_max is NP-hard and P//C_max is strongly NP-hard (see Garey and Johnson [8]), the theorem is proved. ∎
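The key inequality in this reduction can be checked numerically. The sketch below uses an illustrative integer instance; the choice b = 1 − (1 − 1/(n·a_max²))^{1/n} is one value small enough for the bound (our reading of the formula in the proof):

```python
# Verify that, with b small enough, the makespan with shortening jobs
# stays within 1 of the classical (integer) machine load.
a = [3, 7, 2, 9, 5, 4, 8, 6]           # integer processing times (illustrative)
n, a_max = len(a), max(a)
b = 1.0 - (1.0 - 1.0 / (n * a_max**2)) ** (1.0 / n)

t = 0.0
for x in a:                            # y = 0: completion follows T = T*(1-b) + a_j
    t = t * (1.0 - b) + x

assert sum(a) - 1 < t <= sum(a)        # T_l differs from the integer load by < 1
```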

In the problem Q/p_lj = a_lj − b_lj(t − y)/C_max with uniform machines, we have a_lj = a_j/s_l and b_lj = b_j/s_l for all l and j, where s_l, l = 1, . . . , m, are the machine speeds. Observe that the dynamic programming approach presented in Section 2.2 can be generalized to solve this problem by introducing a set of state variables for each machine. Therefore, the problem Qm/p_lj = a_lj − b_lj(t − y)/C_max with a fixed number of machines can be solved in pseudopolynomial time.

Consider the problem R/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j. Introduce the variables x_(l,r),j such that x_(l,r),j = 1 if job j is sequenced rth last on machine l, and x_(l,r),j = 0 otherwise. Applying the approach presented in Lemma 4 for the single machine case and the results of Horn [15] and Bruno, Coffman, and Sethi [5], we can show that the problem R/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j is equivalent to the following weighted bipartite matching problem:

Minimize   ∑_{l,r} ∑_j x_(l,r),j a_lj (1 − (1 − b)^r)/b

subject to

  ∑_{l,r} x_(l,r),j = 1,    j = 1, . . . , n,
  ∑_j x_(l,r),j ≤ 1,        l = 1, . . . , m, r = 1, . . . , n,
  x_(l,r),j ∈ {0, 1},       l = 1, . . . , m, j, r = 1, . . . , n,

where the summations are taken over all values of l and r, or of j. This matching problem can be solved in O(n³) time (see, e.g., Lawler [21]).

If the machines are uniform, such that a_lj = a_j/s_l and b_lj = b for all l and j, then ∑ C_j is a weighted sum of the a_j values, where each weight is of the form b_{l,r} = (1 − (1 − b)^r)/(b s_l), and no weight may be used more than once. To minimize ∑ C_j, it is obvious that we should select the n smallest of these mn weights and match the smallest weights with the largest a_j values. The matrix of these weights has a structure that allows us to implement the matching procedure in O(n log n) time. We obviously have b_{l,1} < · · · < b_{l,n} for each l. If we number the machines so that s_1 ≥ · · · ≥ s_m, then we additionally get b_{1,r} ≤ · · · ≤ b_{m,r} for each r. Number the jobs so that a_1 ≥ · · · ≥ a_n. In the matching procedure, we consider the jobs in the order 1, . . . , n and match the current job with the smallest available weight; i.e., we construct the schedule backwards. The weight is chosen from a priority queue of the smallest m available weights. Due to the special structure of the weight matrix, this queue can be initialized in O(m log m) time and updated in O(log m) time. Since we need O(n log n) time to arrange the jobs in nonincreasing order of a_j, the matching procedure runs in O(n log n) time. The idea of using the priority queue was suggested by Horowitz and Sahni [16] for the traditional parallel machine problem. Thus,


the problem Q/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j is solvable in O(n log n) time.
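A sketch of this priority-queue matching procedure for uniform machines (y = 0; the function name and the tiny instance are ours), verified against exhaustive search over all assignments of jobs to (machine, rth-last position) pairs:

```python
import heapq
from itertools import permutations

def min_total_completion_uniform(a, s, b):
    """Match the largest a_j with the smallest weights (1-(1-b)^r)/(b*s_l).

    The heap holds one candidate weight per machine; popping position r of a
    machine makes its position r+1 available, mirroring the O(n log n)
    procedure described in the text.
    """
    def weight(l, r):
        return (1.0 - (1.0 - b) ** r) / (b * s[l])

    heap = [(weight(l, 1), l, 1) for l in range(len(s))]
    heapq.heapify(heap)                      # smallest m available weights
    total = 0.0
    for aj in sorted(a, reverse=True):       # jobs in nonincreasing order of a_j
        w, l, r = heapq.heappop(heap)
        total += w * aj
        heapq.heappush(heap, (weight(l, r + 1), l, r + 1))
    return total

# Tiny illustrative instance, verified by brute force over ordered picks of
# len(a) weights from the m*n weight matrix.
a, s, b = [5.0, 3.0, 8.0], [2.0, 1.0], 0.1
w = [(1.0 - (1.0 - b) ** r) / (b * sl) for sl in s for r in range(1, len(a) + 1)]
brute = min(sum(wi * ai for wi, ai in zip(pick, a))
            for pick in permutations(w, len(a)))
assert abs(min_total_completion_uniform(a, s, b) - brute) < 1e-9
```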

4. CONCLUSIONS

A scheduling model is proposed in which the processing time of a job on a machine is a piecewise linear nonincreasing function of its start time. The computational complexities of several special cases are established. They are presented in the following table.

  Problem                                                       Complexity
  1/p_j = a_j − b_j(t − y), y = 0/C_max                         NP-hard
  1/p_j = a_j − b_j(t − y)/C_max                                Pseudopolynomially solvable
  1/p_j = a_j − b_j(t − y), y = 0, Y = ∞/C_max                  O(n log n) [14]
  1/p_j = a_j − b_j(t − y), b_j = b/C_max                       O(n log n)
  1/p_j = a_j − b_j(t − y), a_j = a/C_max                       O(n log n)
  1/p_j = a_j − b_j(t − y), y = 0/∑ C_j                         NP-hard, strong NP-hardness is unknown
  1/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/∑ C_j         O(n log n)
  1/p_j = a_j − b_j(t − y), b_j = k a_j, y = 0, Y = ∞/∑ C_j     O(n log n) [25]
  1/p_j = a_j − b_j(t − y), a_j = a, y = 0, Y = ∞/∑ C_j         Pseudopolynomially solvable [25]
  P2/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max        NP-hard
  Qm/p_lj = a_lj − b_lj(t − y)/C_max                            Pseudopolynomially solvable
  P/p_j = a_j − b_j(t − y), b_j = b, y = 0, Y = ∞/C_max         Strongly NP-hard
  Q/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j     O(n log n)
  R/p_lj = a_lj − b_lj(t − y), b_lj = b, y = 0, Y = ∞/∑ C_j     O(n³)

REMARK: (Strong) NP-hardness of a problem with y = 0, or with y = 0 and Y = ∞, implies (strong) NP-hardness of the corresponding problem with arbitrary y and Y.

Heuristic algorithms have been proposed for the NP-hard problems 1/p_j = a_j − b_j(t − y)/C_max and 1/p_j = a_j − b_j(t − y)/∑ C_j. Computational experiments have demonstrated their efficiency.

Further research should focus on developing efficient enumerative solution procedures for the NP-hard problems. Resolving whether the single machine problem to minimize ∑ C_j is NP-hard in the strong sense is interesting as well.

APPENDIX

Algorithm C1

Step 1 Reindex the jobs in nonincreasing order of the ratio a_j/b_j; set i := 1 and C := 0.
Step 2 If C ≥ Y, then calculate C := C + a_i − b_i(Y − y).
Step 3 If y ≤ C < Y, then calculate C := C + a_i − b_i(C − y).
Step 4 If C < y, then calculate C := C + a_i.
Step 5 If i < n, then set i := i + 1 and go to Step 2.
Step 6 The obtained value C is the makespan of the schedule.


Algorithm C2

Step 1 Reindex the jobs in nondecreasing order of the ratio a_j/b_j; set i := 1, j := n, and C := 0.
Step 2 If C < y, then calculate C := C + a_i, else go to Step 4.
Step 3 If i < n, then set i := i + 1 and go to Step 2, else go to Step 7.
Step 4 If y ≤ C < Y, then calculate C := C + a_j − b_j(C − y).
Step 5 If C ≥ Y, then calculate C := C + a_j − b_j(Y − y).
Step 6 If j > i, then set j := j − 1 and go to Step 4, else go to Step 7.
Step 7 The obtained value C is the makespan of the schedule.

Algorithm C3

Step 1 Set J := {1, . . . , n}, S1 := ∅, and S2 := ∅.
Step 2 Find a job j ∈ J such that a_j = max_{i∈J} a_i, add job j to the set S1, and remove job j from the set J.
Step 3 If J is not empty, find a job j ∈ J such that b_j = max_{i∈J} b_i, add job j to the set S2, and remove job j from the set J.
Step 4 If J is empty, then go to Step 5, else go to Step 2.
Step 5 Schedule the jobs from S1 in nonincreasing order of the parameter a_j and then the jobs from S2 in nondecreasing order of the parameter b_j; set C := 0 and i := 1.
Step 6 If C ≥ Y, then calculate C := C + a_i − b_i(Y − y).
Step 7 If y ≤ C < Y, then calculate C := C + a_i − b_i(C − y).
Step 8 If C < y, then calculate C := C + a_i.
Step 9 If i < n, then set i := i + 1 and go to Step 6.
Step 10 The obtained value C is the makespan of the schedule.

Algorithm S1

Step 1 Reindex the jobs in nondecreasing order of the parameter a_j; set i := 1, C := 0, and sum := 0.
Step 2 If C ≥ Y, then calculate C := C + a_i − b_i(Y − y) and set sum := sum + C.
Step 3 If y ≤ C < Y, then calculate C := C + a_i − b_i(C − y) and set sum := sum + C.
Step 4 If C < y, then calculate C := C + a_i and set sum := sum + C.
Step 5 If i < n, then set i := i + 1 and go to Step 2.
Step 6 The obtained value sum is the total completion time of the schedule.


Algorithm S2

Step 1 Reindex the jobs in nondecreasing order of the parameter a_j; set J := {1, . . . , n}, i := 1, C := 0, and sum := 0.
Step 2 If C < y, then calculate C := C + a_i, set sum := sum + C, and remove job i from the set J, else go to Step 4.
Step 3 If i < n, then set i := i + 1 and go to Step 2.
Step 4 If J is not empty and y ≤ C < Y, then find a job j ∈ J such that the ratio (C + a_j)/(1 + b_j) is minimal, else go to Step 6.
Step 5 Calculate C := C + a_j − b_j(C − y), set sum := sum + C, remove job j from J, and go to Step 4.
Step 6 If J is not empty and C ≥ Y, then find a job j ∈ J such that the value a_j − b_j(Y − y) is minimal, else go to Step 8.
Step 7 Calculate C := C + a_j − b_j(Y − y), set sum := sum + C, remove job j from J, and go to Step 6.
Step 8 The obtained value sum is the total completion time of the schedule.

Algorithm S3

Step 1 Set J := {1, . . . , n}, S1 := ∅, and S2 := ∅.
Step 2 Find a job j ∈ J such that a_j = min_{i∈J} a_i, add job j to the set S1, and remove j from the set J.
Step 3 If J is not empty, find a job j ∈ J such that a_j − b_j(Y − y) = max_{i∈J} (a_i − b_i(Y − y)), add job j to the set S2, and remove j from the set J.
Step 4 If J is empty, then go to Step 5, else go to Step 2.
Step 5 Schedule the jobs from S1 in nondecreasing order of the parameter a_j and then the jobs from S2 in nondecreasing order of the value a_j − b_j(Y − y); set C := 0, i := 1, and sum := 0.
Step 6 If C ≥ Y, then calculate C := C + a_i − b_i(Y − y) and set sum := sum + C.
Step 7 If y ≤ C < Y, then calculate C := C + a_i − b_i(C − y) and set sum := sum + C.
Step 8 If C < y, then calculate C := C + a_i and set sum := sum + C.
Step 9 If i < n, then set i := i + 1 and go to Step 6.
Step 10 The obtained value sum is the total completion time of the schedule.

Algorithm S4

Step 1 Reindex the jobs in nondecreasing order of the ratio a_j/b_j; set i := 1, j := n, C := 0, and sum := 0.
Step 2 If C < y, then calculate C := C + a_i and set sum := sum + C, else go to Step 4.
Step 3 If i < n, then set i := i + 1 and go to Step 2, else go to Step 7.
Step 4 If y ≤ C < Y, then calculate C := C + a_j − b_j(C − y) and set sum := sum + C.
Step 5 If C ≥ Y, then calculate C := C + a_j − b_j(Y − y) and set sum := sum + C.
Step 6 If j > i, then set j := j − 1 and go to Step 4, else go to Step 7.
Step 7 The obtained value sum is the total completion time of the schedule.

ACKNOWLEDGMENTS

This research was partially supported by the Croucher Foundation under a Croucher Senior Research Fellowship for T.C. Edwin Cheng, by The Hong Kong Polytechnic University under Grant No. G-S818 for T.C. Edwin Cheng and Qing Ding, and by INTAS under Grant No. 00-217 for Mikhail Kovalyov.

REFERENCES

[1] B. Alidaee and N.K. Womer, Scheduling with time dependent processing times: Review and extensions, J Oper Res Soc 50 (1999), 711–720.
[2] A. Bachman, Single machine scheduling problems for the jobs with start time dependent processing times, Ph.D. dissertation, Wroclaw University of Technology, Wroclaw, Poland, 1998.
[3] A. Bachman and A. Janiak, Minimizing maximum lateness under linear deterioration, Eur J Oper Res 126 (2000), 557–566.
[4] S. Browne and U. Yechiali, Scheduling deteriorating jobs on a single processor, Oper Res 38 (1990), 495–498.
[5] J.L. Bruno, E.G. Coffman, Jr., and R. Sethi, Scheduling independent tasks to reduce mean finishing time, Commun ACM 17 (1974), 382–387.
[6] Z.-L. Chen, A note on single-processor scheduling with time dependent execution times, Oper Res Lett 17 (1995), 127–129.
[7] T.C.E. Cheng and M.Y. Kovalyov, Scheduling with learning effects on job processing times, Working Paper 06/94, Faculty of Business and Information Systems, The Hong Kong Polytechnic University, 1994.
[8] M.R. Garey and D.S. Johnson, Computers and intractability: A guide to the theory of NP-completeness, Freeman, San Francisco, 1979.
[9] S. Gawiejnowicz, Scheduling jobs with varying processing times, Ph.D. dissertation, Poznan University of Technology, Poznan, Poland, 1997.
[10] K.D. Glazebrook, Single-machine scheduling of stochastic jobs subject to deterioration or delay, Nav Res Logistics 39 (1992), 613–633.
[11] K.D. Glazebrook, On permutation policies for the scheduling of deteriorating stochastic jobs on a single machine, J Appl Probab 30 (1993), 184–193.
[12] R.L. Graham, E.L. Lawler, J.K. Lenstra, and A.H.G. Rinnooy Kan, Optimization and approximation in deterministic sequencing and scheduling: A survey, Ann Discrete Math 5 (1979), 287–326.
[13] J.N.D. Gupta and S.K. Gupta, Single facility scheduling with nonlinear processing times, Comput Ind Eng 14 (1988), 387–394.
[14] K.I.-J. Ho, J.Y.-T. Leung, and W.-D. Wei, Complexity of scheduling tasks with time dependent execution times, Inf Process Lett 48 (1993), 315–320.
[15] W.A. Horn, Minimizing average flow time with parallel machines, Oper Res 21 (1973), 846–847.
[16] E. Horowitz and S. Sahni, Exact and approximate algorithms for scheduling nonidentical processors, J ACM 23 (1976), 317–327.
[17] A.V. Kononov, On the complexity of scheduling problems with time dependent processing times, Candidate of Sciences dissertation, Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia, 1998.


[18] M.Y. Kovalyov and W. Kubiak, A fully polynomial approximation scheme for minimizing makespan of deteriorating jobs, J Heuristics 3 (1998), 287–297.
[19] W. Kubiak and S.L. van de Velde, Scheduling deteriorating jobs to minimize makespan, Nav Res Logistics 45 (1998), 511–523.
[20] A.S. Kunnathur and S.K. Gupta, Minimizing the makespan with late start penalties added to processing times in a single facility scheduling problem, Eur J Oper Res 47 (1990), 56–64.
[21] E.L. Lawler, Combinatorial optimization: Networks and matroids, Holt, Rinehart and Winston, New York, 1976.
[22] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, and D.B. Shmoys, "Sequencing and scheduling: Algorithms and complexity," Handbooks in operations research and management science, Volume 4: Logistics of production and inventory, S.C. Graves, A.H.G. Rinnooy Kan, and P. Zipkin (Editors), North-Holland, Amsterdam, 1994.
[23] G. Mosheiov, V-shaped policies for scheduling deteriorating jobs, Oper Res 39 (1991), 979–991.
[24] G. Mosheiov, Scheduling jobs under simple linear deterioration, Comput Oper Res 21 (1994), 653–659.
[25] C.T. Ng, T.C.E. Cheng, A. Bachman, and A. Janiak, Three scheduling problems with deteriorating jobs to minimize the total completion time, Inf Process Lett 81 (2002), 327–333.
