
JORC (2013) 1:239–252. DOI 10.1007/s40305-013-0014-y

Uniform Parallel-Machine Scheduling with Time Dependent Processing Times

Juan Zou · Yuzhong Zhang · Cuixia Miao

Received: 16 January 2013 / Revised: 9 April 2013 / Accepted: 18 April 2013 / Published online: 24 May 2013
© Operations Research Society of China, Periodicals Agency of Shanghai University, and Springer-Verlag Berlin Heidelberg 2013

Abstract We consider several uniform parallel-machine scheduling problems in which the processing time of a job is a linear increasing function of its starting time. The objectives are to minimize the total completion time of all jobs and the total load on all machines. We show that the problems are polynomially solvable when the increasing rates are identical for all jobs; we propose a fully polynomial-time approximation scheme for the standard linear deteriorating function, where the objective is to minimize the total load on all machines. We also consider the problem in which the processing time of a job is a simple linear increasing function of its starting time and each job has a delivery time. The objective is to find a schedule which minimizes the time by which all jobs are delivered, and we propose a fully polynomial-time approximation scheme to solve this problem.

Keywords Scheduling · Uniform machine · Linear deterioration · Fully polynomial-time approximation scheme

1 Introduction

For most scheduling problems, the processing times of jobs are considered to be constant and independent of their starting times. However, this assumption is not appropriate for modeling many modern industrial processes: we often encounter situations in which processing times increase over time as the machines gradually lose efficiency.

This work was supported by the National Natural Science Foundation of China (Nos. 11071142, 11201259), the Natural Science Foundation of Shandong Province (No. ZR2010AM034) and the Doctoral Fund of the Ministry of Education (No. 20123705120001).

J. Zou (✉) · Y. Zhang
School of Management, Qufu Normal University, Rizhao, Shandong, China
e-mail: [email protected]

J. Zou · C. Miao
School of Mathematical Sciences, Qufu Normal University, Qufu, Shandong, China


Such problems are generally known as scheduling with deterioration effects. Scheduling with linear deterioration was first considered by Browne and Yechiali [1], who assumed that the processing times of jobs are nondecreasing, start-time-dependent linear functions. They provided the optimal solution for a single machine when the objective is to minimize the makespan. In addition, they solved a special case in which the objective is to minimize the total weighted completion time. Mosheiov [2] considered simple linear deterioration, where jobs have a fixed job-dependent growth rate but no basic processing time. He showed that the most commonly applied performance criteria, such as the makespan, the total flow time, the total lateness, the sum of weighted completion times, the maximum lateness, the maximum tardiness, and the number of tardy jobs, remain polynomially solvable.

In the standard form of linear models, the actual processing time of job $J_j$ is given by $p_j = a_j + b_j t_j$, where $a_j$, $b_j$, and $t_j$ denote the basic processing time, the deteriorating rate, and the starting time of job $J_j$, respectively. Kuo and Yang [3] studied several parallel-machine scheduling problems in which the actual processing time of job $J_j$ is given by $p_j = a_j + bt_j$ or $p_j = a_j - bt_j$. The objectives are to minimize the total completion time of all jobs and the total load on all machines. They showed that the problems are polynomially solvable. Kononov [4] established the ordinary NP-completeness of $P2 \mid p_j = b_j t, r_j = t_0 \mid C_{\max}$ and $P2 \mid p_j = b_j t, r_j = t_0 \mid \sum C_j$. Mosheiov [5] studied multi-machine makespan minimization and total load minimization with linear deterioration; he proved that the two problems are NP-hard even for two machines. Liu et al. [6] considered $m$ uniform-machine scheduling with linear deterioration to minimize the makespan and proposed a fully polynomial-time approximation scheme for this problem. An extensive survey of different models and problems was provided by Alidaee and Womer [7]. Cheng et al. [8] presented an updated survey of the results on scheduling problems with time-dependent processing times. In addition, the combination of learning effects and deteriorating jobs has been extensively studied. Lee [9] considered single-machine scheduling problems with deteriorating jobs and a learning effect, where the objectives are to minimize the makespan and the total completion time; he introduced polynomial-time solutions for scheduling problems with the effects of learning and deterioration. However, in the literature there are only a few studies dealing with uniform parallel-machine scheduling problems under linear deterioration.

In this paper, we consider uniform-machine scheduling problems in which the actual processing time of job $J_j$ is given by $p_j = a_j + bt_j$. The objectives are to minimize the total completion time of all jobs and the total load on all machines. Following the three-field notation introduced by Graham et al. [10], the corresponding problems are denoted by $Qm \mid p_j = a_j + bt_j \mid \sum C_j$ and $Qm \mid p_j = a_j + bt_j \mid \sum C^i_{\max}$. We show that the two problems are polynomially solvable. For the standard linear deteriorating function, we denote the problem by $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$; we propose a fully polynomial-time approximation scheme for this problem. We also consider the scheduling problem with simple linear deteriorating jobs on uniform parallel machines, where each job $J_j$ has a delivery time $q_j$. The objective is to find a schedule which minimizes the time by which all jobs are delivered. Let $C_j$ denote the completion time of job $J_j$; thus the delivery completion time of a job $J_j$ is $C_j + q_j$, and we will use $L_j$ to represent $C_j + q_j$. The problem is denoted by $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$, and we present a fully polynomial-time approximation scheme for it.

2 Problem Description

There are $n$ independent jobs $J = \{J_1, J_2, \cdots, J_n\}$ to be processed on $m$ uniform parallel machines, and each machine can process at most one job at a time. Preemption is not allowed. For the problems $Qm \mid p_j = a_j + bt_j \mid \sum C_j$, $Qm \mid p_j = a_j + bt_j \mid \sum C^i_{\max}$ and $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$, we assume that all jobs are simultaneously available at time 0 and each job $J_j$ has a positive deteriorating rate $b$ or $b_j$. For the problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$, all the jobs are simultaneously available at time $t_0 > 0$; each job $J_j$ has a positive deteriorating rate $b_j$ and a subsequent nonnegative delivery time $q_j$.

Let $M_i$ and $s_i$ denote the $i$th machine in the system and its speed factor, respectively. Given a schedule, we use $n_i$ to denote the number of jobs scheduled on machine $M_i$, and $J_{[i,r]}$ denotes the job scheduled in position $r$ on machine $M_i$. Let $C_{[i,r]}$, $C^i_{\max}$, and $L_{[i,r]}$ denote the completion time of job $J_{[i,r]}$, the total load on machine $M_i$, and the delivery completion time of job $J_{[i,r]}$, respectively, for $i = 1, 2, \cdots, m$, $r = 1, 2, \cdots, n_i$, and $\sum_{i=1}^{m} n_i = n$.

For $Qm \mid p_j = a_j + bt_j \mid \sum C_j$ and $Qm \mid p_j = a_j + bt_j \mid \sum C^i_{\max}$, the completion time of each job scheduled on $M_i$ can be expressed as follows:

$$C_{[i,1]} = \frac{a_{[i,1]}}{s_i},$$

$$C_{[i,2]} = C_{[i,1]} + \frac{a_{[i,2]} + bC_{[i,1]}}{s_i} = \frac{a_{[i,2]}}{s_i} + \left(1 + \frac{b}{s_i}\right)\frac{a_{[i,1]}}{s_i},$$

$$C_{[i,3]} = C_{[i,2]} + \frac{a_{[i,3]} + bC_{[i,2]}}{s_i} = \frac{a_{[i,3]}}{s_i} + \left(1 + \frac{b}{s_i}\right)\frac{a_{[i,2]}}{s_i} + \left(1 + \frac{b}{s_i}\right)^2\frac{a_{[i,1]}}{s_i},$$

$$\vdots$$

$$C_{[i,r]} = C_{[i,r-1]} + \frac{a_{[i,r]} + bC_{[i,r-1]}}{s_i} = \frac{a_{[i,r]}}{s_i} + \left(1 + \frac{b}{s_i}\right)\frac{a_{[i,r-1]}}{s_i} + \cdots + \left(1 + \frac{b}{s_i}\right)^{r-1}\frac{a_{[i,1]}}{s_i},$$

$$C^i_{\max} = C_{[i,n_i]} = \frac{a_{[i,n_i]}}{s_i} + \left(1 + \frac{b}{s_i}\right)\frac{a_{[i,n_i-1]}}{s_i} + \cdots + \left(1 + \frac{b}{s_i}\right)^{n_i-1}\frac{a_{[i,1]}}{s_i}.$$

The total completion time and the total load can be written as follows:

$$\sum_{i=1}^{m}\sum_{r=1}^{n_i} C_{[i,r]} = \sum_{i=1}^{m}\sum_{r=1}^{n_i} \frac{1}{s_i}\left(1 + \left(1 + \frac{b}{s_i}\right) + \left(1 + \frac{b}{s_i}\right)^2 + \cdots + \left(1 + \frac{b}{s_i}\right)^{n_i - r}\right) a_{[i,r]},$$

$$\sum_{i=1}^{m} C^i_{\max} = \sum_{i=1}^{m}\sum_{r=1}^{n_i} \frac{1}{s_i}\left(1 + \frac{b}{s_i}\right)^{n_i - r} a_{[i,r]}.$$

For the problem $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$, the total load on $M_i$ can be expressed as follows:

$$C^i_{\max} = C_{[i,n_i]} = \frac{a_{[i,n_i]}}{s_i} + \sum_{j=1}^{n_i-1}\left(1 + \frac{b_{[i,n_i]}}{s_i}\right)\left(1 + \frac{b_{[i,n_i-1]}}{s_i}\right)\cdots\left(1 + \frac{b_{[i,j+1]}}{s_i}\right)\frac{a_{[i,j]}}{s_i}.$$

For the problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$, the delivery completion time of job $J_{[i,r]}$ can be written as
$$L_{[i,r]} = C_{[i,r]} + q_{[i,r]} = t_0\prod_{j=1}^{r}\left(1 + \frac{b_{[i,j]}}{s_i}\right) + q_{[i,r]}.$$
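To make these recursions concrete, the following Python sketch evaluates them numerically for the jobs assigned to one machine. It is an illustration added here (not part of the original paper), and the instance at the bottom is hypothetical.

```python
def machine_schedule_metrics(a, b, s, t0=0.0):
    """Completion times on one machine for linearly deteriorating jobs.

    a  : basic processing times a_[i,r] in processing order (all zeros for p_j = b_j*t_j)
    b  : common deteriorating rate (a number) or a list of per-job rates b_[i,r]
    s  : speed factor s_i of the machine
    t0 : start time of the first job (0 here; t0 > 0 in the delivery-time model)
    """
    rates = b if isinstance(b, list) else [b] * len(a)
    completion, t = [], t0
    for a_r, b_r in zip(a, rates):
        # a job started at time t needs (a_r + b_r * t) / s time units on this machine
        t = t + (a_r + b_r * t) / s
        completion.append(t)
    total_completion_time = sum(completion)            # contribution to sum C_j
    total_load = completion[-1] if completion else t0  # C^i_max
    return completion, total_completion_time, total_load


# Hypothetical data: three jobs on a machine of speed 2 with common rate b = 0.5.
print(machine_schedule_metrics(a=[4.0, 2.0, 3.0], b=0.5, s=2.0))
```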

3 Minimizing Total Completion Time and Minimizing Total Load

3.1 Polynomial Algorithms

Lemma 1 Let $x_i$ and $y_i$ be two sequences of numbers. Then the sum $\sum_i x_i y_i$ of products of corresponding elements is smallest if the sequences are ordered monotonically in opposite senses.

Proof The proof is obtained from [11]. $\square$
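For instance, with sequences $x = (1, 2)$ and $y = (3, 4)$, the oppositely ordered pairing gives $1\cdot 4 + 2\cdot 3 = 10$, whereas the like-ordered pairing gives $1\cdot 3 + 2\cdot 4 = 11$, so the opposite ordering indeed minimizes the sum of products.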

Then, according to Lemma 1 and the expression for the total completion time, we can construct a polynomial algorithm for the problem $Qm \mid p_j = a_j + bt_j \mid \sum C_j$.

Algorithm 1

Step 1 Select the $n$ smallest numbers from the set $\{\frac{1}{s_i},\ \frac{1}{s_i} + \frac{1}{s_i}(1 + \frac{b}{s_i}),\ \frac{1}{s_i} + \frac{1}{s_i}(1 + \frac{b}{s_i}) + \frac{1}{s_i}(1 + \frac{b}{s_i})^2,\ \frac{1}{s_i} + \frac{1}{s_i}(1 + \frac{b}{s_i}) + \frac{1}{s_i}(1 + \frac{b}{s_i})^2 + \frac{1}{s_i}(1 + \frac{b}{s_i})^3, \cdots \mid i = 1, 2, \cdots, m\}$ and place these numbers in a nondecreasing sequence.

Step 2 Sort the jobs in nonincreasing order of their normal processing times $a_j$ (i.e., $a_n \geqslant a_{n-1} \geqslant \cdots \geqslant a_1$).

Step 3 Form a correspondence between these numbers and the jobs: if $J_j$ corresponds to $\frac{1}{s_i} + \frac{1}{s_i}(1 + \frac{b}{s_i}) + \cdots + \frac{1}{s_i}(1 + \frac{b}{s_i})^{k-1}$, then schedule job $J_j$ on machine $M_i$ in the $k$th-to-last position.
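The following Python sketch implements Algorithm 1 (added here for illustration only; the instance at the bottom is hypothetical). It generates the $n$ smallest positional weights lazily with a heap, exploiting the fact that each machine's weight sequence is increasing.

```python
import heapq

def algorithm1(a, b, s):
    """Sketch of Algorithm 1 for Qm | p_j = a_j + b*t_j | sum C_j.

    a : list of basic processing times a_j
    b : common deteriorating rate
    s : list of machine speed factors s_i
    Returns, for each machine, the list of job indices in processing order.
    """
    n, m = len(a), len(s)
    # w_i(k) = sum_{t=0}^{k-1} (1/s_i)*(1 + b/s_i)^t is the coefficient a job gets in
    # sum C_j when placed kth from the end on M_i; each sequence is increasing, so a
    # size-m heap yields the n smallest weights lazily.
    heap = [(1.0 / s[i], i, 1, 1.0 / s[i]) for i in range(m)]  # (weight, machine, k, last term)
    heapq.heapify(heap)
    smallest = []
    while len(smallest) < n:
        w, i, k, term = heapq.heappop(heap)
        smallest.append((w, i, k))
        nxt = term * (1.0 + b / s[i])
        heapq.heappush(heap, (w + nxt, i, k + 1, nxt))

    # Match jobs sorted by nonincreasing a_j with weights in nondecreasing order.
    jobs = sorted(range(n), key=lambda j: -a[j])
    slots = {}  # (machine, k-th position from the end) -> job index
    for (w, i, k), j in zip(smallest, jobs):
        slots[(i, k)] = j

    # Rebuild each machine's processing order (largest k runs first).
    schedule = [[] for _ in range(m)]
    for (i, k), j in sorted(slots.items(), key=lambda item: (item[0][0], -item[0][1])):
        schedule[i].append(j)
    return schedule

# Hypothetical instance: 5 jobs, 2 machines with speeds 1 and 2, b = 0.5.
print(algorithm1(a=[5.0, 3.0, 8.0, 2.0, 6.0], b=0.5, s=[1.0, 2.0]))
```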

Theorem 1 Algorithm 1 gives an optimal schedule for the problem $Qm \mid p_j = a_j + bt_j \mid \sum C_j$ in $O(n\log n)$ time.

Proof Suppose that $\sigma^*$ is an optimal schedule which is different from the schedule $\sigma$ obtained by Algorithm 1. Let $k$ be the largest index such that job $J_k$ is scheduled on different machines in $\sigma^*$ and $\sigma$. Let job $J_k$ be scheduled on machine $M_i$ in $\sigma^*$, and let $r_i$ denote the number of jobs scheduled to be processed after and including job $J_k$ on machine $M_i$ in $\sigma^*$. Let job $J_k$ be scheduled on machine $M_j$ in $\sigma$, and let $r_j$ denote the number of jobs scheduled to be processed after and including job $J_k$ on machine $M_j$ in $\sigma$. Since $\sigma^*$ and $\sigma$ are identical for job indices greater than $k$ and Algorithm 1 assigned job $J_k$ to machine $M_j$, it follows that

$$\frac{1}{s_j} + \frac{1}{s_j}\left(1 + \frac{b}{s_j}\right) + \cdots + \frac{1}{s_j}\left(1 + \frac{b}{s_j}\right)^{r_j - 1} \leqslant \frac{1}{s_i} + \frac{1}{s_i}\left(1 + \frac{b}{s_i}\right) + \cdots + \frac{1}{s_i}\left(1 + \frac{b}{s_i}\right)^{r_i - 1}.$$

Let $J_l$ be the job in position $r_j$ (from the end) on machine $M_j$ in $\sigma^*$. By interchanging the positions of jobs $J_k$ and $J_l$ in $\sigma^*$, the objective value of $\sigma^*$ changes by

$$\begin{aligned}
&\left(\sum_{r=0}^{r_j - 1}\frac{1}{s_j}\left(1 + \frac{b}{s_j}\right)^r\right)a_k + \left(\sum_{r=0}^{r_i - 1}\frac{1}{s_i}\left(1 + \frac{b}{s_i}\right)^r\right)a_l - \left(\sum_{r=0}^{r_j - 1}\frac{1}{s_j}\left(1 + \frac{b}{s_j}\right)^r\right)a_l - \left(\sum_{r=0}^{r_i - 1}\frac{1}{s_i}\left(1 + \frac{b}{s_i}\right)^r\right)a_k \\
&\qquad = \left(\sum_{r=0}^{r_j - 1}\frac{1}{s_j}\left(1 + \frac{b}{s_j}\right)^r - \sum_{r=0}^{r_i - 1}\frac{1}{s_i}\left(1 + \frac{b}{s_i}\right)^r\right)(a_k - a_l) \leqslant 0.
\end{aligned}$$

Based on the definition of $k$ and the ordering of the jobs' normal processing times, it follows that $a_k - a_l \geqslant 0$, so the interchange does not increase the objective value. A finite number of repetitions of this argument establishes that there exists an optimal schedule in which the jobs are sequenced as in Algorithm 1. $\square$

Similarly, we give an optimal algorithm for the problem $Qm \mid p_j = a_j + bt_j \mid \sum C^i_{\max}$.

Algorithm 2

Step 1 Select the $n$ smallest numbers from the set $\{\frac{1}{s_i},\ \frac{1}{s_i}(1 + \frac{b}{s_i}),\ \frac{1}{s_i}(1 + \frac{b}{s_i})^2, \cdots \mid i = 1, 2, \cdots, m\}$ and place these numbers in a nondecreasing sequence.

Step 2 Sort the jobs in nonincreasing order of their normal processing times $a_j$.
Step 3 Form a correspondence between these numbers and the jobs: if $J_j$ corresponds to $\frac{1}{s_i}(1 + \frac{b}{s_i})^{k-1}$, then schedule job $J_j$ on machine $M_i$ in the $k$th-to-last position.

Theorem 2 Algorithm 2 gives an optimal schedule for the problem $Qm \mid p_j = a_j + bt_j \mid \sum C^i_{\max}$ in $O(n\log n)$ time.

Proof The proof is similar to that of Theorem 1. $\square$

3.2 An FPTAS

An algorithm $A$ is a $\rho$-approximation algorithm ($\rho \geqslant 1$) for a minimization problem if it produces a solution with value at most $\rho$ times that of an optimal one ($\rho$ is also referred to as the worst-case ratio). A family of algorithms $\{A_\varepsilon : \varepsilon > 0\}$ is called a fully polynomial-time approximation scheme (FPTAS) if for each $\varepsilon > 0$ the algorithm $A_\varepsilon$ is a $(1 + \varepsilon)$-approximation algorithm running in time polynomial in the input size and $1/\varepsilon$. In the sequel, we assume $0 < \varepsilon \leqslant 1$.

Since the problem $P2 \mid p_j = b_j t_j, r_j = t_0 \mid \sum C^i_{\max}$ has been proved to be ordinarily NP-hard by Mosheiov [5], $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$ is at least NP-hard in the ordinary sense. In this section, we propose a fully polynomial-time approximation scheme for the scheduling problem $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$. In order to use the procedure Partition proposed by Kovalyov and Kubiak [12], which requires that a function used within it be a nonnegative integer function, we first modify the original objective function $\sum C^i_{\max}$ to satisfy this restriction without affecting the sequence. The transformation from the original objective function to a nonnegative integer function is given by the following scaling procedure:

For any $i \in \{1, \cdots, m\}$ and $j \in \{1, \cdots, n\}$, define $\theta_1 = \min\{\frac{a_j}{s_i}\}$ and $\theta_2 = \min\{1 + \frac{b_j}{s_i}\}$. For simplicity, we suppose that $\theta_1$ and $\theta_2$ are finite decimals. Indeed, data are always obtained within some error range in industrial production; if the error range is set to be infinitesimal, the values will be real numbers. Find integers $l_1, l_2 \in \mathbb{N}^+$ such that $10^{l_1}\theta_1 \in \mathbb{N}^+$ and $10^{l_2}\theta_2 \in \mathbb{N}^+$. Since
$$C_{[i,r]} = \frac{a_{[i,r]}}{s_i} + \sum_{j=1}^{r-1}\left(1 + \frac{b_{[i,r]}}{s_i}\right)\left(1 + \frac{b_{[i,r-1]}}{s_i}\right)\cdots\left(1 + \frac{b_{[i,j+1]}}{s_i}\right)\frac{a_{[i,j]}}{s_i},$$
$10^{l_1+(r-1)l_2}C_{[i,r]}$ can be verified to be an integer. Then define $L = 10^{l_1+nl_2}$, where $n$ is the total number of jobs; the transformed objective function can be expressed as $L\sum C^i_{\max}$. We use the new objective function instead of the original one in the following, and the problem under consideration is denoted by $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$.

Gupta and Gupta [13] proved that the problem $1 \mid p_j = a_j + b_j t_j \mid C_{\max}$ is solved by sequencing the jobs in nondecreasing order of $a_j/b_j$, which leads to the following lemma.

Lemma 2 For $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$, in an optimal solution the jobs on each machine $M_i$ ($i \in \{1, \cdots, m\}$) are sequenced in nondecreasing order of $\frac{a_j}{b_j}$.

Based on Lemma 2, it is natural to consider the jobs in nondecreasing order of $a_j/b_j$. So we index the jobs such that $a_1/b_1 \leqslant a_2/b_2 \leqslant \cdots \leqslant a_n/b_n$. We introduce variables $x_j$, $j = 1, 2, \cdots, n$, where $x_j = i$ if job $J_j$ is scheduled on machine $M_i$, $i = 1, 2, \cdots, m$. Let $X$ be the set of all vectors $x = (x_1, x_2, \cdots, x_n)$ with $x_j \in \{1, 2, \cdots, m\}$, $j = 1, 2, \cdots, n$. We define the following initial and recursive functions on $X$:

Initial function: $F^i_0(x) = 0$, $i = 1, \cdots, m$.

Recursive functions, for $j = 1, \cdots, n$:

$$F^i_j(x) = \frac{a_j}{s_i}10^{l_1 + jl_2} + F^i_{j-1}(x)\left(1 + \frac{b_j}{s_i}\right)\cdot 10^{l_2}, \quad i = x_j,$$
$$F^i_j(x) = F^i_{j-1}(x)\cdot 10^{l_2}, \quad i \neq x_j,$$
$$G_j(x) = \sum_{i=1}^{m} F^i_j(x),$$

where $F^i_j(x)$ is the magnified workload of machine $M_i$ for the jobs among $J_1, J_2, \cdots, J_j$.

Note that in the recursive functions the workload of each machine is enlarged by a scale factor of exactly $10^{l_1 + nl_2}$. Therefore, $\sum_{i=1}^{m} F^i_n(x)$ is our desired value for a given $x$, and the problem $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$ reduces to the following minimization problem:

$$\min G_n(x), \quad x \in X.$$

First, we present the procedure Partition($A, h, \delta$) proposed by Kovalyov and Kubiak [12], where $A \subseteq X$, $h$ is a nonnegative integer function on $X$, and $0 < \delta \leqslant 1$. This procedure partitions $A$ into disjoint subsets $A^h_1, A^h_2, \cdots, A^h_{k_h}$ such that $|h(x) - h(x')| \leqslant \delta \min\{h(x), h(x')\}$ for any $x, x'$ from the same subset $A^h_j$, $j = 1, 2, \cdots, k_h$. The following description gives the details of Partition($A, h, \delta$).

Procedure Partition($A, h, \delta$)

Arrange the vectors $x \in A$ in an order $x^{(1)}, x^{(2)}, \cdots, x^{(|A|)}$ such that $0 \leqslant h(x^{(1)}) \leqslant h(x^{(2)}) \leqslant \cdots \leqslant h(x^{(|A|)})$.
Assign vectors $x^{(1)}, x^{(2)}, \cdots, x^{(i_1)}$ to the set $A^h_1$, where $i_1$ is the index such that $h(x^{(i_1)}) \leqslant (1 + \delta)h(x^{(1)})$ and $h(x^{(i_1+1)}) > (1 + \delta)h(x^{(1)})$. If such an $i_1$ does not exist, then set $A^h_1 = A$ and stop.
Assign vectors $x^{(i_1+1)}, x^{(i_1+2)}, \cdots, x^{(i_2)}$ to the set $A^h_2$, where $i_2$ is the index such that $h(x^{(i_2)}) \leqslant (1 + \delta)h(x^{(i_1+1)})$ and $h(x^{(i_2+1)}) > (1 + \delta)h(x^{(i_1+1)})$. If such an $i_2$ does not exist, then set $A^h_2 = A \setminus A^h_1$ and stop.
Continue the above construction until $x^{(|A|)}$ is included in $A^h_{k_h}$ for some $k_h$.

Obviously, procedure Partition($A, h, \delta$) requires $O(|A|\log|A|)$ operations to arrange the vectors of $A$ in nondecreasing order of $h(x)$ and $O(|A|)$ operations to provide a partition. The two properties of Partition($A, h, \delta$) that will be used in the development of our FPTAS were presented in [12] as follows:

Lemma 3 $|h(x) - h(x')| \leqslant \delta\min\{h(x), h(x')\}$ for any $x, x' \in A^h_j$, $j = 1, 2, \cdots, k_h$.

Lemma 4 $k_h \leqslant \frac{\log h(x^{(|A|)})}{\delta} + 2$ if $h(x^{(|A|)}) \geqslant 1$.

Furthermore, we need the following lemma which was presented in [14].

Lemma 5 For any $0 < x \leqslant 1$ and any integer $n \geqslant 1$, $(1 + \frac{x}{n})^n \leqslant 1 + 2x$ holds.

Motivated by the idea of Kovalyov and Kubiak [12], a formal description of the FPTAS $A_\varepsilon$ for the problem $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$ is given below.


Algorithm $A_\varepsilon$

Step 1 (Initialization) Number the jobs so that $\frac{a_1}{b_1} \leqslant \frac{a_2}{b_2} \leqslant \cdots \leqslant \frac{a_n}{b_n}$. Set $Y_0 = \{(0, 0, \cdots, 0)\}$, $F^i_0 = 0$ for $i = 1, 2, \cdots, m$, and $j = 1$.

Step 2 (Generation of $Y_1, Y_2, \cdots, Y_n$) For the set $Y_{j-1}$, generate the set $Y'_j$ by adding $k$, $k = 1, 2, \cdots, m$, in position $j$ of each vector from $Y_{j-1}$. For any $x \in Y'_j$, calculate

$$F^i_j(x) = \frac{a_j}{s_i}10^{l_1 + jl_2} + F^i_{j-1}(x)\left(1 + \frac{b_j}{s_i}\right)\cdot 10^{l_2}, \quad i = x_j,$$
$$F^i_j(x) = F^i_{j-1}(x)\cdot 10^{l_2}, \quad i \neq x_j,$$
$$G_j(x) = \sum_{i=1}^{m} F^i_j(x).$$

If $j = n$, then set $Y_n = Y'_n$ and go to Step 3.
If $j < n$, then set $\delta = \frac{\varepsilon}{2(n+1)}$ and perform the following computations:
Call Partition($Y'_j, F^i_j, \delta$), $i = 1, 2, \cdots, m$, to partition the set $Y'_j$ into disjoint subsets $Y^{F^i}_1, Y^{F^i}_2, \cdots, Y^{F^i}_{k_{F^i}}$.
Divide the set $Y'_j$ into disjoint subsets $Y_{c_1, \cdots, c_m} = Y^{F^1}_{c_1} \cap \cdots \cap Y^{F^m}_{c_m}$, $c_1 = 1, 2, \cdots, k_{F^1}$; $\cdots$; $c_m = 1, 2, \cdots, k_{F^m}$. For each nonempty subset $Y_{c_1, \cdots, c_m}$, choose a vector $x^{(c_1, \cdots, c_m)}$ such that
$$G_j\left(x^{(c_1, \cdots, c_m)}\right) = \min\left\{G_j(x) \mid x \in Y_{c_1, \cdots, c_m}\right\}.$$
Set $Y_j := \{x^{(c_1, \cdots, c_m)} : Y^{F^1}_{c_1} \cap \cdots \cap Y^{F^m}_{c_m} \neq \emptyset$, where $c_1 = 1, 2, \cdots, k_{F^1}$; $\cdots$; $c_m = 1, 2, \cdots, k_{F^m}\}$ and set $j := j + 1$. Repeat Step 2.

Step 3 (Solution) Select a vector $x^0 \in Y_n$ such that
$$G_n\left(x^0\right) = \min\left\{G_n(x) \mid x \in Y_n\right\}.$$
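The following Python sketch mirrors the main steps of Algorithm $A_\varepsilon$ (an illustration added here, not the authors' implementation). It works with unscaled floating-point workloads, since the $10^{l_1+jl_2}$ scaling only serves the integrality requirement of the Partition analysis, and it uses a simplified per-machine grouping that preserves the same $(1+\delta)$ spread guarantee; the instance in the last line is hypothetical.

```python
def fptas_total_load(a, b, s, eps):
    """Sketch of Algorithm A_eps for Qm | p_j = a_j + b_j*t_j | sum C^i_max.

    a, b : basic processing times a_j and deteriorating rates b_j
    s    : machine speed factors s_i
    eps  : accuracy parameter, 0 < eps <= 1
    Returns (assignment vector x, approximate total load).
    """
    n, m = len(a), len(s)
    jobs = sorted(range(n), key=lambda j: a[j] / b[j])   # Lemma 2 order
    delta = eps / (2 * (n + 1))

    def group_index(value, anchors):
        # index of the (1 + delta)-group that `value` falls into, given sorted anchors
        lo = 0
        while lo < len(anchors) and value > (1 + delta) * anchors[lo]:
            lo += 1
        return lo

    # A state is (x, F): the partial assignment and the per-machine workloads.
    states = [((), tuple(0.0 for _ in range(m)))]
    for j in jobs:
        new_states = []
        for x, F in states:
            for k in range(m):
                load = list(F)
                load[k] = a[j] / s[k] + load[k] * (1 + b[j] / s[k])
                new_states.append((x + (k,), tuple(load)))
        # Trim: keep one cheapest state per combination of per-machine groups.
        anchors = [sorted(set(F[i] for _, F in new_states)) for i in range(m)]
        keep = {}
        for x, F in new_states:
            key = tuple(group_index(F[i], anchors[i]) for i in range(m))
            if key not in keep or sum(F) < sum(keep[key][1]):
                keep[key] = (x, F)
        states = list(keep.values())

    x, F = min(states, key=lambda st: sum(st[1]))
    # Map the assignment back to the original job numbering.
    assignment = [None] * n
    for pos, j in enumerate(jobs):
        assignment[j] = x[pos]
    return assignment, sum(F)

# Hypothetical instance: 4 jobs, 2 machines.
print(fptas_total_load(a=[3.0, 1.0, 4.0, 2.0], b=[0.5, 0.2, 1.0, 0.8], s=[1.0, 2.0], eps=0.5))
```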

Let $x^* = (x^*_1, x^*_2, \cdots, x^*_n)$ be an optimal solution for the problem $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$, and write $x^*[j] = (x^*_1, \cdots, x^*_j, 0, \cdots, 0)$ for $1 \leqslant j \leqslant n$.

We have the following theorem.

Theorem 3 For the problem $Qm \mid p_j = a_j + b_j t_j \mid L\sum C^i_{\max}$, Algorithm $A_\varepsilon$ finds a solution $x^0$ such that $G_n(x^0) \leqslant (1 + \varepsilon)G_n(x^*)$.

Proof Suppose that $x^*[j] = (x^*_1, \cdots, x^*_j, 0, \cdots, 0) \in Y_{c_1, \cdots, c_m} \subseteq Y'_j$ for some $j$ and $c_1, \cdots, c_m$. By the definition of Algorithm $A_\varepsilon$, such a $j$ always exists, for instance $j = 1$. Algorithm $A_\varepsilon$ may not choose $x^*[j]$ for further construction; instead, a vector $x^{(c_1, \cdots, c_m)}$ is chosen. By Lemma 3, we have
$$\left|F^i_j\left(x^*[j]\right) - F^i_j\left(x^{(c_1, \cdots, c_m)}\right)\right| \leqslant \delta F^i_j\left(x^*[j]\right), \quad i = 1, 2, \cdots, m.$$


Setting $\delta_1 = \delta$, we consider the vectors $x^*[j+1] = (x^*_1, \cdots, x^*_j, x^*_{j+1}, 0, \cdots, 0)$ and $\tilde{x}^{(c_1, \cdots, c_m)} = (x^{(c_1, \cdots, c_m)}_1, \cdots, x^{(c_1, \cdots, c_m)}_j, x^*_{j+1}, 0, \cdots, 0)$.
For $x^*_{j+1} = i$, $i \in \{1, 2, \cdots, m\}$, we have

$$\begin{aligned}
\left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right)\right|
&= \left|\frac{a_{j+1}}{s_i}10^{l_1+(j+1)l_2} + F^i_j\left(x^*[j]\right)\left(1 + \frac{b_{j+1}}{s_i}\right)10^{l_2} - \frac{a_{j+1}}{s_i}10^{l_1+(j+1)l_2} - F^i_j\left(x^{(c_1, \cdots, c_m)}\right)\left(1 + \frac{b_{j+1}}{s_i}\right)10^{l_2}\right| \\
&= \left(1 + \frac{b_{j+1}}{s_i}\right)10^{l_2}\left|F^i_j\left(x^*[j]\right) - F^i_j\left(x^{(c_1, \cdots, c_m)}\right)\right| \\
&\leqslant \left(1 + \frac{b_{j+1}}{s_i}\right)10^{l_2}\,\delta F^i_j\left(x^*[j]\right) \\
&\leqslant \delta_1 F^i_{j+1}\left(x^*[j+1]\right).
\end{aligned}$$

Similarly, for $x^*_{j+1} \neq i$, we have
$$\left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right)\right| \leqslant \delta_1 F^i_{j+1}\left(x^*[j+1]\right).$$

As a consequence, we have
$$\left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right)\right| \leqslant \delta_1 F^i_{j+1}\left(x^*[j+1]\right), \quad i = 1, 2, \cdots, m. \tag{1}$$

This leads to
$$F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right) \leqslant (1 + \delta_1)F^i_{j+1}\left(x^*[j+1]\right), \quad i = 1, 2, \cdots, m. \tag{2}$$

Assume that $\tilde{x}^{(c_1, \cdots, c_m)} \in Y_{b_1, \cdots, b_m} \subseteq Y'_{j+1}$ and Algorithm $A_\varepsilon$ chooses $x^{(b_1, \cdots, b_m)} \in Y_{b_1, \cdots, b_m}$ instead of $\tilde{x}^{(c_1, \cdots, c_m)}$ in the $(j+1)$th iteration. From (2) and by Lemma 3, for $i = 1, 2, \cdots, m$, we have
$$\left|F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right) - F^i_{j+1}\left(x^{(b_1, \cdots, b_m)}\right)\right| \leqslant \delta F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right) \leqslant \delta(1 + \delta_1)F^i_{j+1}\left(x^*[j+1]\right). \tag{3}$$

From (1) and (3), for $i = 1, 2, \cdots, m$, we obtain
$$\begin{aligned}
\left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(x^{(b_1, \cdots, b_m)}\right)\right|
&\leqslant \left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right)\right| + \left|F^i_{j+1}\left(\tilde{x}^{(c_1, \cdots, c_m)}\right) - F^i_{j+1}\left(x^{(b_1, \cdots, b_m)}\right)\right| \\
&\leqslant \left(\delta_1 + \delta(1 + \delta_1)\right)F^i_{j+1}\left(x^*[j+1]\right) \\
&= \left(\delta + \delta_1(1 + \delta)\right)F^i_{j+1}\left(x^*[j+1]\right).
\end{aligned} \tag{4}$$


Set $\delta_l = \delta + \delta_{l-1}(1 + \delta)$, $l = 2, 3, \cdots, n - j + 1$. From (4), we have
$$\left|F^i_{j+1}\left(x^*[j+1]\right) - F^i_{j+1}\left(x^{(b_1, \cdots, b_m)}\right)\right| \leqslant \delta_2 F^i_{j+1}\left(x^*[j+1]\right), \quad i = 1, 2, \cdots, m.$$

By repeating the above argument for $j + 2, \cdots, n$, we eventually show that there exists an $x' \in Y_n$ such that
$$\left|F^i_n\left(x^*\right) - F^i_n\left(x'\right)\right| \leqslant \delta_{n-j+1}F^i_n\left(x^*\right), \quad i = 1, 2, \cdots, m.$$

By Lemma 5, we have
$$\delta_{n-j+1} \leqslant \delta\sum_{j=0}^{n}(1 + \delta)^j = (1 + \delta)^{n+1} - 1 \leqslant \left(1 + \frac{\varepsilon}{2(n+1)}\right)^{n+1} - 1 \leqslant \varepsilon.$$

Therefore,
$$F^i_n\left(x'\right) \leqslant (1 + \delta_{n-j+1})F^i_n\left(x^*\right) \leqslant (1 + \varepsilon)F^i_n\left(x^*\right), \quad i = 1, 2, \cdots, m. \tag{5}$$

From (5), we have
$$G_n\left(x'\right) = \sum_{i=1}^{m}F^i_n\left(x'\right) \leqslant (1 + \varepsilon)\sum_{i=1}^{m}F^i_n\left(x^*\right) = (1 + \varepsilon)G_n\left(x^*\right).$$

By Step 3 of Algorithm $A_\varepsilon$, a vector $x^0$ will be chosen such that
$$G_n\left(x^0\right) \leqslant G_n\left(x'\right) \leqslant (1 + \varepsilon)G_n\left(x^*\right).$$

This completes the proof. $\square$

Define $L = \log\max\{n, \frac{1}{\varepsilon}, A_{\max}, 1 + B_{\max}, 10^{l_1}, 10^{l_2}\}$, where $A_{\max} = \max\{\frac{a_j}{s_i}\}$ and $B_{\max} = \max\{\frac{b_j}{s_i}\}$ for $i = 1, \cdots, m$ and $j = 1, \cdots, n$. We have the following theorem.

Theorem 4 Algorithm $A_\varepsilon$ requires $O\left(\frac{n^{2m+1}L^{m+1}}{\varepsilon^m}\right)$ time.

Proof The time complexity of Algorithm $A_\varepsilon$ can be established by noting that the most time-consuming operation in iteration $j$ of Step 2 is a call of procedure Partition, which requires $O(|Y'_j|\log|Y'_j|)$ time to complete. To estimate $|Y'_j|$, recall that
$$\left|Y'_{j+1}\right| \leqslant m|Y_j| \leqslant mk_{F^1}k_{F^2}\cdots k_{F^m}.$$
By Lemma 4, we have
$$k_{F^i} \leqslant \frac{2(n+1)\log\left(10^{l_1+nl_2}\,nA_{\max}(1 + B_{\max})^n\right)}{\varepsilon} + 2 \leqslant \frac{2(n+1)(2n+3)L}{\varepsilon} + 2, \quad i = 1, 2, \cdots, m.$$


Thus $|Y'_j| = O\left(\frac{n^{2m}L^m}{\varepsilon^m}\right)$ and $|Y'_j|\log|Y'_j| = O\left(\frac{n^{2m}L^{m+1}}{\varepsilon^m}\right)$. There are at most $n$ iterations; therefore, Algorithm $A_\varepsilon$ runs in $O\left(\frac{n^{2m+1}L^{m+1}}{\varepsilon^m}\right)$ time. $\square$

4 Minimizing the Maximum Delivery-Completion Time

4.1 NP-Hardness

Theorem 5 The problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ is NP-hard.

Proof If all delivery times are zero, the problem $Pm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ is equivalent to the problem $Pm \mid p_j = b_j t_j, r_j = t_0 \mid C_{\max}$. Since $Pm \mid p_j = b_j t_j, r_j = t_0 \mid C_{\max}$ is ordinarily NP-hard according to Kononov [4], $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ is at least NP-hard in the ordinary sense. $\square$

4.2 An FPTAS

In this section, we derive a fully polynomial-time approximation scheme for the NP-hard problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$. This is done by applying the "rounding-the-input-data" technique of Sahni [15] to a given problem instance. The definition of the modified deteriorating rates involves a geometric rounding technique developed by Sengupta [16]. The rounding technique is stated as follows:

For any $\varepsilon' > 0$ and $x \geqslant 1$, if $(1 + \varepsilon')^{k-1} < x < (1 + \varepsilon')^k$, then we define $\lceil x\rceil_{\varepsilon'} = (1 + \varepsilon')^k$ and $\lfloor x\rfloor_{\varepsilon'} = (1 + \varepsilon')^{k-1}$. If $x$ is an exact power of $(1 + \varepsilon')$, then $\lceil x\rceil_{\varepsilon'} = \lfloor x\rfloor_{\varepsilon'} = x$. Note that $\lceil x\rceil_{\varepsilon'} \leqslant (1 + \varepsilon')x$ for any $x \geqslant 1$.
For any $0 < \varepsilon \leqslant 1$, we define the modified deteriorating rate of job $J_j$ as $b'_j = s_i\left(\left\lceil 1 + \frac{b_j}{s_i}\right\rceil_{\varepsilon'} - 1\right)$, where $\varepsilon' = \frac{\varepsilon}{2n}$, for $i = 1, \cdots, m$ and $j = 1, \cdots, n$. Let $\beta_{ji}$ denote the exponent of $1 + \frac{b'_j}{s_i}$, i.e., $1 + \frac{b'_j}{s_i} = \left\lceil 1 + \frac{b_j}{s_i}\right\rceil_{\varepsilon'} = (1 + \varepsilon')^{\beta_{ji}}$; then
$$\beta_{ji} = \frac{\log\left\lceil 1 + \frac{b_j}{s_i}\right\rceil_{\varepsilon'}}{\log(1 + \varepsilon')} = O\left(\frac{n\log\left(1 + \frac{b_j}{s_i}\right)}{\varepsilon}\right).$$
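As a small illustration (not from the paper; the helper names are ours), the rounding of $1 + b_j/s_i$ up to a power of $1 + \varepsilon'$, together with its exponent $\beta_{ji}$, can be computed as follows:

```python
import math

def round_up_geometric(x, eps_prime):
    """Return (ceil_x, k) where ceil_x = (1 + eps_prime)**k is the smallest
    power of (1 + eps_prime) that is >= x, for x >= 1."""
    k = math.ceil(math.log(x) / math.log(1.0 + eps_prime))
    return (1.0 + eps_prime) ** k, k

def modified_rate(b_j, s_i, eps_prime):
    """Modified deteriorating rate b'_j and exponent beta_ji for job J_j on machine M_i."""
    rounded, beta = round_up_geometric(1.0 + b_j / s_i, eps_prime)
    return s_i * (rounded - 1.0), beta
```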

Theorem 6 For any $0 < \varepsilon \leqslant 1$, the optimal objective function value for $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ under the modified deteriorating rates is at most $(1 + \varepsilon)$ times the optimal value for the same problem under the original deteriorating rates.

Proof Let $\sigma$ be an arbitrary feasible schedule. Consider an arbitrary job $J_j$ scheduled in position $r$ on machine $M_i$, $1 \leqslant i \leqslant m$. We denote by $C_j$ the completion time of $J_j$ under the original deteriorating rates, and by $C'_j$ ($C'_{[i,r]}$) the completion time of $J_j$ under the modified deteriorating rates in $\sigma$. By the definition of the modified deteriorating rate $b'_j$, we have
$$1 + \frac{b'_j}{s_i} = \left\lceil 1 + \frac{b_j}{s_i}\right\rceil_{\varepsilon'} \leqslant (1 + \varepsilon')\left(1 + \frac{b_j}{s_i}\right).$$


This yields
$$C'_j = C'_{[i,r]} = t_0\prod_{l=1}^{r}\left(1 + \frac{b'_{[i,l]}}{s_i}\right) \leqslant t_0(1 + \varepsilon')^r\prod_{l=1}^{r}\left(1 + \frac{b_{[i,l]}}{s_i}\right) \leqslant (1 + \varepsilon')^n C_j.$$

Since $(1 + \varepsilon')^n = (1 + \frac{\varepsilon}{2n})^n \leqslant 1 + \varepsilon$ for $0 < \varepsilon \leqslant 2$,
$$L'_j = C'_j + q_j \leqslant (1 + \varepsilon')^n(C_j + q_j) \leqslant (1 + \varepsilon)(C_j + q_j) = (1 + \varepsilon)L_j.$$

As a consequence, we have
$$L'_{\max} = \max\left\{L'_j\right\} \leqslant (1 + \varepsilon)\max\{L_j\} = (1 + \varepsilon)L_{\max}(\sigma).$$
Since the above inequality is valid for any feasible schedule $\sigma$, the result holds. $\square$

Lemma 6 For the single-machine problem $1 \mid p_j = b_j t, r_j = t_0 \mid L_{\max}$, sorting the jobs in nonincreasing order of delivery times yields an optimal schedule.

Proof Consider an optimal schedule $\sigma^*$. If $\sigma^*$ contains jobs that are not sequenced in nonincreasing order of delivery times, then there exists a pair of adjacent jobs $J_i, J_j$ such that job $J_i$ is followed by job $J_j$ but $q_i < q_j$. Assuming that job $J_i$ starts at time $S$, we have

$$L_i\left(\sigma^*\right) = S + b_iS + q_i, \qquad L_j\left(\sigma^*\right) = S + b_iS + b_j(S + b_iS) + q_j.$$

Consider a schedule $\sigma$ which is obtained from $\sigma^*$ by interchanging jobs $J_i$ and $J_j$. Under $\sigma$, we have

$$L_j(\sigma) = S + b_jS + q_j, \qquad L_i(\sigma) = S + b_jS + b_i(S + b_jS) + q_i,$$

then we obtain that

$$L_j\left(\sigma^*\right) > L_j(\sigma), \qquad L_j\left(\sigma^*\right) > L_i(\sigma).$$

Since all other completion times are unchanged, the interchange does not increase the objective value. A finite number of repetitions of this argument shows that sorting the jobs in nonincreasing order of delivery times yields an optimal schedule on a single machine. $\square$

Corollary 1 There exists an optimal job sequence for $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ such that on each machine the jobs are sequenced in nonincreasing order of delivery times.

As a result of Theorem 6, with a factor $(1 + \varepsilon)$ loss, we can design a dynamic program for the problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ under the modified deteriorating rates. Based on Corollary 1, it is natural to consider the jobs in nonincreasing order of $q_j$. Let us index the jobs such that $q_1 \geqslant q_2 \geqslant \cdots \geqslant q_n$. The dynamic programming algorithm is stated as follows:


Algorithm DP Let $\phi_j(\gamma_1, \gamma_2, \cdots, \gamma_m)$ denote the optimal value when the jobs under consideration are $J_1, J_2, \cdots, J_j$ and the maximum completion time of the jobs on machine $M_k$ is $t_0(1 + \varepsilon')^{\gamma_k}$ for $k = 1, 2, \cdots, m$.

Step 1 (Initialization)

$$\phi_0(\gamma_1, \gamma_2, \cdots, \gamma_m) = \begin{cases} t_0, & \text{if } \gamma_1 = \cdots = \gamma_m = 0, \\ +\infty, & \text{otherwise.} \end{cases}$$

Step 2 (Recursion)

$$\phi_j(\gamma_1, \gamma_2, \cdots, \gamma_m) = \min_{1 \leqslant k \leqslant m}\left\{\max\left\{\phi_{j-1}(\gamma_1, \cdots, \gamma_k - \beta_{jk}, \cdots, \gamma_m),\ t_0(1 + \varepsilon')^{\gamma_k} + q_j\right\}\right\}.$$

If $j = n$, go to Step 3. Otherwise, set $j := j + 1$ and repeat Step 2.
Step 3 (Output) The optimal value is determined by

$$\min\left\{\phi_n(\gamma_1, \gamma_2, \cdots, \gamma_m)\ \Big|\ 0 \leqslant \gamma_k \leqslant \sum_{j=1}^{n}\beta_{jk},\ 1 \leqslant k \leqslant m\right\}.$$

The corresponding optimal schedule can be found by backtracking.
Some remarks should be made about Algorithm DP. In Step 2, we assume that $J_j$ is scheduled on machine $M_k$, $1 \leqslant k \leqslant m$. For the jobs $J_1, J_2, \cdots, J_{j-1}$, the maximum completion time of the jobs on machine $M_i$ must be $t_0(1 + \varepsilon')^{\gamma_i}$ for $i = 1, \cdots, k-1, k+1, \cdots, m$, and $t_0(1 + \varepsilon')^{\gamma_k - \beta_{jk}}$ on machine $M_k$. Hence the delivery completion time of $J_j$ is compared with the maximum delivery completion times of the jobs that have been scheduled earlier.
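A compact Python sketch of Algorithm DP is given below (illustrative only, added here; it tabulates $\phi_j$ in a dictionary over the full exponent grid, and the instance at the bottom is hypothetical). It assumes the modified rates, i.e., each pair $(j,k)$ comes with its exponent $\beta_{jk}$.

```python
from itertools import product

def algorithm_dp(beta, q, t0, eps_prime):
    """Sketch of Algorithm DP for Qm | p_j = b_j*t_j, r_j = t0 | L_max under modified rates.

    beta[j][k] : exponent with 1 + b'_j/s_k = (1 + eps_prime)**beta[j][k];
                 jobs are assumed indexed so that q[0] >= q[1] >= ... >= q[n-1]
    q          : delivery times q_j
    t0         : common release time (> 0)
    Returns the optimal L_max value under the modified deteriorating rates.
    """
    n, m = len(q), len(beta[0])
    bounds = [sum(beta[j][k] for j in range(n)) for k in range(m)]  # max exponent per machine

    INF = float("inf")
    # phi_0: only the all-zero state is reachable and carries value t0.
    phi = {g: (t0 if all(v == 0 for v in g) else INF)
           for g in product(*(range(bk + 1) for bk in bounds))}

    for j in range(n):
        new_phi = {}
        for g in phi:
            best = INF
            for k in range(m):  # try scheduling J_j last (so far) on machine M_k
                if g[k] < beta[j][k]:
                    continue
                prev = tuple(v - beta[j][k] if i == k else v for i, v in enumerate(g))
                cand = max(phi[prev], t0 * (1.0 + eps_prime) ** g[k] + q[j])
                best = min(best, cand)
            new_phi[g] = best
        phi = new_phi

    return min(phi.values())


# Hypothetical instance: 3 jobs (q already nonincreasing), 2 machines.
print(algorithm_dp(beta=[[2, 1], [1, 1], [3, 2]], q=[5.0, 3.0, 1.0], t0=1.0, eps_prime=0.25))
```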

Let $L = \log(1 + \frac{b_{\max}}{s_{\min}})$, where $\frac{b_{\max}}{s_{\min}} = \max\{\frac{b_j}{s_i} : 1 \leqslant j \leqslant n,\ 1 \leqslant i \leqslant m\}$. We have the following theorem.

Theorem 7 There exists an FPTAS for the problem $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$ which runs in $O(n^{2m-1}L^{m-1}/\varepsilon^{m-1})$ time.

Proof In the dynamic programming recursion, we have $n$ states for $j$ and at most $\sum_{j=1}^{n}\beta_{jk}$ states for each $\gamma_k$, $1 \leqslant k \leqslant m$, where only $m - 1$ of the values $\gamma_1, \gamma_2, \cdots, \gamma_m$ are independent. So the total complexity is $O\left(n\left(\sum_{j=1}^{n}\beta_{jk}\right)^{m-1}\right) = O\left(n^{2m-1}L^{m-1}/\varepsilon^{m-1}\right)$. $\square$

5 Conclusion

In this paper, we focus on uniform parallel-machine scheduling problems with time-dependent processing times. The actual processing time of a job is a linear increasing function of its starting time. The objectives are to minimize the total completion time of all jobs and the total load on all machines. When all jobs are associated with a common deteriorating rate $b$, we provide optimal $O(n\log n)$ time algorithms for these problems. We also present fully polynomial-time approximation schemes for the problems $Qm \mid p_j = a_j + b_j t_j \mid \sum C^i_{\max}$ and $Qm \mid p_j = b_j t_j, r_j = t_0 \mid L_{\max}$. For future research, it would be interesting to focus on scheduling deteriorating jobs with other objectives.

Acknowledgements We thank the two anonymous reviewers for their helpful and detailed comments on an earlier version of our paper.

References

[1] Browne, S., Yechiali, U.: Scheduling deteriorating jobs on a single processor. Oper. Res. 38, 495–498 (1990)
[2] Mosheiov, G.: Scheduling jobs under simple linear deterioration. Comput. Oper. Res. 21, 653–659 (1994)
[3] Kuo, W.H., Yang, D.L.: Parallel-machine scheduling with time dependent processing times. Theor. Comput. Sci. 393, 204–210 (2008)
[4] Kononov, A.: Scheduling problems with linear increasing processing times. Oper. Res. Proc. 1996, 208–212 (1997)
[5] Mosheiov, G.: Multi-machine scheduling with linear deterioration. Inf. Syst. Oper. Res. 36, 205–214 (1998)
[6] Liu, M., Zheng, F.F., Chu, C.B.: An FPTAS for uniform machine scheduling to minimize makespan with linear deterioration. J. Comb. Optim. 23, 483–492 (2012)
[7] Alidaee, B., Womer, N.K.: Scheduling with time dependent processing times: review and extension. J. Oper. Res. Soc. 50, 711–720 (1999)
[8] Cheng, T.C.E., Ding, Q., Lin, B.M.T.: A concise survey of scheduling with time-dependent processing times. Eur. J. Oper. Res. 152, 1–13 (2004)
[9] Lee, W.C.: A note on deteriorating jobs and learning in single-machine scheduling problems. Int. J. Bus. Econ. 3, 205–214 (2004)
[10] Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discrete Math. 5, 287–326 (1979)
[11] Hardy, G.H., Littlewood, J.E., Pólya, G.: Inequalities. Cambridge University Press, London (1967)
[12] Kovalyov, M.Y., Kubiak, W.: A fully polynomial approximation scheme for minimizing makespan of deteriorating jobs. J. Heuristics 3, 287–297 (1998)
[13] Gupta, J.N.D., Gupta, S.K.: Single facility scheduling with nonlinear processing times. Comput. Ind. Eng. 14, 387–394 (1988)
[14] Engels, D.W., Karger, D.R., Kolliopoulos, S.G., Sengupta, S., Uma, R.N., Wein, J.: Techniques for scheduling with rejection. J. Algorithms 49, 175–191 (2003)
[15] Sahni, S.: Algorithms for scheduling independent tasks. J. ACM 23, 116–127 (1976)
[16] Sengupta, S.: Algorithms and approximation schemes for minimum lateness/tardiness scheduling with rejection. In: Lecture Notes in Computer Science, vol. 2748, pp. 79–90 (2003)