
Optim Lett
DOI 10.1007/s11590-012-0574-5

ORIGINAL PAPER

Scheduling with a position-weighted learning effect

T. C. E. Cheng · Wen-Hung Kuo · Dar-Li Yang

Received: 11 March 2012 / Accepted: 24 September 2012
© Springer-Verlag Berlin Heidelberg 2012

Abstract In general, human learning takes time to build up, which results from a worker gaining experience from repeating similar operations over time. In the early stage of processing a given set of similar jobs, a worker is not familiar with the operations, so his learning effect on the jobs scheduled early is not apparent. On the other hand, when the worker has gained experience in processing the jobs, his learning improves. So a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position. In this paper we introduce a position-weighted learning effect model for scheduling problems. We provide optimal solutions for the single-machine problems to minimize the makespan and the total completion time, and an optimal solution for the single-machine problem to minimize the total tardiness under an agreeable situation. We also consider two special cases of the flowshop problem.

Keywords Scheduling · Learning effect · Makespan · Total completion time · Total tardiness

1 Introduction

Learning and its effect on productivity are well recognized in industry. In general, a worker learns from experience in performing repetitive tasks and the learning effect

T. C. E. Cheng
Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong

W.-H. Kuo (B) · D.-L. Yang
Department of Information Management, National Formosa University, Yun-Lin 632, Taiwan, ROC
e-mail: [email protected]

123


manifests itself in labour productivity improvement. However, a worker is not familiar with the operations at the beginning of processing a set of similar jobs, so his learning effect on the jobs scheduled early is not apparent. On the other hand, when the worker becomes experienced in processing the jobs, he enhances his learning. There are two main classes of learning effect models studied in the scheduling literature. One is position-based learning (see, e.g., [1–5]) and the other is sum-of-processing-time-based learning (see, e.g., [6,7]). For a comprehensive survey on this stream of research, the reader may refer to Biskup [8].

As for the learning effect, Biskup [8] stated that it may be more appropriate to assume that position-based learning takes place during machine setups only, while sum-of-processing-time-based learning occurs when workers have gained experience from producing similar jobs. Practical examples of the former class of learning are memory chips/circuit boards manufacturing and the operation of bottling plants. For the latter class of learning, the production of high-end machine tools and the maintenance of aircraft are two practical examples. Recently, some researchers introduced models that combine the two classes of learning for scheduling problems. Such learning models are appropriate for operations that contain both setup and human learning. For example, an operator performs the operations of drilling holes in parts, where different parts require different hole sizes. Hence, each part (i.e., job) requires a setup time and a drilling time. In such an environment, Wu and Lee [9] proposed a learning model whereby the actual processing time of job i scheduled in position r of a sequence is

p_{ir} = p_i (1 + (∑_{k=1}^{r-1} p_{[k]}) / (∑_{k=1}^{n} p_k))^{a_1} r^{a_2},  (1)

where a_1 < 0 and a_2 < 0 denote the learning indices for sum-of-processing-time-based and position-based learning, respectively, p_i denotes the normal processing time of job i, and the subscript [k] denotes the job in position k of a sequence. They showed that the single-machine problems to minimize the makespan and the total completion time are polynomially solvable under the proposed learning model, and that the problem to minimize the total weighted completion time has a polynomial optimal solution under certain agreeable conditions. Cheng et al. [10] considered a similar learning model whereby the actual processing time of job i scheduled in position r of a sequence is

p_{ir} = p_i (1 − (∑_{k=1}^{r-1} p_{[k]}) / (∑_{k=1}^{n} p_k))^{a_1} r^{a_2},  (2)

where a_1 ≥ 1 and a_2 < 0 denote the learning indices for sum-of-processing-time-based and position-based learning, respectively. They obtained results similar to those of Wu and Lee [9]. Besides, they provided polynomial optimal solutions for some special cases of the m-machine flowshop problems to minimize the makespan and the total completion time. Yin et al. [11] proposed another learning model whereby the actual processing time of job i scheduled in position r of a sequence is


p_{ir} = p_i f(∑_{k=1}^{r-1} p_{[k]}) g(r),  r = 1, 2, . . . , n,  (3)

where ∑_{k=1}^{0} p_{[k]} = 0, f : [0, +∞) → (0, 1] is a differentiable non-increasing function with f′ non-decreasing on [0, +∞) and f(0) = 1, and g : [1, +∞) → (0, 1] is a non-increasing function with g(1) = 1. They showed that the single-machine problems to minimize the makespan and the sum of the kth power of completion times remain polynomially solvable, and the problems to minimize the total weighted completion time and the maximum lateness are polynomially solvable under certain conditions. They also showed that for some special cases of the m-machine permutation flowshop problem, similar results can also be obtained under the proposed learning model.

In general, jobs scheduled in different positions of a sequence should be subject to different effects of human learning. In the early stage of processing a given set of similar jobs, a worker is unfamiliar with the operations, so the learning effect on the jobs scheduled early is not apparent. On the other hand, when the worker becomes used to processing similar jobs, his learning improves. So a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position. In this paper we introduce a position-weighted learning effect model for scheduling problems in the above situation. We consider some single-machine and flowshop scheduling problems under the proposed learning model to minimize three performance measures, namely the makespan (C_max = max{C_i | i = 1, . . . , n}), the total completion time (TC = ∑_{i=1}^{n} C_i), and the total tardiness (∑T_i = ∑_{i=1}^{n} max{C_i − d_i, 0}), where C_i and d_i are the completion time and due date of job i, respectively. We note that our results are the same as those in Cheng et al. [10] and Yin et al. [11] with the same objectives. Also, in [11], a general learning model p_{ir} = p_i f(∑_{k=1}^{r-1} p_{[k]}) g(r) is proposed, so the learning effect is a product of a function of the position r and a function of ∑_{k=1}^{r-1} p_{[k]}. Our learning model is p_{jr} = p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b. If we drop the position weight w_k, our learning model becomes p_{jr} = p_j (1 + ∑_{k=1}^{r-1} p_{[k]})^a r^b, which is one of the models in [11]. Therefore, our results generalize some of the results in [11].

2 Some single-machine scheduling problems

Assume that there are n jobs J_1, J_2, . . . , J_n to be processed on a single machine. All the jobs are non-preemptive. The machine can handle at most one job at a time and cannot stand idle until the last job assigned to it has finished processing. If p_j is the normal processing time of job j, then the actual processing time of job j, if it is scheduled in position r of a sequence, is given by

p_{jr} = p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b,  j, r = 1, . . . , n,  (4)


where a ≤ 0 and b ≤ 0 are the learning indices for sum-of-processing-time-based and position-based learning, respectively, p_{[k]} is the normal processing time of the job scheduled in position k of a sequence, and w_k > 0 is the associated weight of the kth position. Since the worker is not familiar with the job operations at the beginning, the learning effect on a job is less apparent if it is scheduled early in a sequence. Hence, we assume that the position weights form a non-decreasing sequence, i.e., 0 < w_1 ≤ w_2 ≤ · · · ≤ w_n.
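To make model (4) concrete, the following sketch computes the actual processing times and the resulting completion times of a sequence on a single machine. This is our illustration, not code from the paper; the instance data below are made up.

```python
def completion_times(seq, p, w, a, b):
    """Completion times under model (4): the job in position r has actual
    processing time p_j * (1 + sum_{k<r} w_k * p_[k])**a * r**b,
    with learning indices a <= 0, b <= 0 and non-decreasing weights w."""
    C, finish, weighted_sum = [], 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        finish += p[j] * (1.0 + weighted_sum) ** a * r ** b
        C.append(finish)
        weighted_sum += w[r - 1] * p[j]  # w_r * p_[r] enters the sum for later positions
    return C

p = [3.0, 1.0, 2.0]          # made-up normal processing times
w = [0.5, 1.0, 1.5]          # made-up non-decreasing position weights
print(completion_times([0, 1, 2], p, w, -0.5, -0.3))
```

With a = b = 0 the model reduces to ordinary prefix sums of the normal times, and the first scheduled job is never affected by learning since its multiplier is 1.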

2.1 Makespan minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the makespan. For convenience, we denote the general learning effect by LE_pos. Using an extension of the three-field notation for scheduling problems introduced by Graham et al. [12], we denote the problem under study as 1/LE_pos/C_max. We show that the proposed problem can be solved optimally by the shortest processing time (SPT) rule. Before we prove this result, we first introduce the following lemmas, which are extended from the results in Kuo and Yang [6].

Lemma 1 1 − (1 + t)^a l^b + at(1 + t)^{a−1} l^b ≥ 0 if a ≤ 0, b ≤ 0, l ≥ 1, and t ≥ 0.

Proof The proof is similar to that in Kuo and Yang [6] and so is omitted. □

Lemma 2 λ(k − (1 + t)^a l^b) − (k − (1 + λt)^a l^b) ≥ 0 if λ ≥ 1, k ≥ 1, a ≤ 0, b ≤ 0, l ≥ 1, and t ≥ 0.

Proof The proof is similar to that in Kuo and Yang [6] and so is omitted. □

Theorem 1 For the 1/LE_pos/C_max problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of p_i, i.e., in the SPT order.

Proof Let S_1 = (π_1, J_h, J_j, J_i, π_2) denote a sequence with job j (J_j) in position r and job i (J_i) in position r + 1, where π_1 and π_2 are partial sequences of S_1, which may be empty. We show that swapping J_j and J_i in S_1 does not increase the makespan.

Let S_2 be the new sequence created by swapping J_j and J_i. In addition, let C_l(S_1) denote the completion time of J_l in sequence S_1 and C_l(S_2) denote the completion time of J_l in sequence S_2.

To prove that the makespan of the 1/LE_pos/C_max problem is minimized by the SPT rule (i.e., p_i ≤ p_j), it suffices to show that (a) C_j(S_2) ≤ C_i(S_1) and (b) C_m(S_2) ≤ C_m(S_1) for any job J_m in π_2.

We first prove (a) by pairwise interchanges of jobs. Note that

C_i(S_1) = C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b

and


C_j(S_2) = C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b.

Therefore,

C_i(S_1) − C_j(S_2) = (p_j − p_i) + p_i (1 + w_1 p_j)^a 2^b − p_j (1 + w_1 p_i)^a 2^b  if r = 1  (5)

and

C_i(S_1) − C_j(S_2) = (p_j − p_i)(1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b  if r ≥ 2  (6)

since C_h(S_1) = C_h(S_2).

Case I: r = 1. Letting λ = p_j / p_i, we have

p_j = λ p_i.  (7)

Substituting (7) into (5) yields

C_i(S_1) − C_j(S_2) = (λ p_i − p_i) + p_i (1 + λ w_1 p_i)^a 2^b − λ p_i (1 + w_1 p_i)^a 2^b
 = p_i (λ(1 − (1 + w_1 p_i)^a 2^b) − (1 − (1 + λ w_1 p_i)^a 2^b)).  (8)

Letting t = w_1 p_i > 0, we can re-write (8) as

C_i(S_1) − C_j(S_2) = p_i (λ(1 − (1 + t)^a 2^b) − (1 − (1 + λt)^a 2^b)).  (9)

Hence, if λ = p_j / p_i ≥ 1, then from Lemma 2 we have

C_i(S_1) − C_j(S_2) = p_i (λ(k − (1 + t)^a 2^b) − (k − (1 + λt)^a 2^b)) ≥ 0

since t = w_1 p_i > 0, l = 2, and k = 1. That is, in an optimal schedule for the 1/LE_pos/C_max problem, if p_i ≤ p_j, then job i is scheduled before job j.


Case II: r ≥ 2. Letting x = 1 + ∑_{k=1}^{r-1} w_k p_{[k]} > 0, we have

(C_i(S_1) − C_j(S_2)) / (x^a r^b) = (p_j − p_i) + p_i (1 + w_r p_j / x)^a ((r + 1)/r)^b − p_j (1 + w_r p_i / x)^a ((r + 1)/r)^b
 = p_j (1 − (1 + w_r p_i / x)^a ((r + 1)/r)^b) − p_i (1 − (1 + w_r p_j / x)^a ((r + 1)/r)^b).  (10)

Substituting (7) into (10) yields

(C_i(S_1) − C_j(S_2)) / (x^a r^b) = λ p_i (1 − (1 + w_r p_i / x)^a ((r + 1)/r)^b) − p_i (1 − (1 + λ w_r p_i / x)^a ((r + 1)/r)^b)
 = p_i (λ(1 − (1 + w_r p_i / x)^a ((r + 1)/r)^b) − (1 − (1 + λ w_r p_i / x)^a ((r + 1)/r)^b)).  (11)

Letting t = w_r p_i / x, we can re-write (11) as

(C_i(S_1) − C_j(S_2)) / (x^a r^b) = p_i (λ(1 − (1 + t)^a ((r + 1)/r)^b) − (1 − (1 + λt)^a ((r + 1)/r)^b)).

Thus, if λ = p_j / p_i ≥ 1, then from Lemma 2 we have

(C_i(S_1) − C_j(S_2)) / (x^a r^b) = p_i (λ(k − (1 + t)^a ((r + 1)/r)^b) − (k − (1 + λt)^a ((r + 1)/r)^b)) ≥ 0

since t = w_r p_i / x > 0, k = 1, and l = (r + 1)/r ≥ 1. That is, in an optimal schedule for the 1/LE_pos/C_max problem, if p_i ≤ p_j, then job i is scheduled before job j. This completes the proof of (a).


Next, we give a proof of (b) as follows. Let J_s be the first job in π_2. Then

C_s(S_1) = C_i(S_1) + p_s (1 + w_r p_j + w_{r+1} p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 2)^b

and

C_s(S_2) = C_j(S_2) + p_s (1 + w_r p_i + w_{r+1} p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 2)^b.

So

C_s(S_1) − C_s(S_2) = C_i(S_1) − C_j(S_2) + p_s (r + 2)^b ((1 + w_r p_j + w_{r+1} p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a − (1 + w_r p_i + w_{r+1} p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a).  (12)

If p_i ≤ p_j, then C_i(S_1) − C_j(S_2) ≥ 0 from the result of (a). Since (w_r p_j + w_{r+1} p_i) ≤ (w_r p_i + w_{r+1} p_j) and a ≤ 0, the second term on the right-hand side of (12) is greater than or equal to zero. Therefore, C_s(S_2) ≤ C_s(S_1).

Similarly, we can show that the completion time of any job in π_2 in S_2 is not later than that in S_1. Therefore, the makespan of S_2 is not greater than that of S_1. Repeating this interchange argument for all the jobs not sequenced in the SPT order yields the theorem. □
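The interchange argument above can be checked numerically. The sketch below (our illustration with made-up data, not the paper's code) enumerates all sequences of a small instance and confirms that the SPT order attains the minimum makespan under model (4):

```python
from itertools import permutations

def makespan(seq, p, w, a, b):
    # Makespan under model (4); a, b <= 0 and non-decreasing weights w.
    cmax, s = 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        cmax += p[j] * (1.0 + s) ** a * r ** b
        s += w[r - 1] * p[j]
    return cmax

p = [4.0, 1.5, 3.0, 2.5]        # made-up normal processing times
w = [0.2, 0.4, 0.6, 0.8]        # non-decreasing position weights
a, b = -0.4, -0.3               # learning indices
spt = sorted(range(len(p)), key=lambda j: p[j])
best = min(makespan(s, p, w, a, b) for s in permutations(range(len(p))))
assert makespan(spt, p, w, a, b) <= best + 1e-12   # Theorem 1 on this instance
```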

2.2 Total completion time minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the total completion time, i.e., the 1/LE_pos/∑C_i problem.

Theorem 2 For the 1/LE_pos/∑C_i problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of p_i, i.e., in the SPT order.

Proof We use the same notation as in the proof of Theorem 1. To prove that the total completion time of the 1/LE_pos/∑C_i problem is minimized by the SPT rule (i.e., p_i ≤ p_j), it suffices to show that ∑_{k=1}^{n} C_k(S_2) ≤ ∑_{k=1}^{n} C_k(S_1).

By the proof of Theorem 1, we have C_j(S_2) ≤ C_i(S_1) and C_m(S_2) ≤ C_m(S_1) for any job J_m in π_2. Since C_h(S_1) = C_h(S_2) and C_k(S_1) = C_k(S_2) for any job J_k in π_1, we only need to show that C_i(S_2) ≤ C_j(S_1).


Taking the difference between C_i(S_2) and C_j(S_1), we have

C_j(S_1) − C_i(S_2) = (C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b) − (C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b) ≥ 0.

Repeating this interchange argument for all the jobs not sequenced in the SPT order yields the theorem. □

Corollary 1 For the 1/LE_pos/∑C_i^k problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of p_i, i.e., in the SPT order.

Proof Since k is a positive integer, Corollary 1 follows immediately from Theorem 2. □
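Theorem 2 and Corollary 1 admit the same kind of brute-force check (our illustration with made-up data, not the paper's code): on a small instance, the SPT order minimizes both ∑C_i and, for example, ∑C_i²:

```python
from itertools import permutations

def sum_completion_powers(seq, p, w, a, b, power=1):
    # Sum of (completion time)**power under model (4).
    total, finish, s = 0.0, 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        finish += p[j] * (1.0 + s) ** a * r ** b
        total += finish ** power
        s += w[r - 1] * p[j]
    return total

p = [4.0, 1.5, 3.0, 2.5]        # made-up normal processing times
w = [0.2, 0.4, 0.6, 0.8]        # non-decreasing position weights
a, b = -0.4, -0.3
spt = sorted(range(len(p)), key=lambda j: p[j])
for k in (1, 2):                # k = 1: Theorem 2; k = 2: Corollary 1
    best = min(sum_completion_powers(s, p, w, a, b, k) for s in permutations(range(len(p))))
    assert sum_completion_powers(spt, p, w, a, b, k) <= best + 1e-12
```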

2.3 Total tardiness minimization

In this section we study the single-machine scheduling problem under the general learning effect model (4) to minimize the total tardiness, i.e., the 1/LE_pos/∑T_i problem.

Theorem 3 For the 1/LE_pos/∑T_i problem, there exists an optimal schedule in which the jobs are sequenced in non-decreasing order of d_i, i.e., in the earliest due-date (EDD) order, if the job processing times and due-dates are agreeable, i.e., d_i ≤ d_j implies p_i ≤ p_j.

Proof We use the same notation as in the proof of Theorem 1. To prove that the total tardiness of the 1/LE_pos/∑T_i problem is minimized by the EDD rule (i.e., d_i ≤ d_j) under the agreeable condition, it suffices to show that (a) T_i(S_2) + T_j(S_2) ≤ T_j(S_1) + T_i(S_1) and (b) ∑_{J_k∈π_2} T_k(S_2) ≤ ∑_{J_k∈π_2} T_k(S_1).

Part (a) guarantees that the total tardiness of J_i and J_j in S_2 is not greater than that in S_1. Part (b) guarantees that the total tardiness of all the jobs scheduled after the pair of jobs J_i and J_j in S_2 is not greater than that in S_1. If d_i ≤ d_j implies p_i ≤ p_j, the result of part (b) can be easily obtained from the result of Theorem 1. Therefore, we only need to show that part (a) holds.

The tardiness of job i and job j in S_1 are

T_j(S_1) = max{C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − d_j, 0}

and

T_i(S_1) = max{C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_i, 0}.

Similarly, the tardiness of job i and job j in S_2 are

T_i(S_2) = max{C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − d_i, 0}

and

T_j(S_2) = max{C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_j, 0}.

There are four possible cases for the relationship between the completion times and due-dates of J_i and J_j in S_1, as follows:

Case I. C_i(S_1) ≤ d_i and C_j(S_1) ≤ d_j. In this case, from Theorem 1, swapping J_i and J_j does not change the relationship between the completion times and due-dates of J_i and J_j, i.e., C_i(S_2) ≤ d_i and C_j(S_2) ≤ d_j. Therefore, the total tardiness of J_i and J_j in S_2 is not greater than that in S_1; more precisely, the total tardiness (zero) of J_i and J_j in S_2 is equal to that in S_1.

Case II. C_i(S_1) ≥ d_i and C_j(S_1) ≤ d_j. In this case, after swapping J_i and J_j, if C_i(S_2) ≥ d_i, then C_j(S_2) ≤ d_j or C_j(S_2) ≥ d_j. If C_i(S_2) ≤ d_i, then C_j(S_2) ≤ d_j or C_j(S_2) ≥ d_j.

Case III. C_i(S_1) ≤ d_i and C_j(S_1) ≥ d_j. Clearly, since C_j(S_1) ≤ C_i(S_1), this case does not exist under the condition d_i ≤ d_j.

Case IV. C_i(S_1) ≥ d_i and C_j(S_1) ≥ d_j. In this case, after swapping J_i and J_j, if C_i(S_2) ≥ d_i, then C_j(S_2) ≤ d_j or C_j(S_2) ≥ d_j. If C_i(S_2) ≤ d_i, then C_j(S_2) ≤ d_j or C_j(S_2) ≥ d_j.

Based on the above analysis, we only need to consider Cases II and IV to show that T_i(S_2) + T_j(S_2) ≤ T_j(S_1) + T_i(S_1). Besides, we only need to consider the situation of C_i(S_2) ≥ d_i and C_j(S_2) ≥ d_j in the two cases since it is the most restrictive situation compared with the other situations. In the following we show that T_i(S_2) + T_j(S_2) ≤ T_j(S_1) + T_i(S_1) in Cases II and IV.

Case II. C_i(S_1) ≥ d_i and C_j(S_1) ≤ d_j versus C_i(S_2) ≥ d_i and C_j(S_2) ≥ d_j:

T_j(S_1) + T_i(S_1) − (T_i(S_2) + T_j(S_2))
 = C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_i
 − {C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − d_i + C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_j}
 = (p_j − p_i)(1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − C_h(S_2) + d_j.  (13)

From (6) in Theorem 1, the sum of the first three terms on the right-hand side of (13) is greater than or equal to zero for p_i ≤ p_j. Also, under C_j(S_1) ≤ d_j, we have

d_j ≥ C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b ≥ C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b.

Thus, T_j(S_1) + T_i(S_1) − (T_i(S_2) + T_j(S_2)) ≥ 0.

Case IV. C_i(S_1) ≥ d_i and C_j(S_1) ≥ d_j versus C_i(S_2) ≥ d_i and C_j(S_2) ≥ d_j:

T_j(S_1) + T_i(S_1) − (T_i(S_2) + T_j(S_2))
 = C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_i + C_h(S_1) + p_j (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − d_j
 − {C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b − d_i + C_h(S_2) + p_i (1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − d_j}
 = (p_j − p_i)(1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + (p_j − p_i)(1 + ∑_{k=1}^{r-1} w_k p_{[k]})^a r^b + p_i (1 + w_r p_j + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b − p_j (1 + w_r p_i + ∑_{k=1}^{r-1} w_k p_{[k]})^a (r + 1)^b.  (14)

By the argument used for (13), since p_i ≤ p_j, the right-hand side of (14) is greater than or equal to zero. Thus, T_j(S_1) + T_i(S_1) − (T_i(S_2) + T_j(S_2)) ≥ 0. Repeating this interchange argument for all the jobs not sequenced in the EDD order yields the theorem. □
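Theorem 3 can likewise be checked by brute force on a small agreeable instance. This is our illustration, not the paper's code; the data are made up, with p and d sorted together so that d_i ≤ d_j implies p_i ≤ p_j:

```python
from itertools import permutations

def total_tardiness(seq, p, d, w, a, b):
    # Sum of max(C_j - d_j, 0) under model (4).
    tard, finish, s = 0.0, 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        finish += p[j] * (1.0 + s) ** a * r ** b
        tard += max(finish - d[j], 0.0)
        s += w[r - 1] * p[j]
    return tard

p = [1.0, 2.0, 3.0, 4.0]        # agreeable instance: both lists ascending
d = [1.5, 3.0, 4.0, 6.0]
w, a, b = [0.3, 0.5, 0.7, 0.9], -0.5, -0.2
edd = sorted(range(len(d)), key=lambda j: d[j])
best = min(total_tardiness(s, p, d, w, a, b) for s in permutations(range(len(p))))
assert total_tardiness(edd, p, d, w, a, b) <= best + 1e-12   # Theorem 3 on this instance
```

Without the agreeable condition, EDD can fail even in classical total-tardiness scheduling, so the sorted-together data above are essential to the check.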


3 Some flowshop scheduling problems

The flowshop scheduling problem is described as follows. Assume that there are n jobs J_1, J_2, . . . , J_n to be processed on m machines M_1, M_2, . . . , M_m. J_j consists of m operations (O_{1j}, O_{2j}, . . . , O_{mj}). Operation O_{ij} must be processed on machine M_i, i = 1, 2, . . . , m. The processing of O_{i+1,j} may start only after O_{ij} has been completed, and all the machines process the jobs in the same order, i.e., a permutation schedule. The jobs are non-preemptive. Each machine can handle at most one job at a time and cannot stand idle until the last job assigned to it has finished processing. Let p_{ijr} be the actual processing time of J_j (j = 1, 2, . . . , n) on M_i (i = 1, 2, . . . , m) if it is scheduled in position r of a sequence. As before, the position-weighted learning effect model for the flowshop scheduling problem is given as follows:

p_{ijr} = p_{ij} (1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b,  j, r = 1, 2, . . . , n; i = 1, 2, . . . , m,  (15)

where p_{ij} is the normal processing time of O_{ij}.

For a given schedule π, let C_{ij} = C_{ij}(π) denote the completion time of O_{ij}, C_j = C_{mj}(π) denote the completion time of J_j, and π = ([1], [2], . . . , [n]) represent a permutation of (1, 2, . . . , n), where [j] denotes the job in the jth position of π.

3.1 Flowshop problems with dominating machines

We first consider a special case of the m-machine flowshop problem with dominating machines. Following van den Nouweland et al. [13] and Yang and Chern [14], we say that machine M_r is dominated by M_k (denoted by M_r ≺· M_k) if and only if max{p_{rj} | j = 1, 2, . . . , n} ≤ min{p_{kj} | j = 1, 2, . . . , n}. We study the flowshop scheduling problem with an increasing series of dominating machines (idm), i.e., M_1 ≺· M_2 ≺· · · · ≺· M_m. Note that this special case is the same as that considered in Wang and Xia [15].

For a given schedule π = (J_1, J_2, . . . , J_n), since M_1 ≺· M_2 ≺· · · · ≺· M_m, O_{i,j+1} can start processing immediately after O_{ij} for each i ≥ 1 and j ≥ 1. We have

C_1 = ∑_{i=1}^{m} p_{i1},
C_2 = C_1 + p_{m2} (1 + w_1 p_{m1})^a 2^b,
. . . ,
C_i = C_1 + ∑_{j=2}^{i} p_{mj} (1 + ∑_{k=1}^{j-1} w_k p_{mk})^a j^b,  (16)
. . . ,
C_n = C_1 + ∑_{j=2}^{n} p_{mj} (1 + ∑_{k=1}^{j-1} w_k p_{mk})^a j^b.  (17)


The makespan is given by (17). The first term on the right-hand side of (17) is a constant (if J_1 is fixed, then C_1 = ∑_{i=1}^{m} p_{i1}). Meanwhile, ∑_{j=2}^{n} p_{mj} (1 + ∑_{k=1}^{j-1} w_k p_{mk})^a j^b is minimized by a strategy similar to that in the proof of Theorem 1. Also, the proofs of Theorems 5 and 6 are similar to those of Theorems 2 and 3, respectively. Thus, Theorems 4 to 6 follow immediately.

Theorem 4 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, idm/C_max, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of p_{mj} except the first processed job.

Theorem 5 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, idm/∑C_i, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of p_{mj} except the first processed job.

Theorem 6 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, idm/∑T_i, if the jobs satisfy the agreeable condition that p_{mi} ≤ p_{mj} implies d_i ≤ d_j for all jobs i and j, then there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of d_j except the first processed job.
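Equation (17) and Theorem 4 can be illustrated numerically. The sketch below is ours, with a made-up idm instance (each machine's largest normal time does not exceed the next machine's smallest); it computes the makespan from (17) and confirms that an optimal sequence exists whose jobs after the first are sorted by their machine-m times:

```python
from itertools import permutations

def idm_makespan(order, pt, w, a, b):
    # Eq. (17): C_n = sum_i p_{i1} + sum_{j=2..n} p_{m[j]} (1 + sum_{k<j} w_k p_{m[k]})**a * j**b.
    m = len(pt)                                       # pt[i][j]: normal time of job j on machine i
    cmax = sum(pt[i][order[0]] for i in range(m))     # C_1: the first job crosses all machines
    s = w[0] * pt[m - 1][order[0]]
    for pos in range(2, len(order) + 1):
        j = order[pos - 1]
        cmax += pt[m - 1][j] * (1.0 + s) ** a * pos ** b
        s += w[pos - 1] * pt[m - 1][j]
    return cmax

pt = [[1.0, 1.5, 2.0, 1.2],     # machine 1 (made-up idm instance:
      [3.0, 3.5, 4.0, 3.2],     #  each row's max <= next row's min)
      [6.0, 6.5, 7.0, 6.2]]     # machine 3 dominates
w, a, b = [0.2, 0.4, 0.6, 0.8], -0.4, -0.3
n = 4
best = min(idm_makespan(o, pt, w, a, b) for o in permutations(range(n)))
best_sorted_tail = min(
    idm_makespan([f] + sorted(set(range(n)) - {f}, key=lambda j: pt[-1][j]), pt, w, a, b)
    for f in range(n))
assert abs(best - best_sorted_tail) < 1e-12           # Theorem 4 on this instance
```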

3.2 Flowshop problems with identical processing times on all the machines

Now we consider another special case of the flowshop scheduling problem with identical processing times on all the machines, i.e., p_{ij} = p_j [16]. The completion time C_{[j]} is

C_{[j]}(π) = ∑_{k=1}^{j} p_{[k]} + (m − 1) max{p_{[1]}, p_{[2]}, . . . , p_{[j]}}.  (18)

Hence, under the proposed learning model, the completion time C_{[j]} of J_{[j]} is

C_{[j]}(π) = ∑_{l=1}^{j} p_{[l]} (1 + ∑_{k=1}^{l-1} w_k p_{[k]})^a l^b + (m − 1) max{p_{[1]}, p_{[2]}(1 + w_1 p_{[1]})^a 2^b, . . . , p_{[j]}(1 + ∑_{k=1}^{j-1} w_k p_{[k]})^a j^b}.  (19)

In the following, the proofs of Theorems 7 to 9 are similar to those of Theorems 1 to 3, respectively. Thus, we omit the proofs.

Theorem 7 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, p_{ij} = p_j/C_max, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of p_j.

Theorem 8 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, p_{ij} = p_j/∑C_i, there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of p_j.

Theorem 9 For the problem Fm/p_{ijr} = p_{ij}(1 + ∑_{k=1}^{r-1} w_k p_{i[k]})^a r^b, p_{ij} = p_j/∑T_i, if the jobs satisfy the agreeable condition that p_{mi} ≤ p_{mj} implies d_i ≤ d_j for all jobs i and j, then there exists an optimal schedule in which all the jobs are sequenced in non-decreasing order of d_j.
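Theorem 7 can be checked against formula (19): with identical processing times on all machines, the makespan is C_{[n]}, i.e., the sum of the actual processing times plus (m − 1) times the largest actual processing time. The sketch and data below are our made-up illustration, not the paper's code:

```python
from itertools import permutations

def makespan_identical(seq, p, w, a, b, m):
    # Eq. (19) with j = n: sum of actual times plus (m - 1) * largest actual time.
    total, s, biggest = 0.0, 0.0, 0.0
    for r, j in enumerate(seq, start=1):
        actual = p[j] * (1.0 + s) ** a * r ** b
        total += actual
        biggest = max(biggest, actual)
        s += w[r - 1] * p[j]
    return total + (m - 1) * biggest

p = [2.0, 1.0, 3.0, 1.5]        # made-up normal processing times
w = [0.3, 0.5, 0.7, 0.9]        # non-decreasing position weights
a, b, m = -0.5, -0.2, 3
spt = sorted(range(len(p)), key=lambda j: p[j])
best = min(makespan_identical(s, p, w, a, b, m) for s in permutations(range(len(p))))
assert makespan_identical(spt, p, w, a, b, m) <= best + 1e-12   # Theorem 7 on this instance
```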

4 Conclusions

In this paper we study scheduling problems under a position-weighted learning effect model. We propose this learning model based on the following facts: (1) a worker's learning effect on a job depends not only on the total processing time of the jobs that he has processed but also on the job position, and (2) in the early stage of processing a given set of similar jobs, a worker is not familiar with the operations, so the learning effect on the jobs scheduled early is not apparent. We provide optimal solutions for the single-machine problems to minimize the makespan and the total completion time, and an optimal solution for the single-machine problem to minimize the total tardiness under an agreeable situation. We also consider two special cases of the flowshop problem. For future research, other learning effect models related to realistic situations should be studied for different scheduling problems.

Acknowledgments We are grateful to two anonymous referees for their helpful comments on earlier versions of our paper. This research was supported in part by the National Science Council of Taiwan, Republic of China, under grant number NSC-98-2221-E-150-032. Professor Yang is currently a Fulbright visiting scholar at Oregon State University and was supported in part by the National Science Council of Taiwan, Republic of China, under grant number NSC-99-2221-E-150-034-MY2.

References

1. Bachman, A., Janiak, A.: Scheduling jobs with position-dependent processing times. J. Oper. Res. Soc. 55, 257–264 (2004)

2. Biskup, D.: Single-machine scheduling with learning considerations. Eur. J. Oper. Res. 115, 173–178 (1999)

3. Biskup, D., Simons, D.: Common due date scheduling with autonomous and induced learning. Eur. J. Oper. Res. 159, 606–616 (2004)

4. Cheng, T.C.E., Wang, G.: Single machine scheduling with learning effect considerations. Ann. Oper. Res. 98, 273–290 (2000)

5. Mosheiov, G., Sidney, J.B.: Scheduling with general job-dependent learning curves. Eur. J. Oper. Res. 147, 665–670 (2003)

6. Kuo, W.H., Yang, D.L.: Minimizing the total completion time in a single machine scheduling problem with a time-dependent learning effect. Eur. J. Oper. Res. 174, 1184–1190 (2006)

7. Koulamas, C., Kyparisis, G.J.: Single-machine and two-machine flowshop scheduling with general learning functions. Eur. J. Oper. Res. 178, 402–407 (2007)

8. Biskup, D.: A state-of-the-art review on scheduling with learning effects. Eur. J. Oper. Res. 188, 315–329 (2008)

9. Wu, C.C., Lee, W.C.: Single-machine scheduling problems with a learning effect. Appl. Math. Model. 32, 1191–1197 (2008)

10. Cheng, T.C.E., Wu, C.C., Lee, W.C.: Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects. Inf. Sci. 178, 2476–2487 (2008)

11. Yin, Y., Xu, D., Sun, K., Li, H.: Some scheduling problems with general position-dependent and time-dependent learning effects. Inf. Sci. 179, 2416–2425 (2009)

12. Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G.: Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discret. Math. 5, 287–326 (1979)

13. van den Nouweland, A., Krabbenborg, M., Potters, J.: Flowshops with a dominant machine. Eur. J. Oper. Res. 62, 38–46 (1992)

14. Yang, D.L., Chern, M.S.: A two-machine flowshop sequencing problem with limited waiting time constraints. Comput. Ind. Eng. 28, 63–70 (1995)

15. Wang, J.B., Xia, Z.Q.: Flow-shop scheduling with a learning effect. J. Oper. Res. Soc. 56, 1325–1330 (2005)

16. Pinedo, M.: Scheduling: Theory, Algorithms, and Systems. Prentice-Hall, Englewood Cliffs (1995)
