8

Click here to load reader

A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Embed Size (px)

Citation preview

Page 1: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Computers & Industrial Engineering 60 (2011) 534–541

Contents lists available at ScienceDirect

Computers & Industrial Engineering

journal homepage: www.elsevier .com/ locate/caie

A two-agent single-machine scheduling problem with truncatedsum-of-processing-times-based learning considerations q

T.C.E. Cheng a, Shuenn-Ren Cheng b, Wen-Hung Wu c, Peng-Hsiang Hsu d, Chin-Chia Wu d,⇑a Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kongb Graduate Institute of Business Administration, Cheng Shiu University, Kaohsiung County, Taiwanc Department of Business Administration, Kang-Ning Junior College, Taipei, Taiwand Department of Statistics, Feng Chia University, Taichung, Taiwan

a r t i c l e i n f o

Article history:Received 28 July 2010Received in revised form 15 December 2010Accepted 17 December 2010Available online 24 December 2010

Keywords:SchedulingTwo-agentSimulated annealingTruncated sum-of-processing-times-basedlearning

0360-8352/$ - see front matter � 2010 Elsevier Ltd. Adoi:10.1016/j.cie.2010.12.008

q This manuscript was handled by area editor T.C. E⇑ Corresponding author.

E-mail address: [email protected] (C.-C. Wu).

a b s t r a c t

Scheduling with learning effects has received a lot of research attention lately. By learning effect, wemean that job processing times can be shortened through the repeated processing of similar tasks. Onthe other hand, different entities (agents) interact to perform their respective tasks, negotiating amongone another for the usage of common resources over time. However, research in the multi-agent settingis relatively limited. Meanwhile, the actual processing time of a job under an uncontrolled learning effectwill drop to zero precipitously as the number of jobs increases or a job with a long processing time exists.Motivated by these observations, we consider a two-agent scheduling problem in which the actual pro-cessing time of a job in a schedule is a function of the sum-of-processing-times-based learning and a con-trol parameter of the learning function. The objective is to minimize the total weighted completion timeof the jobs of the first agent with the restriction that no tardy job is allowed for the second agent. Wedevelop a branch-and-bound and three simulated annealing algorithms to solve the problem. Computa-tional results show that the proposed algorithms are efficient in producing near-optimal solutions.

� 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Scheduling research often assumes that all the job processingtimes are known and fixed before the jobs are processed (Pinedo,2002; Smith, 1956). However, in many real-life situations, the effi-ciency of the production facilities (e.g., machines or workers) im-proves continuously over time. As a result, the production timeof a given product is shorter if it is scheduled (and processed) later.For instance, Biskup (1999) remarks that the repeated processingof similar tasks improves workers’ skills because workers are ableto perform setups, operate hardware and software, and handle rawmaterials and components at a faster pace as they gain experienceover time. This phenomenon is known as the ‘‘learning effect’’ inthe literature.

Biskup (1999) and Cheng and Wang (2000) were pioneers whointroduced Wright’s (1936) learning curve into the field of sched-uling. Many researchers have since expended vast amounts of ef-fort on this relatively young but vivid area of schedulingresearch. We refer the reader to the papers by Mosheiov (2001a,2001b), Mosheiov and Sidney (2003), Bachman and Janiak (2004),Lee and Wu (2004), Wang and Xia (2005), Kuo and Yang (2006),

ll rights reserved.

dwin Cheng.

Koulamas and Kyparisis (2007), Wu, Lee, and Wang (2007), Cheng,Wu, and Lee (2008), Eren (2009), Toksari, Oron, and Güner (2009),Janiak and Rudek (2008), Eren and Guner (2008), Janiak, Janiak,Rudek, and Wielgus (2009), Lee, Wu, and Liu (2009), Sun (2009),Wang and Liu (2009), Yin, Xu, Sun, and Li (2009), etc. In addition,Biskup (2008) provides a comprehensive review of scheduling re-search with learning considerations. For more recent results, thereader may refer to Cheng, Wu, and Lee (2010), Janiak and Rudek(2010), Ji and Cheng (2010), Koulamas (2010), Okołowski andGawiejnowicz (2010), Yang and Yang (2010), Wang, Wang, andWang (2010), Wu and Liu (2010), Wu, Hsu, Chen, and Wang(2010), etc.

On the other hand, another basic assumption in classical sched-uling theory is that only a single optimality criterion is used toevaluate schedules. Contrary to this, jobs might come from severalagents that have different requirements to meet. Agnetis, Mirchan-dani, Pacciarelli, and Pacifici (2004) point out that multiple-agentscompete on the usage of a common processing resource in differ-ent application environments and different methodological fields,such as artificial intelligence, decision theory, operations research,etc. One major stream of research in this context is related to mul-ti-agent systems, i.e., systems in which different entities (agents)interact to perform their respective tasks, negotiating among oneanother for the usage of common resources over time. Baker andSmith (2003) and Agnetis et al. (2004) introduce the concept of

Page 2: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541 535

agent scheduling. Other results in the area include Yuan, Shang,and Feng (2005), Cheng, Ng, and Yuan (2006), Ng, Cheng, and Yuan(2006), Agnetis, Pacciarelli, and Pacifici (2007), and Liu and Tang(2008), among others. As for multi-agent systems applied in man-ufacturing systems, the reader could may refer to Papakostas,Mourtzis, Bechrakis, Chryssolouris, and Doukas (1999) and Chrys-solouris, Papakostas, and Mourtzis (2000); for the use of multiplecriteria in scheduling problems in the manufacturing domain, toChryssolouris, Pierce, and Dicke (1992); and for the use of dis-patching rules in single-machine scheduling problems, to Gianne-los, Papakostas, Mourtzis, and Chryssolouris (2007) and Papakostasand Chryssolouris (2009).

However, scheduling research in the multiple-agent setting withlearning considerations is relatively limited. Moreover, it is notedthat the actual processing time of a job under an uncontrolled learn-ing effect will drop to zero precipitously as the number of the jobsalready processed increases or jobs with long processing times exist.Motivated by these limitations, we consider in this paper a two-agent scheduling problem with a truncated sum-of-processing-times-based learning effect, where the objective is to minimize thetotal weighted completion time of the jobs of the first agent, withthe restriction that no tardy job is allowed for the second agent.

The remainder of this paper is organized as follows: We presentthe problem formulation in the next section. In Section 3 we devel-op some dominance properties and a lower bound to enhance thesearch efficiency for the optimal solution, followed by descriptionsof a branch-and-bound algorithm and three simulated annealingalgorithms. We present the results of computational experimentsto test the performance of the proposed algorithms in Section 4.We conclude the paper in the last section.

2. Problem statement

We formulate the scheduling problem under study as follows:There are n jobs ready to be processed on a single-machine. The jobsbelong to one of two-agents AG0 and AG1. Associated with each job Jj,there is a normal processing time pj, a weight wj, a due date dj, and anagent code Ij, where Ij = 0 if Jj e AG0 and Ij = 1 if Jj e AG1. We assumethat all the jobs have a common learning threshold b, beyond whichlearning becomes insignificant. Let J[i] denote the job scheduled inthe ith position of a given schedule. Then the actual processing time

of Jj is pj max 1þPr�1

l¼1 p½l�� �a

; bn o

if it is scheduled in the rth position

in a schedule, where a is the learning index with a < 0 and b is thetruncation parameter with 0 < b < 1. (Note that the proposed modelwithout agent and learning considerations has been considered byKuo and Yang (2006).) For a given schedule S, let Cj(S) be the comple-tion time of Jj, Tj(S) = max {0, Cj(S) � dj} be the tardiness of Jj, andUj(S) = 1 if Tj(S) > 0 and zero otherwise. Our objective is to find anoptimal schedule to minimize

Pnj¼1wjCjðSÞð1� IjÞ, subject to

Pnj¼1

UjðSÞIj ¼ 0.

3. Branch-and-bound and simulated annealing algorithms

The problem is NP-hard because Ng et al. (2006) show that theproblem is NP-hard even without learning effects. In this sectionwe apply the branch-and-bound technique to search for the opti-mal solution. In order to facilitate the searching process, we firstdevelop some dominance rules, followed by a lower bound tofathom the search tree. We then present the branch-and-boundand three simulated annealing algorithms.

3.1. Dominance properties

Assume that schedule S1 has two adjacent jobs Ji and Jj with Ji

immediately preceding Jj, Ji, and Jj are in the rth and (r + 1)th

positions in S1, respectively, and t is the starting of Ji in S1. Weperform a pairwise interchange of Ji and Jj to generate a new sche-dule S2. We have the following results.

Property 1. For any two jobs Ji and Jj e AG0 to be scheduledconsecutively, if pj

pi> 1 P wj

wi, then S1 dominates S2.

Proof. Since jobs Ji and Jj 2 AG0, to show S1 dominates S2, we haveto show that Cj(S1) 6 Ci(S2) and wiCi (S1) + wjCj (S1) 6 wj Cj(S2)+wiCi

(S2). By pj

pi> 1 P wj

wi, we have pi < pj and wi 6wj.The completion

times of jobs Ji and Jj in S1 and S2 are, respectively,

CiðS1Þ¼ tþpi max 1þXr�1

l¼1

p½l�

!a

;b

( ); ð1Þ

CjðS1Þ¼ tþpi max 1þXr�1

l¼1

p½l�

!a

;b

( )þpj max 1þ

Xr�1

l¼1

p½l� þpi

!a

;b

( ); ð2Þ

CjðS2Þ¼ tþpj max 1þXr�1

l¼1

p½l�

!a

;b

( ); ð3Þ

and

CiðS2Þ ¼ t þ pj max 1þXr�1

l¼1

p½l�

!a

;b

( )

þ pi max 1þXr�1

l¼1

p½l� þ pj

!a

;b

( ): ð4Þ

Taking the difference between Equations (2) and (4), we have

CiðS2Þ � CjðS1Þ ¼ ðpj � piÞmax 1þXr�1

l¼1

p½l�

!a

;b

( )

þ pi max 1þXr�1

l¼1

p½l� þ pj

!a

;b

( )

� pj max 1þXr�1

l¼1

p½l� þ pi

!a

;b

( ):

Now we consider the following cases.

Case 1: b P 1þPr�1

l¼1 p½l�� �a

: Then CiðS2Þ � CjðS1Þ ¼ ðpj � piÞbþpib� pjb ¼ 0.

Case 2: 1þPr�1

l¼1 p½l�� �a

> b P 1þPr�1

l¼1 p½l� þ pi

� �a. Then

CiðS2Þ � CjðS1Þ ¼ ðpj � piÞ 1þXr�1

l¼1

p½l�

!a

þ pib� pjb

¼ ðpj � piÞ 1þXr�1

l¼1

p½l�

!a

� b

!> 0:

Case 3: 1þPr�1

l¼1 p½l� þ pi

� �a> b P 1þ

Pr�1l¼1 p½l� þ pj

� �a. Then

CiðS2Þ � CjðS1Þ ¼ ðpj � piÞ 1þXr�1

l¼1

p½l�

!a

þ pib� pj 1þXr�1

l¼1

p½l� þ pi

!a

P ðpj � piÞ 1þXr�1

l¼1

p½l�

!a

þ pi 1þXr�1

l¼1

p½l� þ pj

!a

� pj 1þXr�1

l¼1

p½l� þ pi

!a

: ð5Þ

Page 3: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

536 T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541

Substituting h = pj/pi, u ¼ 1þPr�1

l¼1 p½l�� �

, and y ¼ piPn

l¼1pl

� �into (5)

and simplifying, we obtain

ðpj�piÞ 1þXr�1

l¼1

pjlj

!a

þpi 1þXr�1

l¼1

pjlj þpj

!a

�pj 1þXr�1

l¼1

pjlj þpi

!a

¼piua½ðh�1Þþð1þh yÞa�hð1þyÞa�: ð6Þ

Taking the derivative of (6) with respective to h and noting that{h > 1}, a < 0, and y > 0, we have piu

a½ðh� 1Þ þ ð1þ h yÞa�hð1þ yÞa� > 0 so CiðS2Þ � CjðS1Þ > 0.

Case 4: 1þPr�1

l¼1 pjlj þ pj

� �a> b. Then

CiðS2Þ � CjðS1Þ ¼ ðpj � piÞ 1þXr�1

l¼1

pjlj

!a

þ pi 1þXr�1

l¼1

pjlj þ pj

!a

� pj 1þXr�1

l¼1

pjlj þ pi

!a

:

Analogous to the proof of Case 3, we have CiðS2Þ � CjðS1Þ > 0.Therefore, in any case, we have CiðS2Þ � CjðS1Þ > 0. Now, frompi < pj, it is clear that CjðS2Þ > CiðS1Þ. Thus, bywi P wj, we have

wiCiðS2Þ þwjCjðS2Þ �wiCiðS1Þ �wjCjðS1ÞP wiCiðS1Þ þwjCjðS1Þ �wiCiðS1Þ �wjCjðS1Þ¼ ðwi �wjÞðCiðS1Þ � CjðS1ÞÞP 0:

Consequently, S1 dominates S2.’’

Property 2. For any two jobs Ji and Jj e AG1 to be scheduledconsecutively, if pi < pj

t þ pi max 1þXr�1

l¼1

p½l�

!a

; b

( )þ pj max 1þ

Xr�1

l¼1

p½l� þ pi

!a

; b

( )< dj;

and t þ pi max 1þPr�1

l¼1 p½l�� �a

; bn o

< di, then S1 dominates S2.

Property 3. For job Ji e AG0 and job Jj e AG1 to be scheduled consec-utively, if pi < pj and

t þ pi max 1þXr�1

l¼1

p½l�

!a

; b

( )þ pj max 1þ

Xr�1

l¼1

p½l� þ pi

!a

; b

( )< dj;

then S1 dominates S2.

Property 4. For job Ji e AG1 and job Jj e AG0 to be scheduled consec-utively, if

pi max 1þXr�1

l¼1

p½l�

!a

; b

( )< pj max 1þ

Xr�1

l¼1

p½l�

!a

;b

( )(

�max 1þXr�1

l¼1

p½l� þ pi

!a

;b

( ))

and t þ pi max 1þPr�1

l¼1 p½l�� �a

; bn o

< di, then S1 dominates S2.

The proofs of Properties 2, 3, 4 are similar to that of Property 1and are omitted.

Next, we present a property to determine the feasibility of apartial schedule. Let (p, pc) be a schedule of jobs, where p is thescheduled portion with k jobs and pc is the unscheduled portion.Moreover, let C[k] be the completion time of the last job in p. Also,let p0 and p00 denote the unscheduled jobs in AG0 arranged in theweighted smallest processing times (WSPT) order and theunscheduled jobs in AG1 arranged in the earliest due date rule(EDD) order, respectively.

Property 5. If there is a job Jj e AG1 \ pc such that C[k] > dj, thenschedule (p, pc) is not a feasible solution.

Proof. Since C[k] > dj, one job of AG1 among the unscheduled jobsmust be tardy. So (p, pc) is not a feasible solution. h

Property 6. If there is a job Jj e AG1 \ pc such that

C½k� þ pj max 1þPk

l¼1p½l�� �a

; bn o

> dj, then schedule (p, pc) is not

a feasible solution.

Proof. Similar to Property 5. h

Property 7. If all the unscheduled jobs belong to AG0 and

1þPk

l¼1p½l�� �a

6 b, then schedule (p, pc) is dominated by schedule

(p, p0).

Proof. Since 1þPk

l¼1p½l�� �a

6 b, the completion times of all the

unscheduled jobs are independent of the learning rate a. So the

WSPT rule yields an optimal sub-schedule (Pinedo 2002; Smith

1956). h

Property 8. If all the unscheduled jobs belong to AG1 and

1þPk

l¼1p½l�� �a

6 b, then schedule (p, pc) is dominated by schedule

ðp;p00Þ.

Proof. Similar to Property 7. h

3.2. Lower bound

Let PS be a partial schedule in which the order of the first kjobs is determined and US be the unscheduled portion with(n � k) jobs, where n0 jobs belong to AG0 and n1 jobs belong toAG1 such that n0 + n1 = n � k. Moreover, let C[k] be the completiontime of the last job in PS. The completion time of the (k + 1)thjob is

C ½kþ1� ¼ C½k� þ p½kþ1� max 1þXk

l¼1

p½l�

!a

;b

!

P C ½k� þ pðkþ1Þ max 1þXk

l¼1

p½l�

!a

;b

!;

where pðkþ1Þ 6 pðkþ2Þ 6 : � � � 6 pðnÞ denotes the normal processingtimes of the (n � k) unscheduled jobs arranged in a non-decreasingorder. Similarly, the completion time of the (k + j)th job is

C½kþj� P C ½k� þ pðkþ1Þ max 1þXk

l¼1

p½l�

!a

;b

!þXj

i¼2

pðkþiÞ

� max 1þXk

l¼1

p½l� þ pðkþi�1Þ

!a

; b

!; for 1

6 j 6 n� k:

Based on the idea of Lee, Wang, Shiau, and Wu (2010), Lee,Chen, Chen, and Wu (2011), we see that the remaining problemis to assign job completion times to the jobs of agents AG0 andAG1. Since the jobs of agent AG1 cannot be tardy, the guiding prin-ciple is to assign the completion times to the jobs of agent AG1 aslate as possible without violating the non-tardiness constraint.The procedure is as follows:

Page 4: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Table 1Performance of the branch-and-bound and SA algorithms.

n s R a b Branch-and-bound algorithm Error percentages

Node CPU time IS SA1 SA2 SA3 Combine

Mean Max Mean Max Mean Max Mean Max Mean Max Mean Max

16 0.25 0.50 �0.1 0.25 41,372 1,034,972 2.59 58.09 100 0.20 3.55 0.50 3.57 0.18 3.72 0.07 1.290.50 54,332 1,157,404 3.36 72.05 100 0.19 6.06 0.39 2.91 0.22 2.91 0.06 1.950.75 183,751 9,053,368 8.92 393.81 100 0.29 3.73 1.47 44.24 0.24 4.06 0.04 0.50

�0.2 0.25 28,653 1,650,115 1.80 97.23 100 1.13 10.66 0.58 5.00 0.51 6.93 0.14 4.090.50 28,976 395,453 1.90 24.27 100 0.93 12.25 0.67 4.92 0.33 6.96 0.09 2.500.75 83,327 2,818,181 4.54 148.64 100 0.88 36.27 0.88 23.71 0.42 4.50 0.05 0.85

�0.3 0.25 73,623 6,000,073 4.46 355.70 100 2.12 16.05 0.33 3.40 1.09 11.10 0.07 0.840.50 27,072 1,068,121 1.74 64.08 100 1.24 13.77 0.60 5.79 0.71 7.81 0.07 2.170.75 112,958 7,124,614 4.45 221.50 100 0.77 18.90 3.31 76.88 0.22 4.43 0.07 2.71

0.75 �0.1 0.25 35,600 675,278 2.28 41.00 100 0.16 1.86 0.50 6.34 0.19 2.47 0.04 0.570.50 35,909 1,241,385 2.13 61.50 100 0.22 4.41 0.45 4.82 0.26 2.87 0.04 1.000.75 317,193 8,294,439 11.59 294.94 100 0.80 18.35 2.44 34.41 0.31 6.51 0.11 2.08

�0.2 0.25 12,591 169,385 0.83 11.33 100 1.25 13.93 0.59 6.61 0.47 4.42 0.08 1.620.50 53,731 2,305,813 3.39 138.36 100 0.89 17.14 0.80 8.60 0.41 4.25 0.12 4.020.75 81,527 2,355,550 3.94 83.03 100 0.70 20.24 2.83 56.34 0.41 9.25 0.14 4.76

�0.3 0.25 12,917 570,298 0.82 32.55 100 1.79 15.51 0.41 5.26 0.70 8.71 0.11 1.830.50 61,831 2,141,906 3.57 109.20 100 1.64 14.19 0.39 6.78 0.91 8.99 0.16 3.710.75 420,286 19,185,481 14.43 484.81 100 1.33 20.84 4.34 54.72 0.49 7.73 0.17 4.80

0.50 0.50 �0.1 0.25 468,251 8,409,383 14.52 211.94 100 2.27 41.31 7.49 48.82 0.80 17.81 0.55 17.810.50 349,690 6,108,137 12.55 215.30 100 0.97 16.93 6.76 97.75 0.35 4.15 0.18 4.150.75 1,375,375 37,937,895 39.34 1037.31 100 5.79 74.00 18.72 93.01 1.41 17.84 0.64 12.98

�0.2 0.25 31,896 1,085,615 1.98 63.30 100 1.48 12.00 1.03 15.27 0.55 7.26 0.18 3.360.50 242,259 5,069,248 8.52 146.33 100 0.96 12.69 3.35 33.93 0.47 5.23 0.20 4.440.75 1,228,454 50,229,263 35.78 1499.59 100 5.07 41.37 20.20 229.02 0.78 13.08 0.28 3.92

�0.3 0.25 16,348 401,221 1.02 21.75 100 2.33 16.02 0.39 3.22 0.92 13.12 0.09 2.220.50 122,568 3,025,201 5.31 123.55 100 2.03 19.43 5.61 82.98 0.56 7.15 0.24 4.050.75 5,196,247 355,421,493 138.24 9076.67 100 4.35 37.95 14.74 89.57 1.29 16.02 0.45 7.39

0.75 �0.1 0.25 159,963 2,932,377 5.33 83.16 100 2.20 37.11 7.74 48.54 0.60 11.28 0.45 11.280.50 133,532 3,305,862 4.62 122.33 100 1.54 14.10 6.32 75.58 0.52 9.00 0.29 5.990.75 420,365 9,527,778 14.37 352.61 100 3.06 29.37 10.80 48.58 0.68 8.27 0.49 8.26

�0.2 0.25 53,174 894,001 2.63 47.84 100 1.35 16.02 1.52 18.82 0.62 6.08 0.21 3.530.50 65,525 1,873,201 2.50 57.94 100 1.85 21.21 5.24 71.26 0.57 10.16 0.40 10.020.75 211,262 5,168,910 6.71 196.03 100 3.52 27.11 10.01 126.41 1.49 27.35 0.71 15.25

�0.3 0.25 114,270 6,511,044 5.35 247.13 100 2.51 15.68 0.98 30.62 0.85 10.07 0.17 2.980.50 353,401 20,424,187 10.93 571.83 100 2.83 31.41 4.86 26.55 0.77 6.50 0.32 6.500.75 294,208 12,253,715 9.87 495.20 100 3.67 28.99 11.35 52.38 0.49 9.55 0.34 9.55

20 0.25 0.50 �0.1 0.25 1,639,877 28,678,005 192.61 3416.42 100 0.29 3.73 0.46 4.28 0.17 2.17 0.06 0.760.50 1,765,004 63,077,233 206.45 7123.88 100 0.26 5.12 0.48 2.20 0.20 2.06 0.06 0.980.75 10,993,502 479,181,735 940.69 44,260.18 100 0.41 14.06 1.62 27.39 0.37 10.82 0.08 1.44

�0.2 0.25 706,752 18,001,751 80.23 1810.72 100 0.92 9.32 0.81 6.57 0.51 3.99 0.16 1.710.50 2,665,740 117,286,062 285.09 11,648.52 100 0.83 7.73 0.76 5.53 0.40 4.15 0.13 2.130.75 1,306,702 24,971,292 155.34 2823.21 99 0.57 11.91 1.71 56.43 0.42 9.30 0.09 1.67

�0.3 0.25 618,623 36,925,967 65.79 3574.01 100 1.84 17.64 0.72 3.89 0.90 14.01 0.14 2.420.50 1,338,837 58,004,144 156.39 6695.39 99 0.95 16.15 0.52 5.14 0.36 4.05 0.07 1.330.75 3,045,051 67,766,193 349.47 8016.05 99 0.60 14.22 1.13 40.80 0.20 3.19 0.06 1.18

0.75 �0.1 0.25 1,871,180 56,275,887 190.49 4730.98 100 0.20 1.83 0.77 14.84 0.18 2.02 0.05 0.820.50 3,022,464 97,774,478 326.74 10,393.05 100 0.25 2.24 1.02 23.00 0.23 2.37 0.07 0.870.75 4,579,524 110,952,881 413.12 11,290.25 100 0.69 25.15 4.60 103.69 0.24 2.67 0.09 2.39

�0.2 0.25 764,876 14,563,066 84.93 1579.86 100 0.91 8.24 0.77 4.87 0.55 7.99 0.15 2.610.50 1,014,844 31,687,368 112.10 3330.91 100 0.57 5.24 0.77 5.28 0.32 3.64 0.10 1.630.75 14,379,357 730,528,076 862.97 35,558.84 99 0.82 26.02 3.82 54.31 0.33 6.01 0.13 1.82

�0.3 0.25 2,147,783 118,717,692 217.45 11,565.75 100 1.78 20.29 0.79 6.37 0.61 6.19 0.14 2.640.50 633,362 24,662,704 71.72 2603.13 100 1.26 15.06 0.57 4.61 0.69 7.74 0.13 4.090.75 2,538,725 55,366,451 239.83 6154.84 99 0.87 12.77 4.74 114.21 0.44 3.95 0.14 2.11

0.50 0.50 �0.1 0.25 25,100,443 504,402,254 1391.74 24,313.05 98 1.32 15.51 6.95 49.68 0.37 7.77 0.23 4.640.50 52,416,259 768,642,770 2566.12 41,075.19 98 1.20 17.75 6.66 78.35 0.51 12.33 0.23 3.740.75 96,116,358 833,955,472 4677.45 48,259.94 95 4.92 30.87 15.41 93.82 1.50 21.77 0.86 8.08

�0.2 0.25 714,603 18,229,476 76.44 1734.52 100 0.90 9.19 0.77 6.78 0.48 4.18 0.16 2.570.50 6,411,402 224,320,443 549.23 20,005.06 100 1.27 20.44 3.36 25.77 0.60 7.28 0.29 7.220.75 107,667,751 901,139,037 5272.63 52,144.63 95 3.70 30.34 19.74 118.06 1.80 33.29 0.84 19.59

�0.3 0.25 619,380 36,925,967 62.11 3369.09 100 1.71 16.89 0.73 7.07 0.83 14.14 0.16 2.580.50 14,465,467 658,390,215 869.69 31,799.28 98 1.81 53.96 4.13 41.02 0.60 11.48 0.13 2.430.75 103,279,139 891,042,542 4812.87 42,884.19 89 4.45 30.91 21.41 136.18 1.13 19.19 0.59 9.66

0.75 �0.1 0.25 8,223,904 240,586,054 478.09 15,368.52 99 1.13 12.09 6.16 47.63 0.38 3.34 0.20 2.440.50 9,310,349 454,771,145 530.42 26,703.70 99 2.00 26.08 7.02 34.26 0.83 26.49 0.61 24.080.75 30,052,130 728,958,698 1473.30 30,869.50 99 2.78 26.80 11.19 50.13 0.50 15.10 0.41 14.17

�0.2 0.25 2,941,986 131,218,326 218.30 8903.00 100 1.44 18.05 2.62 26.54 0.47 6.14 0.16 2.150.50 7,019,878 120,661,130 415.40 8169.28 100 1.67 10.85 9.22 50.21 0.90 18.58 0.46 7.28

(continued on next page)

T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541 537

Page 5: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Table 1 (continued)

n s R a b Branch-and-bound algorithm Error percentages

Node CPU time IS SA1 SA2 SA3 Combine

Mean Max Mean Max Mean Max Mean Max Mean Max Mean Max

0.75 22,475,242 782,699,762 1231.85 54,342.42 99 3.23 63.70 12.72 73.32 0.92 12.95 0.54 10.70�0.3 0.25 1,118,574 40,572,116 116.65 3744.53 100 1.89 13.32 1.23 13.62 0.89 15.00 0.23 3.51

0.50 2,344,447 55,668,362 157.24 3172.66 100 2.48 31.11 6.44 53.86 0.96 15.79 0.54 15.790.75 31,971,495 941,615,448 1715.80 63,834.22 100 2.40 20.29 13.59 99.32 0.82 5.50 0.62 5.50

538 T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541

Algorithm

Step 1. Set

bC ½kþj� ¼ C ½k� þ pðkþ1Þ max 1þXk

l¼1

p½l�

!a

; b

!þXj

i¼2

pðkþiÞ

� max 1þXk

l¼1

p½l� þ pðkþi�1Þ

!a

;b

!

for 1 6 j 6 n� k.Step 2. Sort the jobs of agent. AG0 in non-decreasing order of their

weights and the jobs of agent AG1 in non-decreasing orderof their due dates. That is, w0

ð1Þ 6 w0ð2Þ 6 � � � 6 w0

ðn0Þ andd1ð1Þ 6 d1

ð2Þ 6 � � � 6 d1ðn1Þ.

Step 3. Set ic ¼ n, ia ¼ n0, and ib ¼ n1.Step 4. If ic 6 n� k, go to Step 7.Step 5. If bC ½ic� 6 d1

ðibÞ, set C1ðibÞ ¼ bC ½ic� and ib ¼ ib� 1.

Step 6. Otherwise, set C0ðiaÞ ¼ bC ½ic� and ia ¼ ia� 1.

Step 7. Set ic ¼ ic � 1, go to Step 4.Step 8. Compute LBUS ¼

Pn0j¼1w0

ðn0�jþ1ÞC0ðjÞ.

Therefore, a lower bound for partial sequence PS is

LB ¼Xj2PS

w½j�C½j�ð1� IjÞ þXn0

j¼1

w0ðn0�jþ1ÞC

0ðjÞ: ð6Þ

3.3. Simulated annealing algorithms

Simulated annealing (SA), proposed by Kirkpatrick, S,. Gelatt, C.,Vecchi, and M. (1983), is one of the most popular meta-heuristicmethods widely applied to solve combinatorial optimization prob-lems. This approach has the advantage of avoiding getting a solu-tion trapped in a local optimum. This is due to the hill climbingmoves in the SA approach, which are governed by a control param-eter. In the following we utilize the SA approach to derive near-optimal solutions for our problem. We apply three SA algorithmsas follows:

Step 1. Initial sequence: We use the same initial sequence for thethree SA algorithms. We arrange the jobs of agent AG0 inthe WSPT order and the jobs of agent AG1 in the EDD order.In order to guarantee the generation of at least one feasiblesolution, we append the partial schedule of the jobs ofagent AG0 in the WSPT order immediately after the partialschedule of the jobs of agent AG1 in the EDD order.

Step 2. Neighbourhood generation: Neighborhood generationplays an important role in determining the efficiency ofSA algorithms. In this paper we adopt three neighbour-hood generations in the algorithms. The first one is thebackward-shifted reinsertion neighbourhood generationmethod (denoted by SA1), the second one is the forward-shifted reinsertion neighbourhood generation method

(denoted by SA2), and the third one is the pairwise inter-change (PI) neighbourhood generation method (denotedby SA3). During the exchanging process, if a new schedulegenerated is not feasible, we regenerate another schedule.

Step 3. Acceptance probability: When a new schedule is gener-ated, it is accepted if its objective value is smaller than thatof the original schedule; otherwise, it is accepted withsome probability that decreases as the process evolves.The probability of acceptance is generated from an expo-nential distribution as follows:

PðacceptÞ ¼ expð�a� DTCÞ;

where a is a control parameter and DTC is the change in theobjective value. In addition, we use the method suggestedby Ben-Arieh and Maimon (1992) to change a in the kth iter-ation as follows:

a ¼ kd;

where d is an experimental constant. After preliminary tri-als, we used d = 2 in our experiments.If the total completion time increases as a result of a ran-dom pairwise interchange of jobs, the new schedule is ac-cepted when P(accept) > r, where r is randomly sampledfrom the uniform distribution U(0, 1).

Step 4. Stopping condition: Our preliminary trials indicate thatthe quality of the schedule is stable after 100n iterations,where n is the number of jobs.

4. Computational experiments

We conducted computational experiments to assess the effi-ciency of the proposed branch-and-bound algorithm and theaccuracy of the SA algorithms. We coded the algorithms in Fortranand performed the experiments on a PC with an Intel(R) Core(TM)2Quad 2.66 GHz CPU and 4 GB RAM under Windows Vista. We fol-lowed Fisher’s (1971) framework for the experimental design.We generated job processing times from a uniform distributionover the range of integers 1–100. We generated the weights ofthe jobs of the first agent from another uniform distribution overthe range of integers 1–100. We generated the due dates of the jobsof the second agent from a uniform distribution over the range ofintegers T(1 � s � R/2) to T(1 � s + R/2), where s is the tardinessfactor, R is the due date range, and T is the sum of the processingtimes of all the jobs, i.e. T ¼

Pni¼1pi: The combination of (s, R) took

the values (0.25, 0.5), (0.25, 0.75), (0.5, 0.5), and (0.5, 0.75). In addi-tion, we chose the value for the proportion of the jobs of agent AG2

independently of s and R at 0.5 in the experiments.The experiments consisted of two parts. In the first part, we

fixed the number of jobs at n = 16 and 20, and set the values ofthe truncated control parameter b at 0.25, 0.5, and 0.75. For eachproblem size, we set the learning effect at �0.1, �0.2, and �0.3,respectively. As a result, we examined 72 experimental conditionsin the first part. For each situation, we randomly generated 100replications, so we tested a total of 7200 instances. We conducted

Page 6: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Table 2Performances of the SA algorithms for large-sized problems.

n s R a b Cpu time Relative deviation percentage

SA1 SA2 SA1 SA2 SA1 SA2

Mean Max Mean Max Mean Max Mean Max Mean Max Mean Max

50 0.25 0.50 �0.1 0.25 0.0325 0.0469 0.0329 0.0469 0.0463 0.0469 0.0754 1.0737 1.0188 4.1374 0.2615 1.33990.50 0.0316 0.0469 0.0330 0.0469 0.0469 0.0469 0.0788 1.1552 0.9887 2.5263 0.2165 1.06230.75 0.0349 0.0469 0.0491 0.0781 0.0489 0.0625 0.0871 1.9823 0.9432 7.5438 0.1576 1.0113

�0.2 0.25 0.0321 0.0625 0.0325 0.0469 0.0467 0.0469 0.1878 1.9184 1.0749 3.5754 0.2618 2.17060.50 0.0327 0.0469 0.0333 0.0469 0.0467 0.0469 0.1286 1.8030 1.0681 3.3934 0.2122 1.55810.75 0.0354 0.0469 0.0481 0.0625 0.0488 0.0625 0.0926 0.9000 1.1658 25.3334 0.1627 0.8757

�0.3 0.25 0.0318 0.0469 0.0324 0.0469 0.0463 0.0469 0.5488 4.3768 1.2166 3.2919 0.2364 6.33740.50 0.0325 0.0625 0.0324 0.0469 0.0469 0.0469 0.1596 2.5717 0.9382 3.1066 0.2614 1.63060.75 0.0360 0.0469 0.0486 0.0781 0.0499 0.0781 0.1065 1.8591 2.0114 45.4442 0.1657 2.0087

0.75 �0.1 0.25 0.0335 0.0469 0.0413 0.0469 0.0466 0.0625 0.0786 0.7911 0.8345 2.8030 0.1708 0.99330.50 0.0332 0.0469 0.0433 0.0781 0.0456 0.0781 0.0921 0.8477 0.8860 3.2415 0.1672 0.84260.75 0.0379 0.0469 0.0687 0.0938 0.0591 0.0781 0.1993 2.2535 3.9219 37.8842 0.1474 0.9976

�0.2 0.25 0.0322 0.0469 0.0325 0.0469 0.0466 0.0469 0.1996 2.1226 1.1328 3.0698 0.2163 2.01480.50 0.0335 0.0469 0.0408 0.0469 0.0463 0.0469 0.1006 1.3954 0.9493 3.5139 0.2173 1.32620.75 0.0374 0.0625 0.0672 0.0938 0.0574 0.0781 0.1213 1.2694 2.6528 35.5329 0.1905 1.9281

�0.3 0.25 0.0322 0.0469 0.0322 0.0469 0.0464 0.0469 0.4413 6.8132 1.2328 3.3835 0.2341 2.49090.50 0.0335 0.0625 0.0383 0.0469 0.0469 0.0469 0.2681 3.0246 0.9615 2.7078 0.2045 1.16180.75 0.0379 0.0469 0.0680 0.0938 0.0575 0.0781 0.2730 3.0075 3.2884 23.2478 0.1748 2.4174

0.50 0.50 �0.1 0.25 0.0383 0.0625 0.0813 0.1094 0.0634 0.0938 0.2562 3.1998 4.7661 22.4208 0.1407 1.27220.50 0.0393 0.0469 0.0817 0.1094 0.0652 0.0938 0.2204 2.1306 5.5019 38.3013 0.1561 1.37600.75 0.0481 0.0625 0.1161 0.1562 0.0939 0.1094 2.9788 13.4516 37.3758 130.3988 0.5626 11.9796

�0.2 0.25 0.0327 0.0625 0.0346 0.0469 0.0453 0.0625 0.1617 2.1907 1.0756 3.8786 0.2177 1.53960.50 0.0375 0.0469 0.0731 0.1094 0.0600 0.0938 0.2257 2.3471 3.5955 27.2985 0.1453 1.48300.75 0.0481 0.0781 0.1096 0.1562 0.0913 0.1250 3.8436 27.5185 42.4336 129.0029 0.3936 16.2764

�0.3 0.25 0.0324 0.0469 0.0325 0.0469 0.0464 0.0469 0.5062 6.0909 1.2702 4.4967 0.1625 2.31070.50 0.0379 0.0625 0.0702 0.0938 0.0589 0.0781 0.3483 4.9125 3.4867 52.5690 0.1895 1.59070.75 0.0480 0.0625 0.1071 0.1406 0.0917 0.1094 3.2292 23.4106 41.3347 95.1660 0.3413 6.0056

0.75 �0.1 0.25 0.0416 0.0469 0.1322 0.1875 0.0797 0.1094 1.1079 4.9869 11.5827 40.3518 0.0408 0.67360.50 0.0430 0.0625 0.1350 0.2188 0.0822 0.1094 1.4353 6.9373 12.5044 52.8190 0.0402 0.99510.75 0.0489 0.0625 0.1802 0.2500 0.1077 0.1562 2.2537 11.3152 21.7872 58.3045 0.1482 5.3190

�0.2 0.25 0.0350 0.0469 0.0755 0.1250 0.0561 0.0781 0.2802 4.0466 1.7363 12.2687 0.1284 1.08460.50 0.0410 0.0469 0.1233 0.1875 0.0772 0.1094 0.8735 6.3516 11.2904 30.5798 0.1387 1.79930.75 0.0491 0.0625 0.1767 0.2812 0.1074 0.1562 3.2248 15.0282 23.9163 66.1977 0.0540 3.1279

�0.3 0.25 0.0344 0.0469 0.0625 0.1094 0.0497 0.0625 0.5036 3.3414 1.1389 9.8702 0.1910 2.05260.50 0.0411 0.0469 0.1166 0.1875 0.0747 0.0938 0.8839 6.0671 10.4755 34.3194 0.1434 3.24710.75 0.0491 0.0625 0.1714 0.2500 0.1056 0.1250 3.1648 14.0709 26.9224 79.4859 0.1563 3.4753

100 0.25 0.50 �0.1 0.25 0.1262 0.1406 0.1286 0.1406 0.1653 0.1719 0.0410 0.6094 1.2594 2.5205 0.1918 0.84800.50 0.1267 0.1406 0.1291 0.1406 0.1664 0.1875 0.0393 0.7049 1.3083 3.2739 0.1542 0.80870.75 0.1395 0.1562 0.1969 0.2188 0.1937 0.2188 0.0624 0.7427 2.0631 29.9709 0.1569 0.9123

�0.2 0.25 0.1270 0.1406 0.1289 0.1406 0.1656 0.1719 0.0848 0.7992 1.2729 3.3366 0.1832 0.83000.50 0.1275 0.1562 0.1291 0.1406 0.1658 0.1719 0.0478 0.7551 1.3705 2.9898 0.2192 1.24190.75 0.1392 0.1562 0.1911 0.2344 0.1922 0.2031 0.0469 0.4916 3.2161 70.6646 0.1428 0.7026

�0.3 0.25 0.1270 0.1406 0.1291 0.1406 0.1675 0.2031 0.1716 1.0766 1.4827 3.4681 0.1918 1.42940.50 0.1267 0.1406 0.1286 0.1406 0.1667 0.1719 0.0578 0.5629 1.2213 3.2781 0.1936 1.20370.75 0.1400 0.1562 0.1950 0.2188 0.1939 0.2188 0.0776 0.5966 2.0434 32.8231 0.1684 0.6168

0.75 �0.1 0.25 0.1284 0.1406 0.1511 0.1875 0.1702 0.1875 0.0400 0.4788 1.0353 2.2452 0.1781 0.75750.50 0.1297 0.1406 0.1653 0.1875 0.1738 0.1875 0.0468 0.4420 1.0149 2.4412 0.2144 1.05060.75 0.1448 0.1562 0.2767 0.3594 0.2300 0.2656 0.0995 0.7181 11.1859 63.6066 0.1202 0.8368

�0.2 0.25 0.1277 0.1562 0.1291 0.1406 0.1664 0.1875 0.0847 0.7215 1.3337 3.1820 0.1966 0.84200.50 0.1298 0.1406 0.1598 0.1875 0.1733 0.1875 0.0707 0.6664 1.2000 2.6042 0.2150 0.98740.75 0.1451 0.1719 0.2715 0.3281 0.2281 0.2656 0.1529 1.6747 8.3995 69.1243 0.1030 0.5893

�0.3 0.25 0.1269 0.1406 0.1289 0.1406 0.1664 0.1875 0.1502 1.2208 1.4261 3.1348 0.2040 1.12970.50 0.1301 0.1406 0.1584 0.1875 0.1731 0.1875 0.0519 0.8015 1.0301 2.2372 0.1885 0.68430.75 0.1456 0.1562 0.2686 0.3125 0.2283 0.2656 0.1107 1.1468 9.6472 58.8310 0.1423 1.1518

0.50 0.50 �0.1 0.25 0.1447 0.1875 0.3014 0.4062 0.2288 0.2656 0.1324 0.8333 4.4606 20.9338 0.0875 0.46700.50 0.1468 0.1562 0.3144 0.4062 0.2431 0.2969 0.1614 2.1813 9.5732 64.5627 0.0833 0.69590.75 0.1877 0.2188 0.4477 0.5938 0.3658 0.4219 4.9181 13.2657 77.6986 166.2627 0.0246 1.1359

�0.2 0.25 0.1281 0.1562 0.1300 0.1406 0.1669 0.1875 0.0755 0.8157 1.2069 2.8479 0.2024 0.87210.50 0.1456 0.1562 0.2926 0.3594 0.2345 0.2656 0.1455 0.9070 8.4477 53.7103 0.0979 0.97210.75 0.1872 0.2188 0.4441 0.5469 0.3602 0.4062 5.4369 18.0914 77.1668 144.2220 0.0614 3.9119

�0.3 0.25 0.1267 0.1406 0.1289 0.1406 0.1664 0.1719 0.1645 1.2864 1.4706 3.8167 0.1757 1.19480.50 0.1453 0.1719 0.2819 0.3438 0.2299 0.2812 0.1214 0.9850 7.7421 51.8402 0.1495 0.84670.75 0.1878 0.2188 0.4328 0.5781 0.3577 0.4062 4.4508 14.2525 74.5860 183.3257 0.0191 1.7385

0.75 �0.1 0.25 0.1553 0.1719 0.5894 0.7656 0.3048 0.3750 1.4433 5.3141 17.4359 56.9474 0.0218 0.85850.50 0.1575 0.1719 0.5916 0.8438 0.3144 0.4062 1.5622 5.5152 21.2670 84.5828 0.0131 0.35990.75 0.1923 0.2344 0.7614 0.9844 0.4242 0.5312 4.9796 13.8776 46.7157 95.5237 0.0000 0.0000

�0.2 0.25 0.1355 0.1562 0.3034 0.4375 0.2034 0.2656 0.1341 0.7831 1.8134 14.1391 0.0987 0.55830.50 0.1573 0.1875 0.5430 0.7344 0.3047 0.3906 1.7339 8.1440 22.6240 69.1175 0.0320 0.9124

(continued on next page)

T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541 539

Page 7: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

Table 2 (continued)

n s R a b Cpu time Relative deviation percentage

SA1 SA2 SA1 SA2 SA1 SA2

Mean Max Mean Max Mean Max Mean Max Mean Max Mean Max

0.75 0.1919 0.2344 0.7306 0.9844 0.4175 0.4844 5.3675 13.4818 47.4695 86.6160 0.0000 0.0000�0.3 0.25 0.1339 0.1406 0.2700 0.3750 0.1944 0.2344 0.1969 1.5981 1.2674 6.8964 0.1279 1.0480

0.50 0.1565 0.1875 0.5362 0.6562 0.3023 0.3594 1.5185 10.9253 22.5060 112.1412 0.0152 0.98610.75 0.1916 0.2188 0.7392 0.9688 0.4122 0.5156 4.9308 13.5457 43.1325 78.1853 0.0015 0.1515

540 T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541

these tests to study the efficiency of the dominance rules and thelower bound, and the performance of the branch-and-bound andSA algorithms. Table 1 summarizes the results.

For the branch-and-bound algorithm, we report the mean andmaximum execution times (in seconds) and the mean and maxi-mum numbers of nodes. For the SA algorithms, we report the meanand maximum error percentages. The error percentage of a solu-tion produced by an SA algorithm is calculated as

ðV � V�Þ=V� � 100%;

where V is the solution generated by the SA algorithm and V⁄ is thetotal weighted completion time of the optimal schedule from thebranch-and-bound algorithm. We did not record the computationaltimes of the SA algorithms because they all were fast in generatingsolutions.

We note from Table 1 that the branch-and-bound algorithm isterminated if the number of nodes is over 109, which is approxi-mately 10 h of execution time. The instances with few than 109

numbers of nodes are denoted as IS in the test sample, which arealso reported in Table 1. We see from Table 1 that the number ofnodes and the execution time of the branch-and-bound algorithmgrow exponentially as the number of jobs increases. The resultsshow that the problem is not significantly affected by the trun-cated control parameter b. But we observe that out of the 72 testcases, the numbers of case with a minimum mean node were 20,4, and 0 corresponding to b = 0.25, 0.5, and 0.75, respectively. Thisindicates that the problem becomes difficult to solve when b be-comes larger. The impact of the learning effect is not significant.But in the same way, we observe that out of the 72 test cases,the numbers of case with a minimum mean node were 2, 12, and10 corresponding to a = �0.1, �0.2, and �0.3, respectively. Thisimplies that the problem is hard to solve when the learning effectbecomes weaker. Moreover, the truncated learning effect does notaffect the performance of the three SA algorithms. In addition, wenote that the mean error percentages of SA1, SA2, and SA3 were<5.79%, 21.41%, and 1.80%, respectively, while the worst error per-centages of SA1, SA2, and SA3 were 74%, 229.02%, and 33.29%,respectively, for all the tested cases. However, we note from theperformance of the three proposed SA heuristics that there is nodominating heuristic among the three under test. We also observethat the mean and maximum error percentages of the combinedalgorithm that takes the best solution from these three algorithmsfurther reduced to 0.86% and 24%, respectively.

To further study the performance of the proposed SA algorithmsfor large-sized problems, we used two different numbers of jobs(n = 50 and 100) in the second part of the experiments. For a fixednumber of jobs, we set the truncated control parameter b at 0.25,0.5, and 0.75. For each situation, we randomly generated 100 repli-cations. As a result, we covered 72 experimental conditions andtested a total of 7200 instances. Table 2 summarizes the results.We report the mean and maximum execution times (in seconds)and the average and maximum relative deviation percentages(RDPs) of each SA algorithm. The RDP of an SA algorithm is given by

ðV � V0Þ=V0 � 100%;

where V0 is the minimum value of the total weighted completiontime among the three SA heuristics.

We observe that none of the SA heuristic is dominant since therelative deviation percentages were all greater than 0. We also ob-serve that changes in the values of s or R have no effects on the per-formance of the heuristics. We note that changes in the learningeffect or in the value of b do not affect the mean relative deviationpercentage. We also note that the mean CPU times (in seconds) ofSA1, SA2, and SA3 were less than 0.1923, 0.7614, and 0.4242, respec-tively, while the mean RDPs of SA1, SA2, and SA3 were 5.43%,77.70%, and 0.56%, respectively, for all the tested cases. In addition,there is no significant difference between the performance of thethree simulated annealing algorithms. Overall, we recommendthe combined algorithm, i.e., the algorithm that combines the threeSA algorithms, since it is stable and able to produce near-optimalsolutions.

5. Conclusions

In this paper we study a two-agent single-machine schedulingproblem with a truncated learning effect. The objective is to mini-mize the total weighted completion time of the jobs of the firstagent with the restriction that no tardy job is allowed for the sec-ond agent. We develop a branch-and-bound algorithm incorporat-ing several dominance properties and a lower bound to derive theoptimal solution. We also propose three simulated annealing algo-rithms to obtain near-optimal solutions. The computational resultsshow that with the help of the proposed heuristic initial solutions,the branch-and-bound algorithm performs well in terms of thenumber of nodes and execution time when the number of jobs isfewer than or equal to 20. Moreover, the computational experi-ments also show that the proposed simulated annealing algo-rithms perform well since the mean error percentages of thecombined algorithm were <0.86% for all the tested cases. The con-sideration of more different initial sequences and experimentalconstants (d) for the SA algorithms are interesting topics for futureresearch.

Acknowledgements

We are grateful to the Editor and two anonymous referees fortheir constructive comments on an earlier version of our paper.This paper was supported in part by the NSC under Grant No.NSC 99-2221-E-035-057-MY3.

References

Agnetis, A., Mirchandani, P. B., Pacciarelli, D., & Pacifici, A. (2004). Schedulingproblems with two competing agents. Operations Research, 52, 229–242.

Agnetis, A., Pacciarelli, D., & Pacifici, A. (2007). Multi-agent single-machinescheduling. Annals of Operations Research, 50, 3–15.

Baker, K. R., & Smith, J. C. (2003). A multiple-criterion model for machinescheduling. Journal of Scheduling, 6, 7–16.

Bachman, A., & Janiak, A. (2004). Scheduling jobs with position-dependentprocessing times. Journal of the Operational Research Society, 55, 257–264.

Page 8: A two-agent single-machine scheduling problem with truncated sum-of-processing-times-based learning considerations

T.C.E. Cheng et al. / Computers & Industrial Engineering 60 (2011) 534–541 541

Ben-Arieh, D., & Maimon, O. (1992). Annealing method for PCB assembly schedulingon two sequential machines. International Journal of Computer IntegratedManufacturing, 5, 361–367.

Biskup, D. (1999). Single-machine scheduling with learning considerations.European Journal of Operational Research, 115, 173–178.

Biskup, D. (2008). A state-of-the-art review on scheduling with learning effect.European Journal of Operational Research, 188, 315–329.

Cheng, T. C. E., & Wang, G. (2000). Single-machine scheduling with learning effectconsiderations. Annals of Operations Research, 98, 273–290.

Cheng, T. C. E., Wu, C. C., & Lee, W. C. (2008). Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects. InformationSciences, 178, 2476–2487.

Cheng, T. C. E., Ng, C. T., & Yuan, J. J. (2006). Multi-agent scheduling on a single-machine to minimize total weighted number of tardy jobs. Theoretical ComputerScience, 362, 273–281.

Cheng, T. C. E., Wu, C. C., & Lee, W. C. (2010). Scheduling problems withdeteriorating jobs and learning effects including proportional setup times.Computers and Industrial Engineering, 58(2), 326–331.

Chryssolouris, G., Pierce, J., & Dicke, K. (1992). A decision-making approach to theoperation of flexible manufacturing systems. International Journal of FlexibleManufacturing Systems, 4(3–4), 309–330.

Chryssolouris, G., Papakostas, N., & Mourtzis, D. (2000). A decision making approachfor nesting scheduling: A textile case. International Journal of ProductionResearch, 38(17), 4555–4564.

Eren, T. (2009). Minimizing the total weighted completion time on a single-machinescheduling with release dates and a learning effect. Applied Mathematics andComputation, 208, 355–358.

Eren, T., & Guner, E. (2008). A bi-criteria flowshop scheduling with a learning effect.Applied Mathematical Modelling, 32, 1719–1733.

Fisher, M. L. (1971). A dual algorithm for the one-machine scheduling problem.Mathematical Programming, 11, 229–251.

Giannelos, N., Papakostas, N., Mourtzis, D., & Chryssolouris, G. (2007). Dispatchingpolicy for manufacturing jobs and time-delay plots. International Journal ofComputer Integrated Manufacturing, 20(4), 329–337.

Janiak, A., & Rudek, R. (2008). A new approach to the learning effect: Beyondthe learning curve restrictions. Computers and Operations Research, 35,3727–3736.

Janiak, A., Janiak, W. A., Rudek, R., & Wielgus, A. (2009). Solution algorithms for themake span minimization problem with the general learning model. Computersand Industrial Engineering, 56(4), 1301–1308.

Janiak, A., & Rudek, R. (2010). A note on a makespan minimization problem with amulti-ability learning effect. Omega, 38(3–4), 213–217.

Ji, M., & Cheng, T. C. E. (2010). Scheduling with job-dependent learning effects andmultiple rate-modifying activities. Information Processing Letters, 110, 460–463.

Kirkpatrick, S., Gelatt, C., & Vecchi, M. (1983). Optimization by simulated annealing.Science, 220(4598), 671–680.

Koulamas, C., & Kyparisis, G. J. (2007). Single-machine and two-machine flowshopscheduling with general learning functions. European Journal of OperationalResearch, 178, 402–407.

Koulamas, C. (2010). A note on single-machine scheduling with job-dependentlearning effects. European Journal of Operational Research, 207(2),1142–1143.

Kuo, W. H., & Yang, D. L. (2006). Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect. EuropeanJournal of Operational Research, 174(2), 1184–1190.

Lee, W. C., & Wu, C. C. (2004). Minimizing total completion time in a two-machineflowshop with a learning effect. International Journal of Production Economics, 88,85–93.

Lee, W. C., Wu, C. C., & Liu, M. F. (2009). A single-machine bi-criterion learningscheduling problem with release times. Expert Systems with Applications, 36(7),10295–10303.

Lee, W. C., Wang, W. J., Shiau, Y. R., & Wu, C. C. (2010). A single-machine schedulingproblem with two-agent and deteriorating jobs. Applied Mathematical Modelling,34(10), 3098–3107.

Lee, W. C., Chen, S. K., Chen, C. W., & Wu, C. C. (2011). A two-machine flowshopproblem with two agents. Computers and Operations Research, 38, 98–104.

Liu, P., & Tang, L. (2008). Two-agent scheduling with linear deteriorating jobs on asingle-machine. Lecture Notes in Computer Science, 50(92), 642–650.

Mosheiov, G. (2001a). Scheduling problem with a learning effect. European Journalof Operational Research, 130, 638–652.

Mosheiov, G. (2001b). Parallel machine scheduling with a learning effect. Journal ofthe Operational Research Society, 52(10), 1165–1169.

Mosheiov, G., & Sidney, J. B. (2003). Scheduling with general job-dependent learningcurves. European Journal of Operational Research., 147(3), 665–670.

Ng, C. T., Cheng, T. C. E., & Yuan, J. J. (2006). A note on the complexity of the problemof two-agent scheduling on a single-machine. Journal of CombinatorialOptimization, 12, 387–394.

Okołowski, D., & Gawiejnowicz, S. (2010). Exact and heuristic algorithms forparallel-machine scheduling with DeJong’s learning effect. Computers andIndustrial Engineering, 59(2), 272–279.

Papakostas, N., Mourtzis, D., Bechrakis, K., Chryssolouris, G., Doukas, D. (1999). Aflexible agent based framework for manufacturing decision making. InProceedings of the 9th International Conference on Flexible Automation andIntelligent Manufacturing (FAIM1999), Tilburg, Netherlands (June 1999) (pp. 789–800). (ISBN:1-56700-133-5).

Papakostas, N., & Chryssolouris, G. (2009). A scheduling policy for improvingtardiness performance. Asian International Journal of Science and Technology,2(3), 79–89.

Pinedo, M. (2002). Scheduling: Theory, algorithms, and systems. Upper Saddle River,New Jersey: Prentice Hall.

Smith, W. E. (1956). Various optimizers for single state production. Naval ResearchLogistic Quarterly, 3, 59–66.

Sun, L. (2009). Single-machine scheduling problems with deteriorating jobs andlearning effects. Computers and Industrial Engineering, 57, 843–846.

Toksari, M. D., Oron, D., & Güner, E. (2009). Single-machine scheduling problemsunder the effects of nonlinear deterioration and time-dependent learning.Mathematical and Computer Modelling, 50, 401–406.

Wang, J. B., & Xia, Z. Q. (2005). Flowshop scheduling with a learning effect. Journal ofthe Operational Research Society, 56, 1325–1330.

Wang, J. B., & Liu, L. L. (2009). Two-machine flowshop problem with effects ofdeterioration and learning. Computers and Industrial Engineering, 57,1114–1121.

Wang, D., Wang, M. Z., & Wang, J. B. (2010). Single-machine scheduling withlearning effect and resource-dependent processing times. Computers andIndustrial Engineering, 59(3), 458–462.

Wright, T. P. (1936). Factors affecting the cost of airplanes. Journal of AeronauticalScience, 3, 122–128.

Wu, C. C., Lee, W. C., & Wang, W. C. (2007). A two-machine flowshop maximumtardiness scheduling problem with a learning effect. International Journal ofAdvanced Manufacturing Technology, 31, 743–750.

Wu, C. C., & Liu, C. L. (2010). Minimizing the makespan on a single-machine withlearning and unequal release times. Computers and Industrial Engineering, 59,419–424.

Wu, C.-C., Hsu, P.-H., Chen, J.-C., & Wang, N.-S. (2010). Genetic algorithm forminimizing the total weighted completion time scheduling problem withlearning and release times. Computers and Operations Research. doi:10.1016/j.cor.2010.11.001.

Yang, S. J., & Yang, D. L. (2010). Note on ‘‘A note on single-machine group schedulingproblems with position-based learning effect’’. Applied Mathematical Modelling,34(12), 4306–4308.

Yin, Y., Xu, D., Sun, K., & Li, H. (2009). Some scheduling problems with generalposition-dependent and time-dependent learning effects. Information Sciences,179, 2416–2425.

Yuan, J. J., Shang, W. P., & Feng, Q. (2005). A note on the scheduling with twofamilies of jobs. Journal of Scheduling, 8, 537–542.

Further reading

Wu, C. C., & Lee, W. C. (2009). A note on the total completion time problem in apermutation flowshop with a learning effect. European Journal of OperationalResearch, 192, 343–347.