
Robotics and Computer Integrated Manufacturing 18 (2002) 223–231

Unrelated parallel machine scheduling with setup times using simulated annealing

Dong-Won Kim a,*, Kyong-Hee Kim a, Wooseung Jang b, F. Frank Chen c

a Department of Industrial and Systems Engineering, Chonbuk National University, Chonbuk 561-756, South Korea
b Department of Industrial and Manufacturing Systems Engineering, University of Missouri-Columbia, USA

c Grado Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, USA

Abstract

This paper presents a scheduling problem for unrelated parallel machines with sequence-dependent setup times, using simulated annealing (SA). The problem involves allotting the work of L jobs to M unrelated parallel machines, where a job refers to a lot composed of N items. Jobs may contain different items, while every item within a job has an identical processing time and a common due date. Each machine has its own processing times according to the characteristics of the machine as well as the job type. Setup times are machine independent but job-sequence dependent. SA, a meta-heuristic, is employed in this study to determine a scheduling policy that minimizes total tardiness. The suggested SA method utilizes six job or item rearranging techniques to generate neighborhood solutions. The experimental analysis shows that the proposed SA method significantly outperforms a neighborhood search method in terms of total tardiness. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Parallel machine scheduling; Total tardiness; Simulated annealing

1. Introduction

Compound semiconductors are used for electronic components in information displays, mobile telecommunications, and wireless data communications. Demand for compound semiconductors is growing; in particular, demand for blue light emitting diodes (LEDs) has increased significantly because of their high added value. Compound semiconductor wafers are so thin and fragile compared to silicon wafers that they are very difficult to handle and often require complicated production operations [1]. The dicing operation in semiconductor wafer manufacturing is a major bottleneck since its processing times are longer than those of other operations. Furthermore, the dicing operation comes near the end of the manufacturing process and has a strong influence on the performance of the production system. Hence, the production efficiency of this operation significantly affects the overall production efficiency.

Given the high capital costs associated with typical semiconductor production equipment, it is far more important to design efficient scheduling plans based on current production capacity than to purchase additional capacity. Machines used in the dicing operation are non-identical; that is, their processing times vary according to age and manufacturer. Moreover, production machines must be adjusted whenever different types of wafers are diced, which means that different setup times are required depending on job sequences. The setup times are, however, machine independent. Wafers move as a lot and each machine can process one wafer at a time. Thus, this paper addresses an unrelated parallel machine scheduling problem that applies to the dicing process and to processes with similar characteristics.

There are many published papers on parallel machine scheduling problems, as surveyed by Cheng and Sin [2]. The common objectives studied in this area include the minimization of completion time, tardiness, and makespan. Karp [3] showed that even minimizing total tardiness on two identical machines is NP-hard. Ho and Chang [4] also showed that a similar parallel machine scheduling problem is NP-hard. Because of this difficulty, it is common and acceptable practice to look for a good heuristic rather than an optimal solution in complex parallel scheduling problems.

*Corresponding author.



Many researchers have studied identical parallel machine scheduling. Pourbabai [5] introduced the optimal batch size to minimize total tardiness. Schutten and Leussink [6] presented a branch-and-bound algorithm to minimize maximum lateness considering release dates, due dates, and family setup times. Akkiraju et al. [7] investigated identical parallel machine scheduling to minimize job tardiness. Balakrishnan et al. [8] studied uniform parallel machine scheduling with ready times and sequence-dependent setup times using a compact mathematical model for small-sized problems.

Several studies discuss unrelated parallel machine scheduling problems with the purpose of minimizing tardiness. Koulamas [9] reviewed existing heuristic approaches in this area. Suresh and Chaudhuri [10] and Adamopoulos and Pappis [11] suggested new heuristics for various due date and processing time combinations. Slowinski [12] and Lee and Guignard [13] considered different setup times on unrelated machines; both studies used Lagrangian functions and two-phase methods to find good solutions. Weng et al. [14] addressed a toll plaza repair scheduling problem with setup times to minimize the weighted mean completion time.

Family or batch scheduling models are increasingly being studied. Bruno and Sethi [15] introduced a task sequencing problem in a batch environment with setup times. Ghosh and Gupta [16] addressed a single machine batch scheduling problem to minimize maximum lateness. Liaee and Emmons [17] reviewed scheduling theory concerning the processing of several families of jobs on single or parallel facilities. Potts and Kovalyov [18] also reviewed a large body of literature on scheduling with batching decisions, organized by scheduling model and basic algorithm.

Recently, meta-heuristics developed for complicated combinatorial problems have been applied to scheduling. Tamimi and Rajan [19] used a Genetic algorithm to find a scheduling policy for identical parallel machines with setup times. Koulamas [20] used simulated annealing (SA) for an identical parallel machine scheduling problem, exchanging jobs assigned to machines using decomposition. Applications of Tabu search to parallel machine scheduling were reported by Suresh and Chaudhuri [21] and Armentano and Yamashita [22]. Park and Kim [23] compared SA and Tabu search with the objective of minimizing maintenance costs under ready time and due date constraints. Jozefowska et al. [24] also suggested and compared SA, Tabu search, and Genetic algorithm approaches.

We present an SA approach to the unrelated parallel machine scheduling problem. Based on the characteristics of our problem, six methods of neighborhood solution generation and appropriate parameter values for the SA are also discussed. The suggested SA approach is compared with a neighborhood search (NS) method to show its performance.

2. Problem definition and characteristics

Suppose there are M unrelated parallel machines and L jobs, where a job refers to a lot composed of N items. Each job may have different processing times depending on the assigned machine, but items in the same lot are assumed to have the same processing time on the same machine. We also assume that each machine can process one item at a time and that processing is non-preemptive. Items in a job lot may be processed on multiple machines. The completion time of the last item in a lot is the completion time of the lot.

The setup time may take different values depending on the job sequence; that is, setup times depend on both the job just completed and the job about to be processed. Because a job lot contains only identical items, setup times between these items are zero. We assume setup times obey the widely accepted triangle inequality [8,16]. We also assume each job lot has its own due date. The following notation is used to define our problem:

i        lot index
j        item index in a lot
k        machine index
L        number of lots
M        number of machines
N        number of items in each lot
c_ij     completion time of item j in lot i
C_i      completion time of the last item in lot i, $C_i = \max_{j=1,\dots,N} c_{ij}$
d_i      due date of lot i
T_i      tardiness of lot i, $T_i = \max\{C_i - d_i, 0\}$
σ_k      job sequence on machine k
l_k(i)   index of the lot in position i on machine k
n_k      number of lots assigned to machine k
p_ijk    processing time of item j of lot i on machine k
s_ijk    sequence-dependent setup time on machine k when lot i is followed by lot j; the setup times satisfy the triangle inequality $s_{ijk} + s_{jlk} \ge s_{ilk}$

The scheduling problem considered in this study is to minimize the total tardiness:

$$\text{Minimize} \sum_{i=1}^{L} \max(C_i - d_i, 0).$$

A solution is represented as a set $(\sigma_1, \dots, \sigma_k, \dots, \sigma_M)$, where $\sigma_k$ is the sequence of jobs on machine $k$. The sequence $\sigma_k$ can be represented by the ordered set of lots $\{l_k(1), l_k(2), \dots, l_k(n_k)\}$.
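As a concrete illustration of this objective (not the authors' implementation), the following Python sketch evaluates the total tardiness of a candidate solution. The data layout, with machine sequences as lists of (lot, item count) blocks and dictionaries for processing times, setups, and due dates, is our own illustrative assumption.

```python
# Illustrative sketch: evaluate total tardiness of a solution given as a job
# sequence per machine. Each machine sequence is a list of (lot, item_count)
# blocks; p[lot][machine] is the per-item processing time, and
# s[prev_lot][next_lot] the sequence-dependent, machine-independent setup time.

def total_tardiness(sequences, p, s, due):
    completion = {}                      # C_i: completion time of the last item of lot i
    for machine, seq in sequences.items():
        t = 0.0
        prev_lot = None
        for lot, n_items in seq:
            if prev_lot is not None and prev_lot != lot:
                t += s[prev_lot][lot]    # setup only when the lot type changes
            t += n_items * p[lot][machine]
            completion[lot] = max(completion.get(lot, 0.0), t)
            prev_lot = lot
    return sum(max(completion[lot] - due[lot], 0.0) for lot in completion)

# Example: two machines, two lots whose items are split across them.
p = {1: {0: 3.0, 1: 4.0}, 2: {0: 5.0, 1: 2.0}}
s = {1: {1: 0.0, 2: 6.0}, 2: {1: 6.0, 2: 0.0}}
due = {1: 10.0, 2: 12.0}
sequences = {0: [(1, 2), (2, 1)], 1: [(2, 3), (1, 1)]}
print(total_tardiness(sequences, p, s, due))
```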



3. Simulated annealing

SA was introduced by Kirkpatrick et al. [25] as a method for solving combinatorial optimization problems. Generally speaking, a search algorithm for combinatorial problems first generates neighborhood solutions from a current solution and then keeps moving to one of them until prespecified conditions are met. SA uses this repetitive improvement approach, but in particular it probabilistically allows deteriorating moves so that the search can escape from a local optimum.

SA begins with an initial solution (X), an initial temperature (T), and an iteration number (L). The temperature T controls the probability of accepting a deteriorating solution, and the iteration number L decides the number of repetitions until a solution reaches a stable state at that temperature. T can be interpreted as a flexibility index [27]: at high temperature (early in the search) there is considerable flexibility to move to a worse solution, while at lower temperature (later in the search) less of this flexibility exists.

A new neighborhood solution (Y) is generated, based on T and L, through a heuristic perturbation of the existing solution. The neighborhood solution Y becomes the new solution if the change in the objective function, $\Delta = C(Y) - C(X)$, is an improvement (i.e., $\Delta < 0$ for a minimization problem). Even when it is not an improvement, the neighborhood solution becomes the new solution with a probability based on $e^{-\Delta/T}$. This leaves open the possibility of escaping a local optimum and finding a globally optimal solution. The algorithm terminates if there is no change after L repetitions; otherwise, the iteration continues with a new temperature T.

Simulated annealing algorithm

Begin;
  INITIALIZE(X, T, L);
  Repeat
    For i = 1 to L do
      Y = PERTURB(X);   {generate a new neighborhood solution}
      Δ = C(Y) - C(X);
      If (C(Y) ≤ C(X)) or (exp(-Δ/T) > RANDOM(0, 1))
        Then X = Y;     {accept the movement}
      Endif
    Endfor;
    UPDATE(T, L);
  Until (Stop-Criterion)
End;
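The listing above can be turned into running code almost line for line. The sketch below is a minimal Python rendering of that loop; the geometric cooling update and the stop criteria are the ones discussed in Section 3.3, while `perturb` and `cost` are placeholders for the neighborhood generation and total tardiness evaluation described elsewhere in the paper, not the authors' code.

```python
import math
import random

def simulated_annealing(x0, cost, perturb, t0, epoch_length, alpha=0.95,
                        end_count=100, t_min=0.1):
    """Generic SA loop following the paper's pseudocode (illustrative sketch)."""
    x, best = x0, x0
    t = t0
    loops_without_improvement = 0
    while loops_without_improvement < end_count and t > t_min:
        improved = False
        for _ in range(epoch_length):          # inner loop: L iterations at fixed T
            y = perturb(x)                     # generate a neighborhood solution
            delta = cost(y) - cost(x)
            if delta <= 0 or math.exp(-delta / t) > random.random():
                x = y                          # accept the move (possibly worse)
            if cost(x) < cost(best):
                best, improved = x, True
        loops_without_improvement = 0 if improved else loops_without_improvement + 1
        t *= alpha                             # geometric cooling (Section 3.3.2)
    return best
```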

Several factors must be decided before SA can be applied to practical problems. First of all, it is necessary to define a procedure for generating neighborhood solutions from a current solution. In order to generate these solutions efficiently, parameters such as the initial temperature, the rate of temperature change, the number of repetitions, and the termination conditions must be chosen appropriately. The combination of these parameters needs to be tuned to the problem to achieve good solutions.

3.1. Initial solution

A rule similar to the earliest due date rule is used to obtain an initial solution as follows. The lot with the earliest due date is selected first, and its items are assigned to all the machines sequentially by machine index. After this allocation is finished, the items of the lot with the next earliest due date are assigned.
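A minimal sketch of this construction, under our own assumptions about the data layout: lots are sorted by due date and their items are dealt out to machines in round-robin order by machine index. The function and argument names are illustrative only.

```python
def initial_solution(lots, due, n_items, n_machines):
    """EDD-like construction (illustrative): assign the items of each lot,
    earliest due date first, to machines 0..M-1 sequentially by machine index."""
    sequences = {m: [] for m in range(n_machines)}
    for lot in sorted(lots, key=lambda i: due[i]):
        counts = [0] * n_machines
        for j in range(n_items[lot]):
            counts[j % n_machines] += 1        # deal the items out sequentially
        for m, c in enumerate(counts):
            if c > 0:
                sequences[m].append((lot, c))  # a contiguous block of c items of this lot
    return sequences
```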

3.2. Generation of neighborhood solutions

It is usually better to generate neighborhood solutions by manipulating lots rather than individual items, because items in the same lot share the same characteristics. In fact, we tested a rule that considered items rather than lots, but even with more search time its outcome was usually inferior to the rules suggested in this section. Neighborhood solutions are generated according to the following six methods.

3.2.1. Lot interchange

This method selects and exchanges two lots. A lot in this section means not all the items forming one lot but the portion of a lot's items assigned to one machine. The first lot to exchange is selected in proportion to tardiness; that is, the probability that lot A is selected equals the tardiness of lot A divided by the total tardiness. Because the tardiness of a lot is determined by the last item belonging to the lot, it can be computed easily by looking at the machines scheduled to process items of the lot. The other lot to exchange is selected randomly, and a machine processing a portion of that lot is also selected randomly.

For example, suppose that lot 5 is selected first based on the probability described above. The portion of items of lot 5 to exchange is the one waiting on machine 1, because the completion time of its item on that machine is the largest. Fig. 1 shows the procedure when lot 2 on machine 2 is arbitrarily selected and interchanged with lot 5 on machine 1.

Fig. 1. Lot interchange.



However, if this causes additional unnecessary setup time on machine 2 because two separated sets of items from lot 5 would be processed, as seen in Fig. 2, those two sets should be combined and processed successively.

Changing the sequence of lots may reduce the completion times of the lots because the setup times are sequence dependent. The rule may also remove additional setup times by combining items that originally belong to the same lot.

3.2.2. Lot insert

One lot is inserted at the end of another lot, as seen in Fig. 3. Lots are selected as described for the lot interchange method. The two sets of items from one lot then have to be processed successively.

3.2.3. Lot merge

This method selects two sets of items of the same lot that are scheduled on two different machines and merges them so that they are processed on one machine. This decreases the additional setup times that occur when items of the same lot are distributed over more than one machine. The lot selection rule is the same as in the lot interchange method. Once a lot is selected, two of its sets of items, the one with the largest completion time and one selected randomly, are merged. For example, lot 7 is selected first in Fig. 4. If an item of lot 7 has the largest completion time on machine 1, the items of lot 7 on machine 1 are merged with randomly selected items of lot 7, in this case those on machine 2. This rule is useful because merging lots can remove the unnecessary setup times incurred when the same type of items is spread over several machines.

3.2.4. Lot split

Items of one lot scheduled on one machine are split across two machines. The lot selection rule is the same as before. The selected set of items is divided arbitrarily, and one part is scheduled randomly on a machine that does not currently hold items of the same lot. However, the new schedule must not increase setup time as a result of dividing items of the same lot. Fig. 5 shows the selection and split of lot 1 on machine 1 and the resulting new schedules of machines 1 and 2. This rule can decrease tardiness by dividing the items of a lot whose completion time becomes large when all of its items are scheduled on the same machine.

3.2.5. Item interchange

It is also possible to move items instead of lots to achieve finer improvement. The item interchange method randomly selects and interchanges two items from two machines. Items located at the ends of a block of items from a single lot should be the ones interchanged so that unnecessary setups are avoided. Furthermore, if the receiving machine already holds items of the same lot as the newly interchanged item, they are joined to avoid an additional setup.

3.2.6. Item insert

A randomly selected item is inserted into a randomly selected machine. The item should not be inserted in the middle of a contiguous block of items from a single lot, so that no additional setup time is created. If the receiving machine already holds items of the same lot, the inserted item is combined with them.
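To make the moves above concrete, the sketch below applies one of them, lot merge, to the machine-sequence representation used earlier. The representation (a list of (lot, item count) blocks per machine) and the choice of target machine are our own illustrative assumptions, not the paper's code.

```python
import random

def lot_merge(sequences, lot):
    """Illustrative lot-merge move: gather the blocks of `lot` from a randomly
    chosen donor machine onto a target machine that already holds the lot,
    keeping them contiguous so no extra setup is created."""
    holders = [m for m, seq in sequences.items() if any(l == lot for l, _ in seq)]
    if len(holders) < 2:
        return sequences                       # nothing to merge
    target = holders[0]                        # in the paper: the machine holding the
                                               # latest-finishing block of this lot
    donor = random.choice([m for m in holders if m != target])
    moved = sum(c for l, c in sequences[donor] if l == lot)
    new = {m: list(seq) for m, seq in sequences.items()}
    new[donor] = [(l, c) for l, c in new[donor] if l != lot]
    merged = []
    for l, c in new[target]:
        merged.append((l, c + moved) if l == lot else (l, c))
        if l == lot:
            moved = 0                          # attach donor items to the existing block
    if moved:
        merged.append((lot, moved))
    new[target] = merged
    return new
```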

Fig. 3. Lot insert.

Fig. 4. Lot merge.

Fig. 5. Lot split.

Fig. 2. Lot interchange.



When a neighborhood solution is generated, one of the six methods described above can be chosen according to a probability distribution based on past experience. Alternatively, all six methods can be tried and the one that currently yields the best solution in terms of total tardiness can be chosen. Because the latter often gives significantly better solutions within a reasonable amount of time, although it obviously takes longer than the former, we use it in our computational experiments, as formalized in the perturbation procedure below.

Perturbation Procedure

Begin;
  Y1 = Lot-Interchange(X);
  Y2 = Lot-Insert(X);
  Y3 = Lot-Merge(X);
  Y4 = Lot-Split(X);
  Y5 = Item-Interchange(X);
  Y6 = Item-Insert(X);
  Find X* corresponding to Min(Y1, Y2, Y3, Y4, Y5, Y6);
  Y = X*;
End;
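A direct Python rendering of this procedure, assuming the six move functions of Section 3.2 and a total-tardiness cost function are available as callables (the names are placeholders, not the authors' code):

```python
def perturb(x, cost, moves):
    """Try every neighborhood-generation method and return the candidate with
    the smallest total tardiness (the strategy used in the experiments)."""
    candidates = [move(x) for move in moves]   # moves = [lot_interchange, lot_insert,
                                               #          lot_merge, lot_split,
                                               #          item_interchange, item_insert]
    return min(candidates, key=cost)
```

Bound to a particular cost function and move list (for example via `functools.partial`), this function can serve as the PERTURB step of the SA loop sketched in Section 3; each inner iteration then evaluates all six candidates, which is why this option costs more time than sampling a single move.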

3.3. Determination of parameters

SA is a meta-heuristic that has proven very effective for solving complicated combinatorial problems. The theory of SA indicates that it converges to a globally optimal solution with probability 1 if all theoretical conditions are satisfied [20,25]. However, the conditions for asymptotic convergence, such as the cooling schedule and stop criterion, cannot be met in practice [20]. Thus, it is critical to adjust parameter values such as the initial temperature, cooling schedule, repetition number, and ending condition to the characteristics of the problem.

We first use standard parameter values applicable to our problem and then refine them considering both the performance and the running time of the algorithm. A standard setting of 10 machines, 50 lots, and 20 items per lot was used to determine the parameters.

3.3.1. Temperature

Theoretically, the initial temperature should be high enough that all movements are acceptable. However, the initial temperature needs to be controlled more carefully because a very high initial temperature consumes too much time in the preparation stage of our problems (Fig. 6). Therefore, the initial temperature is often set to the minimum temperature at which the acceptance ratio, the number of accepted movements divided by the total number of movements, reaches a target value in preliminary experiments run before the actual SA [26]. In this paper we used a 60% target acceptance ratio estimated from 10,000 preliminary trials.
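One common way to realize this calibration, sketched here under our own assumptions rather than as the authors' procedure: sample a batch of random perturbations, then search for the lowest temperature whose acceptance ratio over those sampled moves meets the target.

```python
import math
import random

def calibrate_initial_temperature(x0, cost, perturb, target_ratio=0.6,
                                  trials=10_000):
    """Illustrative sketch: find a small temperature whose acceptance ratio
    over sampled moves is at least `target_ratio`."""
    x = x0
    deltas = [cost(perturb(x)) - cost(x) for _ in range(trials)]

    def ratio(t):
        accepted = sum(1 for d in deltas
                       if d <= 0 or math.exp(-d / t) > random.random())
        return accepted / len(deltas)

    t = 1.0
    while ratio(t) < target_ratio:             # grow until the target is met,
        t *= 2.0
    lo, hi = t / 2.0, t                        # then tighten by bisection
    for _ in range(20):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if ratio(mid) < target_ratio else (lo, mid)
    return hi
```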

3.3.2. Cooling ratio

Two cooling schedules are widely used. The logarithmic schedule sets the temperature of the kth exterior loop to

$$T_k = \frac{T_0 \log(k_0)}{\log(k + k_0)}, \quad k_0 > 1,$$

while the geometric schedule uses

$$T_k = \alpha T_{k-1}, \quad k = 1, 2, \dots, \quad 0 < \alpha < 1.$$

The logarithmic schedule guarantees convergence but converges slowly. The geometric schedule, used more commonly in practice, typically takes α between 0.5 and 0.99.

3.3.3. Epoch length

The epoch length is often proportional to the number of possible neighborhood solutions, and together with the cooling ratio it significantly affects the performance of the algorithm. Because the number of possible neighborhood solutions is too large, we use an epoch length proportional to the problem size:

epoch length = number of lots (L) × lot size (N) × β.

3.3.4. End count

The end count is the condition that terminates the algorithm; in theory the algorithm finishes at the point where the temperature converges to 0. Because the algorithm would then spend a long time at low temperatures, it is typical to stop if the external loop produces no improvement for some number of iterations. In our study this value is proportional to the problem size, with the parameter γ set to 14; the algorithm also finishes when the temperature reaches 0.1, which is usually low enough:

end count = number of lots (L) × lot size (N) × γ.
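Putting Sections 3.3.2–3.3.4 together, the parameter choices can be expressed directly in code. The values below are the ones reported in this section (α = 0.95, β = 1.1, γ = 14, stop at T = 0.1); the packaging function and its name are our own illustration.

```python
def sa_parameters(num_lots, lot_size, t0, alpha=0.95, beta=1.1, gamma=14,
                  t_min=0.1):
    """Parameter set used in the experiments (illustrative packaging)."""
    return {
        "initial_temperature": t0,                        # calibrated from the acceptance ratio
        "cooling_ratio": alpha,                           # T_k = alpha * T_{k-1}
        "epoch_length": int(num_lots * lot_size * beta),  # L * N * beta
        "end_count": int(num_lots * lot_size * gamma),    # L * N * gamma
        "minimum_temperature": t_min,                     # also stop once T reaches 0.1
    }

# Standard setting of Section 3.3: 50 lots, 20 items per lot, T0 = 261.
print(sa_parameters(num_lots=50, lot_size=20, t0=261))
```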

The next step adjusts the initial parameters so that they are more appropriate to our problems.

Fig. 6. Initial temperature and solution quality.



We consider not only the quality of solutions but also computation time, which is the drawback of SA compared with other approaches. The algorithm was tested 10 times with randomly generated variables while all parameters except the one being adjusted were held fixed. A box-whisker plot, which shows the center, variability, and tail length of a distribution, is used for the analysis.

Fig. 7 shows the relationship between the acceptance ratio, set to 54%, 55%, 57%, 60%, 70%, and 80%, and the quality of solutions in these preliminary tests, and Fig. 8 shows the relationship between the acceptance ratio and computation time. Solutions are of high quality when the acceptance ratio is relatively low, such as 54%, 55%, and 57%, while computation times are similar except at 80%. From Figs. 7 and 8 we found that taking an acceptance ratio below 60% improves solution quality without a significant penalty in computation time. Hence, we use an initial temperature of 261, which induces a 57% acceptance ratio and gave the best results in both average and variance. Similarly, we chose a cooling ratio of α = 0.95, an epoch length with β = 1.1, and an end count with γ = 14.

4. Performance evaluation

The suggested SA algorithm (SA_S) is compared with other algorithms to demonstrate its efficiency. One benchmark is a conventional SA (SA_C) that exchanges or inserts single items without considering problem characteristics such as lots and setup times. SA_C is included to show the relative advantage of SA_S, which uses the six neighborhood-generation methods to find the next move.

The other benchmark is a neighborhood search (NS) method, sometimes called a descent technique because each new seed represents a lower value of the objective, as introduced by Baker [27]. A known weakness of NS procedures is their tendency to become trapped at local optima [27]. The method of obtaining the initial seed and the neighborhood-generation mechanism of our NS are the same as those of SA_S, except for how a particular sequence is selected to become the new seed: NS takes the best solution (the new seed) inside the searchable region and never moves to a worse solution.

For a valid comparison, the same conditions, including the avoidance of unnecessary searches, are applied to all algorithms. Neighborhood solutions are generated in the same order as in the perturbation procedure of Section 3.2, based on our past experience.
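For comparison, a minimal sketch of such a descent procedure under the same assumptions as the earlier sketches: it reuses the initial seed and the six-move neighborhood but only accepts the best improving neighbor, stopping at the first local optimum. This is an illustration of the NS idea, not the authors' code.

```python
def neighborhood_search(x0, cost, moves):
    """Descent (NS) sketch: keep taking the best neighbor while it improves."""
    x = x0
    while True:
        candidates = [move(x) for move in moves]
        best = min(candidates, key=cost)
        if cost(best) >= cost(x):              # no improving neighbor: local optimum
            return x
        x = best                               # move to the better seed, never uphill
```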

obtained from a compound semiconductor manufac-turing company located in Iksan, Chonbuk, Korea.The processing time for each item (wafer) dependson the machine used and it is generated from Uniform[30, 60]. Given the number of job lots to be processed,for each job pair (i; j), the sequence-dependent setuptimes sijk on a machine k were randomly chosenfrom Uniform [10, 90]. The resulting setup time matrixwas then repeatedly corrected to satisfy the triangularproperty. Setup times between the items in a joblot are equal to zero because a job lot has only identicalitems.Due dates of jobs are integer values generated from

Uniform ½Pð1� t� r=2Þ;Pð1� tþ r=2Þ� as suggestedby Potts and Van Wassenhove [28]. P; t; and r controlmake-span, priority factor, and due date range factor,respectively. Because P cannot be calculated accuratelyit is estimated to be the smallest of processing times andsetup times of a job. r is fixed to be 0.8, and t is 0.4(tight: T), 0.45 (moderate: M), and 0.5 (loose: L).In addition to the standard setting of 10 machines, 50

lots, and 20 items per lot, we considered variouscombinations of the number of machines (M ¼ 5; 10,15), the number of lots (L ¼ 25; 50, 75), the size of lots(N ¼ 10; 20, 30), and due dates t=(T ;M;L). Wecompared the suggested SA S, the conventional SA C,and the NS through the combinations of these factors.Because SA C uses only two out of six different
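A sketch of how such an instance can be generated, under our reading of the description above: per-item processing times from Uniform[30, 60], machine-independent sequence-dependent setups from Uniform[10, 90] repeatedly corrected to satisfy the triangle inequality, and due dates from the Potts and Van Wassenhove formula. The estimate of the makespan P used here is a simplistic placeholder of our own, since the paper does not give its exact estimator.

```python
import random

def generate_instance(n_lots, n_machines, n_items, tau, rho=0.8, seed=0):
    """Illustrative instance generator based on the reported experimental setup."""
    rng = random.Random(seed)
    # Per-item processing times p[i][k] ~ Uniform[30, 60].
    p = [[rng.uniform(30, 60) for _ in range(n_machines)] for _ in range(n_lots)]
    # Sequence-dependent, machine-independent setups s[i][j] ~ Uniform[10, 90].
    s = [[0 if i == j else rng.uniform(10, 90) for j in range(n_lots)]
         for i in range(n_lots)]
    # Repeatedly shrink entries violating the triangle inequality s[i][l] <= s[i][j] + s[j][l].
    changed = True
    while changed:
        changed = False
        for i in range(n_lots):
            for j in range(n_lots):
                for l in range(n_lots):
                    if s[i][l] > s[i][j] + s[j][l]:
                        s[i][l] = s[i][j] + s[j][l]
                        changed = True
    # Crude placeholder estimate of the makespan P: cheapest total work spread over machines.
    P = sum(n_items * min(p[i]) for i in range(n_lots)) / n_machines
    # Due dates ~ Uniform[P(1 - tau - rho/2), P(1 - tau + rho/2)].
    due = [round(rng.uniform(P * (1 - tau - rho / 2), P * (1 - tau + rho / 2)))
           for _ in range(n_lots)]
    return p, s, due
```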

Fig. 7. Acceptance ratio and quality of solutions.

Fig. 8. Acceptance ratio and times.



Table 1
Computational results: 25 lots. Final solution in minutes; computation time in seconds.

Lot size  M/C  Due date   SA_S sol.  SA_S time   SA_C sol.  SA_C time    NS sol.  NS time
10          5  T                377         57        5771         15       2001       13
10          5  M                917         29        5459         24       2608       12
10          5  L               2544         37        7148         28       4088       12
10         10  T               2020         74        4613         53       5411       29
10         10  M               2664         68        7188         15       6705       29
10         10  L               3346        143        9808         12       8084       30
10         15  T               3366         51        6705         25       6977       50
10         15  M               3656         52        6259         52       7249       49
10         15  L               4426         55        8557         26       8676       49
20          5  T                  2        178        3059        127       2010       67
20          5  M               1042        190        9322         61       3148       61
20          5  L               2161        179        3323        331       6157       60
20         10  T               1939        317        4607        213       4230      140
20         10  M               2636        254        9890        121       5030      141
20         10  L               4486        295        8200        287       5793      139
20         15  T               2933        413        4588        606       4407      238
20         15  M               4778        199       12394         94       5287      245
20         15  L               4740        644        8700        299       6024      245
30          5  T                  0        269         623        637       2777      193
30          5  M                482        448        2762        389       4911      198
30          5  L               1795        298        4869        511       5314      165
30         10  T               7960        929       29500         39      12601      355
30         10  M               9479        727       33582         43      15839      354
30         10  L              12621        722       34894         40      19859      361
30         15  T               2036       1154       21210         43       6176      582
30         15  M               3302        729       21753         42       8256      589
30         15  L               5300       1316       23699         43       9945      582

Table 2
Computational results: 50 lots. Final solution in minutes; computation time in seconds.

Lot size  M/C  Due date   SA_S sol.  SA_S time   SA_C sol.  SA_C time    NS sol.  NS time
10          5  T               4604        241       10121        268      12865       89
10          5  M               8021        236       15518        323      20703       87
10          5  L              14979        255       20057        303      25398       90
10         10  T               4405        320       10861        133      11617      174
10         10  M               6511        452       11665        245      14887      180
10         10  L              10728        347       13240        391      20022      178
10         15  T               2692        504        5545        350       9547      252
10         15  M               5100        802        5881        515      12152      255
10         15  L               7933        610        7649        681      14570      257
20          5  T               3185        856       26365        939      15221      444
20          5  M              11235        767       29975       2126      29369      460
20          5  L              22415        672       46692       2123      34988      438
20         10  T               7283        963       13870       2282      26633      923
20         10  M              13943       1325       32021        635      31682      913
20         10  L              21186       1177       31133       2316      38987      939
20         15  T               5330       1494        7787       2386      18448     1487
20         15  M               8877       1592       11788       2361      21787     1508
20         15  L              13262       1485       18233       2349      27229     1546
30          5  T              12946       1422       47680       2890      29626     1041
30          5  M              30991       1886       78762       2908      57600     1085
30          5  L              51516       1338       95278       2912      78343      966
30         10  T               7177       1506       26819       3311      25177     2267
30         10  M              14795       2135       33255       3287      40208     2414
30         10  L              23217       1493       49947       3284      51748     2492
30         15  T               9404       2278       16894       3374      23691     3737
30         15  M              16382       2397       27843       3362      31706     3778
30         15  L              22678       2357       34802       3372      39302     4196



The algorithms are written in C++ and run on a Pentium 500 MHz computer. Tables 1–3 report the computational results for 25, 50, and 75 lots, respectively. The parameter that most strongly affects computation time is the number of lots, followed by the lot size, the number of machines, and the due date tightness. NS is faster for small problems, but SA_S and SA_C become relatively faster as the problem size grows. SA_C needs longer computation time because of the increased number of neighborhood-solution comparisons.

However, SA_C sometimes terminates prematurely because the solution is not improved for a while at high temperature. The solutions of SA_S are always better than those of the other two methods, while SA_C is better than NS in 37 of the 81 test settings.

5. Conclusion

This paper presented a simulated annealing (SA) approach for an unrelated parallel machine scheduling problem that minimizes total tardiness while considering sequence-dependent setup times and lot processing. New parameter values appropriate to our problem were suggested through preliminary experiments starting from a standard parameter setting. A neighborhood-solution generation scheme for perturbing initial or existing solutions was proposed; it comprises six methods, lot interchange, lot insert, lot merge, lot split, item interchange, and item insert, so that the characteristics of our problem are well reflected.

The suggested SA approach (SA_S) was compared with a conventional SA method (SA_C), which generates neighborhood solutions only by item interchange or item insert without considering setup times and lot processing. The purpose was to show the superiority of SA_S, with its six problem-specific neighborhood-generation methods, over SA_C. SA_S was also compared with a neighborhood search (NS) method, which appears to be a promising heuristic procedure for sequencing problems. NS always takes the best solution, or new seed, inside the searchable region and never allows a move to a worse solution.

The performance of SA_S is always better than that of the other two methods when they are compared over the number of machines, the lot size, the number of lots, and the due date tightness.

Table 3
Computational results: 75 lots. Final solution in minutes; computation time in seconds.

Lot size  M/C  Due date   SA_S sol.  SA_S time   SA_C sol.  SA_C time    NS sol.  NS time
10          5  T              12443        341       34600        366      28894      165
10          5  M              25547        513       42236       1180      43203      158
10          5  L              39292        450       61048        821      60222      161
10         10  T               2642        843       11660        386      19081      557
10         10  M               6269       1082       12818       1804      24540      561
10         10  L              13261        665       26714        487      34268      542
10         15  T               9301       1013       14293        834      25121      710
10         15  M              11995       1324       19937        517      27741      700
10         15  L              19868       1159       25571        839      35070      688
20          5  T               6617       1364       38173       2630      29384     1007
20          5  M              31466       1676       63570       2645      73150     1014
20          5  L              59430       1326      108460       2644      94291     1064
20         10  T              15959       2416       44132       3609      48918     3070
20         10  M              26421       2410       58809       3491      58700     3011
20         10  L              39351       2317       77231       3533      73376     3076
20         15  T               9300       2442       21823       3605      31070     4561
20         15  M              14666       2465       31416       3592      38545     4467
20         15  L              25634       2482       47742       3616      54146     4560
30          5  T              21557       2724      101615       3921      61139     2715
30          5  M              54293       2787      162860       3877      90540     2839
30          5  L             101354       1949      211747       3946     147938     2709
30         10  T               6944       3561       62109       5196      41704     8812
30         10  M              19951       3591       91999       5221      67103     7998
30         10  L              40991       3652       13024       5231      89526     9067
30         15  T              24185       3685       60647       5506      55761    11992
30         15  M              36428       3738       76370       5609      69272    11978
30         15  L              53437       3707       93988       5625      88038    11995



Although NS computes faster for the smaller problems used in this study, it actually takes longer as the problem size increases. This may be caused by the main property of NS, which examines all possibilities and selects the best.

It would be worthwhile to apply other meta-heuristics, such as Tabu search and Genetic algorithms, to our problem in addition to SA and to compare the outcomes. When the parameter values were determined in this study, a box-whisker plot was used under the assumption that the parameters are independent; it would also be interesting to consider the parameters simultaneously without this assumption. Finally, other practical situations, such as different ready times of lots and priority lots, could be incorporated into future research.

Acknowledgements

This paper was partially supported by the research funds of Chonbuk National University. The authors are indebted to Dr. M.W. Park for his helpful suggestions on this research. They are also grateful to the anonymous referees whose suggestions helped improve the presentation of this paper.

References

[1] Na DG, Jang W, Kim DW. Resource planning for compound semiconductor manufacturing. Proceedings of the Ninth International Manufacturing Conference in China, The Hong Kong Polytechnic University, Hong Kong, August 16–17, 2000.
[2] Cheng TCE, Sin CCS. A state-of-the-art review of parallel-machine scheduling research. Eur J Oper Res 1990;77:271–92.
[3] Karp RM. Reducibility among combinatorial problems. In: Complexity of computer computations. New York: Plenum Press, 1972. p. 85–103.
[4] Ho JC, Chang YL. Minimizing the number of tardy jobs for m parallel machines. Eur J Oper Res 1995;84:343–55.
[5] Pourbabai B. One stage scheduling of preemptive jobs on parallel machines with setup times and due dates. Proc Am Inst Ind Eng Annu Conf Conv 1985;258:525–8.
[6] Schutten JMJ, Leussink RAM. Parallel machine scheduling with release dates, due dates, and family setup times. Int J Prod Econ 1996;46–47:119–25.
[7] Akkiraju R, Murthy S, Keskinocak P, Wu F. Multi machine scheduling: an agent-based approach. Innovative Applications of Artificial Intelligence Conference Proceedings, July 26–30, 1998. p. 1013–18.
[8] Balakrishnan N, Kanet JJ, Sridharan SV. Early/tardy scheduling with sequence dependent setups on uniform parallel machines. Comput Oper Res 1999;26:127–41.
[9] Koulamas C. The total tardiness problem: review and extensions. Oper Res 1994;42:1025–41.
[10] Suresh V, Chaudhuri D. Minimizing maximum tardiness for unrelated parallel machines. Int J Prod Econ 1994;34:223–9.
[11] Adamopoulos GI, Pappis CP. Scheduling under a common due-date on parallel unrelated machines. Eur J Oper Res 1998;105(3):494–501.
[12] Slowinski R. Production scheduling on parallel machines subject to staircase demands. Eng Costs Prod Econ 1988;14:11–17.
[13] Lee H, Guignard M. Hybrid bounding procedure for the workload allocation problem on parallel unrelated machines with setups. J Oper Res Soc 1996;47:1247–61.
[14] Weng MX, Lu J, Ren H. Unrelated parallel machine scheduling with setup consideration and a total weighted completion time objective. Int J Prod Econ 2001;70:215–26.
[15] Bruno J, Sethi R. Task sequencing in a batch environment with setup times. Found Control Eng 1978;3:105–17.
[16] Ghosh JB, Gupta JND. Batch scheduling to minimize maximum lateness. Oper Res Lett 1997;21:77–80.
[17] Liaee MM, Emmons H. Scheduling families of jobs with setup times. Int J Prod Econ 1997;51:165–76.
[18] Potts CN, Kovalyov MY. Scheduling with batching: a review. Eur J Oper Res 2000;120:228–49.
[19] Tamimi SA, Rajan VN. Reduction of total weighted tardiness on uniform machines with sequence dependent setups. Industrial Engineering Research Conference Proceedings, 1997. p. 181–5.
[20] Koulamas C. Decomposition and hybrid simulated annealing heuristics for the parallel-machine total tardiness problem. Nav Res Logistics 1997;44:105–25.
[21] Suresh V, Chaudhuri D. Bicriteria scheduling problem for unrelated parallel machines. Comput Ind Eng 1996;30:77–82.
[22] Armentano VA, Yamashita DS. Tabu search for scheduling on identical parallel machines to minimize mean tardiness. J Intell Manuf 2000;11:453–60.
[23] Park MW, Kim YD. Search heuristics for a parallel machine scheduling problem with ready times and due dates. Comput Ind Eng 1997;33:793–6.
[24] Jozefowska J, Mika M, Rozycki R, Waligora G, Weglarz J. Local search meta-heuristics for discrete-continuous scheduling problems. Eur J Oper Res 1998;107:354–70.
[25] Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science 1983;220:671–80.
[26] Johnson DS, Aragon CR, McGeoch LA, Schevon C. Optimization by simulated annealing: an experimental evaluation; part I, graph partitioning. Oper Res 1989;37:865–92.
[27] Baker KR. Elements of sequencing and scheduling. Amos Tuck School of Business Administration, Dartmouth College, Hanover, NH, 1995.
[28] Potts CN, Van Wassenhove LN. A decomposition algorithm for the single machine total tardiness problem. Oper Res Lett 1982;1(5):177–81.
