
Computers & Industrial Engineering 62 (2012) 276–285


Exact and heuristic algorithms for the aerial refueling parallel machine scheduling problem with due date-to-deadline window and ready times

Sezgin Kaplan a,*, Ghaith Rabadi b,1

a Aeronautics and Space Technologies Institute, Turkish Air Force Academy, 34149 Istanbul, Turkey
b Department of Engineering Management & Systems Engineering, Old Dominion University, Norfolk, VA 23529, USA


Article history:
Received 12 January 2011
Received in revised form 5 August 2011
Accepted 23 September 2011
Available online 29 September 2011

Keywords:
Parallel machine scheduling
Total weighted tardiness
Due date-to-deadline window
Ready time
Dispatching rule
Simulated annealing

0360-8352/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cie.2011.09.015

This manuscript was processed by Area Editor Subhash C. Sarin.
* Corresponding author. Tel.: +90 212 663 2490; fax: +90 212 663 2837.

E-mail addresses: [email protected], [email protected] (S. Kaplan), [email protected] (G. Rabadi).

1 Tel.: +1 757 683 4918; fax: +1 757 683 5640.

The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for fighter aircrafts (jobs) on multiple tankers (machines) to minimize the total weighted tardiness. ARSP can be modeled as a parallel machine scheduling problem with ready times and a due date-to-deadline window to minimize total weighted tardiness. ARSP assumes that the jobs have different ready times and a due date-to-deadline window between the refueling due date and a deadline to return without refueling. In this paper, we first formulate the ARSP as a mixed integer programming model. The objective function is a piecewise tardiness cost that takes into account due date-to-deadline windows and job priorities. Since ARSP is NP-hard, two heuristics are proposed to obtain solutions in reasonable computation times, namely (1) a modified ATC rule (MATC) and (2) a simulated annealing method (SA). The proposed heuristic algorithms are tested in terms of solution quality and CPU time through computational experiments with data randomly generated to represent aerial refueling operations of an in-theater air operation. Solutions provided by both algorithms were compared to optimal solutions for problems with up to 12 jobs and to each other for larger problems with up to 60 jobs. The results show that MATC is more likely to outperform SA especially when the problem size increases, although it has significantly worse performance than SA in terms of deviation from the optimal solution for small size problems. Moreover, the CPU time performance of MATC is significantly better than that of SA in both cases.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Aerial refueling (AR) is the process of transferring fuel from a tanker aircraft to a receiver aircraft during flight. AR is extensively used in large-scale military operations because of its advantages for an air force. In-theater aerial refueling is supported entirely through aerial refueling tracks (tracks 1 and 2 in Fig. 1), which are similar to gas stations floating in the sky. Empty alternative tracks 3–6 and wings 1–9, which are formed by various even numbers of aircrafts, are also shown in Fig. 1. Tankers orbit in a track location with a constant speed and altitude waiting for receivers to arrive for refueling. The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the assignment of each fighter aircraft (job) to a tanker (machine) and the refueling completion times for the aircrafts. ARSP assumes that aircraft wings stay together as a group, that alternative track locations and the assigned track stations (tracks 1 and 2) for the tankers are known, that the number of tankers does not change during an operation (i.e., tankers replace each other without delay), and that aircraft wings, which move dynamically in the sky, can reach available tankers in equal times.

Since the endurance of the fighting force in the air operation is much more important than its fuel costs, ARSP is modeled as receiver-based. Thus, the fuel source is assumed to be continuous and to supply the receivers' demand without delay. ARSP can be modeled as an identical parallel machine scheduling problem with the tankers being machines and the fighter aircrafts being jobs that have ready times (the times they become available) and require a certain amount of processing (refueling) time regardless of which tanker they are assigned to. Minimizing the total weighted tardiness is a reasonable objective function to meet the aircrafts' refueling due dates and ultimately the mission's due date. Additionally, the aircrafts have a refueling deadline that they cannot miss; otherwise, they will have to go back to their base, resulting in unscheduled jobs with a high cost. To effectively model both aspects of due dates and deadlines, a new piecewise tardiness cost function is defined to capture the cost structure over a time horizon that encompasses the due dates and deadlines. In this paper, a mixed integer linear programming (MILP) model will be developed to find optimal solutions for the problem.

Fig. 1. In-theater aerial refueling.


However, since identical parallel machine scheduling is NP-hard even with only two machines (Blazewicz, Ecker, Pesch, Schmidt, & Weglarz, 2007; Garey & Johnson, 1979; Karp, 1972), ARSP is also NP-hard, which means that obtaining optimal solutions for large instances will be computationally difficult. Therefore, a composite dispatching rule, namely the Modified ATC (MATC) rule, is proposed based on the commonly used Apparent Tardiness Cost (ATC) rule, which is often applied to total weighted tardiness problems. Dispatching (or priority) rules are very common heuristics for scheduling problems due to their easy implementation and low computational requirements. A Simulated Annealing (SA) metaheuristic is also developed for the problem at hand to compare the effectiveness of the proposed MATC rule for large instances.

The rest of this paper is organized as follows. In Section 2, the ARSP is defined and mapped to the abstract scheduling problem. Related research is summarized in Section 3. A MILP model for the problem is developed in Section 4, and solution methods are introduced in Section 5. A computational study for small and large problem sizes is described in Section 6. Finally, conclusions are drawn in Section 7.

2. Problem definition

ARSP can be defined as scheduling n jobs (wings of aircrafts) on m identical parallel machines (tankers), where job j arrives (becomes available) at ready time rj and should be completed by the due date dj and before the deadline Dj. The problem elements are shown in Fig. 2. Job j requires processing time pj, which is the time required to approach the refueling area, anchor to a tanker and pump fuel, starting at time sj and completing at time Cj. The ready time rj is the earliest time a receiver can start processing (i.e., the receiver cannot be scheduled before rj). The due date dj is the planned latest time for a receiver to complete refueling, and the deadline Dj is the latest time for a receiver to finish refueling, after which it must return to base to avoid running too low on fuel. Missing the due date is not preferred but allowed, and a weighted tardiness cost will be incurred for jobs that miss their due date but not their deadline. The weight wj represents the job's priority for refueling. If a job misses Dj, it must return to base, incurring a high penalty, and will not be assigned to a machine.

Fig. 2. Scheduling time horizon.

The due date-to-deadline window (Dj − dj) is the time window in which a weighted tardiness cost is incurred. There is also a scheduling window between the ready times and deadlines in which all jobs have to be started and completed. The objective is to find a schedule that minimizes the total weighted tardiness (TWT) as a performance measure to maintain the quality of service with respect to due dates.

ARSP assumes that the jobs are ready at different times rj and have due date-to-deadline (d-to-D) windows of different sizes. A piecewise tardiness cost function may be defined where, if job j is not completed on or before dj, the tardiness cost Tj = max(Cj − dj, 0) of job j is incurred in the due date-to-deadline window. If its completion time Cj would pass Dj, the job is not scheduled and a high unavailability cost F is incurred, as shown in Fig. 3.
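Written out, the per-job contribution to the objective (a restatement of the prose above, of Fig. 3, and of the weighting used in the model of Section 4) is:

$$
w_j T_j =
\begin{cases}
0, & C_j \le d_j,\\
w_j\,(C_j - d_j), & d_j < C_j \le D_j,\\
w_j\,F, & \text{job } j \text{ is unscheduled (its completion would exceed } D_j).
\end{cases}
$$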

3. Literature review

Aerial refueling is generally employed in two cases of military operations: inter-theater (e.g., overseas deployments) and in-theater (e.g., local conventional operations). This paper examines the in-theater ARSP, which is far more complex and difficult to solve than the inter-theater ARSP. The only existing research on scheduling in-theater aerial refueling is by Jin, Shima, and Schumacher (2006), who introduced the static autonomous refueling scheduling of multiple unmanned aerial vehicles (UAVs) on a single tanker to minimize the total time needed for refueling all UAVs in the sequence. A dynamic programming method was used to develop an efficient recursive algorithm to find the optimal initial sequence. On the other hand, the only major work on inter-theater aerial refueling is by Barnes, Wiley, Moore, and Ryer (2004), who studied the aerial fleet refueling problem (AFRP) and used a Group Theoretic Tabu Search (GTTS) approach as a solution method.

There is very little existing research addressing the scheduling of identical parallel machines with ready times to minimize total weighted tardiness (the Pm|rj|ΣwjTj problem), as shown in Table 1. Moreover, the related studies consider neither the d-to-D windows nor the piecewise tardiness. The most closely related research is presented in Mönch, Balasubramanian, Fowler, and Pfund (2005), Reichelt, Mönch, Gottlieb, and Raidl (2006), Pfund, Fowler, Gadkari, and Chen (2008), Gharehgozli, Tavakkoli, and Zaerpour (2009), and Driessel and Mönch (2009), with the differentiators being sequence-dependent setups, batch machines, and precedence constraints.

Mönch et al. (2005) attempted to minimize total weighted tardiness on parallel batch machines with incompatible job families and unequal ready times (Pm|rj, batch, incomp.|ΣwjTj). They proposed two different decomposition approaches. Dispatching and scheduling rules were used for the batching phase and the sequencing phase of the two approaches. Reichelt et al. (2006) were interested in minimizing total weighted tardiness and makespan at the same time (Pm|rj, batch, incomp.|ΣwjTj, Cmax). In order to determine a Pareto-efficient solution for the problem of scheduling jobs with incompatible families on parallel batch machines, they suggested a hybrid multi-objective genetic algorithm.


Fig. 3. Piecewise tardiness.

Table 1. Literature on the Pm|rj|ΣwjTj problem.

S/N  Source                             Year  Problem                            Solution approach
1    Hoitomt, Luh, Max, and Pattipati   1990  Pm|rj, prec|ΣwjTj²                 Lagrangian relaxation technique and list-scheduling
2    Mönch et al.                       2005  Pm|rj, batch, incomp.|ΣwjTj        Two- and three-stage decomposition approaches, genetic algorithm, dispatching rules
3    Reichelt et al.                    2006  Pm|rj, batch, incomp.|ΣwjTj, Cmax  Three-phase scheduling approach: hybrid multi-objective metaheuristic combining the NSGA-II algorithm and a local search
4    Mönch et al.                       2006  Pm|rj, batch, incomp.|ΣwjTj        Apparent Tardiness Cost (ATC) dispatching rule
5    Cheng, Chiang, and Fu              2008  Pm|rj, batch, incomp.|ΣwjTj        Memetic algorithm, BATC-II dispatching rule
6    Pfund et al.                       2008  Pm|rj, skj|ΣwjTj                   Extension of the apparent tardiness cost with setups (ATCS) approach
7    Gharehgozli et al.                 2009  Pm|rj, skj|ΣwjTj, ΣwjCj            Fuzzy mixed-integer goal programming (FMIGP)
8    Driessel and Mönch                 2009  Pm|rj, skj, prec|ΣwjTj             Variable neighborhood search


Pfund et al. (2008) addressed scheduling jobs with ready times on identical parallel machines with sequence-dependent setups by minimizing the total weighted tardiness (Pm|rj, skj|ΣwjTj). Their approach was an extension of the Apparent Tardiness Cost with Setups (ATCS) approach by Lee and Pinedo (1997) to allow non-ready jobs to be scheduled. Gharehgozli et al. (2009) presented a new mixed-integer goal programming (MIGP) model for a parallel machine scheduling problem with sequence-dependent setup times and ready times (Pm|rj, skj|ΣwjTj, ΣwjCj). Fuzzy processing times and two fuzzy objectives were considered in the model to minimize the total weighted flow time and the total weighted tardiness simultaneously. Driessel and Mönch (2009) discussed a scheduling problem for jobs on identical parallel machines where ready times of the jobs, precedence constraints, and sequence-dependent setup times were considered to minimize total weighted tardiness (Pm|rj, skj, prec|ΣwjTj). They suggested a Variable Neighborhood Search (VNS) scheme for the problem.

A number of simple dispatching rules, such as Earliest Due Date (EDD), Shortest Processing Time (SPT), Minimum Slack (MSLACK), and Slack per Remaining Processing Time (S/RPT), have been applied to solve total tardiness, total weighted tardiness, and maximum tardiness related problems. Vepsalainen and Morton (1987) first applied the principles of the Cost Over Time (COVERT) rule (Carroll, 1965) and the Modified Operation Due date (MOD) rule (Baker & Bertrand, 1982) to develop the apparent tardiness cost (ATC) heuristic for the parallel machine total weighted tardiness problem. Several versions of the ATC rule have also been developed for different types of parallel machine scheduling environments. Rachamadugu and Morton (1982) proposed the RM heuristic for parallel machine scheduling, and Morton and Pentico (1993) developed the X-RM heuristic by modifying the RM heuristic to consider dynamic job arrivals. Lee and Pinedo (1997) implemented the Apparent Tardiness Cost with Setups (ATCS) approach to find an initial schedule for a 3-phase approach to scheduling jobs with ready times on identical parallel machines with sequence-dependent setups. Logendran and Subur (2004) developed a modified ATC method as the initial solution generation method to be used with their tabu-search based heuristic. Mönch, Zimmermann, and Otto (2006) suggested a simple heuristic based on the ATC dispatching rule to minimize total weighted tardiness on parallel batch machines with incompatible job families and unequal ready times of the jobs. Pfund et al. (2008) adopted an extension of the Apparent Tardiness Cost with Setups (ATCS) approach by Lee and Pinedo (1997) for scheduling jobs with ready times on identical parallel machines with sequence-dependent setups. In this paper, we model the in-theater ARSP as a parallel machine scheduling problem for the first time, and we consider both due dates and deadlines in a dynamic scheduling environment. We modify the ATC rule into the MATC rule to schedule jobs with ready times on identical parallel machines with a d-to-D window and a piecewise tardiness cost function. Such an objective function has important nonmilitary applications, such as capturing the tolerance of customers to order delays and stockout costs, which are the revenue lost due to the inability to fill dynamic customer orders, similar to the unavailability cost in aerial refueling.

4. Mathematical model

In this paper, ARSP has the following assumptions: Any job can be processed on at most one machine at any time. No preemption is allowed; i.e., once an operation is started, it continues until complete. Machines are always available (no breakdowns). Each machine can process at most one job at any time. Setup times are sequence-independent and are included in the processing times. Processing times and technological constraints are deterministic and known. Since higher wing availability is desired and the relative importance levels of the wings affect the scheduling in ARSP, the total weighted tardiness is used as the scheduling performance measure in the mathematical model.

4.1. Notations

In order to represent the starting point for each machine, a dummy index 0 is defined for the predecessor of the first job.

4.1.1. Parameters

- pj = processing time of job j;
- rj = ready (release) time of job j;
- wj = weight of job j;
- dj = due date of job j;
- Dj = deadline of job j;
- M = a large positive integer;
- F = fixed cost of returning back to base; it must be much larger than the Dj − dj values.

S. Kaplan, G. Rabadi / Computers & Industrial Engineering 62 (2012) 276–285 279

4.1.2. Decision variables

The MILP model has two continuous (completion time and tardiness) and two binary (sequence and assignment) decision variables.

- Cj = completion time of job j;
- Tj = piecewise tardiness of job j;
- xijk = 1 if job i precedes job j on machine k, and 0 otherwise;
- yjk = 1 if job j is assigned to machine k, and 0 otherwise.

The solution set contains 2n continuous variables (n continuous variables Cj and n continuous variables Tj) and nm(1 + n) binary variables (n × n × m binary variables xijk and n × m binary variables yjk).
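For instance, the 12-job, 3-machine example used later in Section 5.1 involves 2(12) = 24 continuous variables and 12 · 3 · (1 + 12) = 468 binary variables.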

4.2. A mixed integer programming formulation

The complete Mixed Integer Linear Programming (MILP) model is introduced as follows:

min Z = Σj wj Tj                                                          (1)

subject to

Σk yjk ≤ 1,                  j = 1, ..., n                                (2)

Σj x0jk ≤ 1,                 k = 1, ..., m                                (3)

Cj + M(1 − Σk yjk) ≥ rj + pj,          j = 1, ..., n                      (4a)

Cj + M(1 − xijk) ≥ Ci + pj,            i = 1, ..., n; j = 1, ..., n; k = 1, ..., m   (4b)

Cj ≤ Dj · Σk yjk,            j = 1, ..., n                                (5)

Σ_{i=0, i≠j} xijk = yjk,     j = 1, ..., n; k = 1, ..., m                 (6)

Σj xijk ≤ yik,               i = 1, ..., n; k = 1, ..., m                 (7)

xijk + xjik ≤ 1,             i = 1, ..., n; j = 1, ..., n; k = 1, ..., m  (8)

Tj ≥ Cj − dj,                j = 1, ..., n                                (9a)

Tj ≥ F(1 − Σk yjk),          j = 1, ..., n                                (9b)

Cj, Tj ≥ 0,                  j = 1, ..., n                                (9c)

xijk, yjk ∈ {0, 1},          i = 0, 1, ..., n; j = 1, ..., n; k = 1, ..., m   (10)

Here Σj denotes summation over j = 1, ..., n, Σk over k = 1, ..., m, and Σ_{i=0, i≠j} over i = 0, ..., n with i ≠ j.

Eq. (1) minimizes the total weighted tardiness. The constraints in (2) guarantee that no job can be assigned to more than one machine and allow some jobs not to be assigned to any machine at all (returning back to base). The constraints in (3) ensure that no more than one job can be in the first position on a given machine and that it is possible for a machine not to have any jobs assigned to it at all. The two linear constraint sets (4a) and (4b) are an alternative formulation of the nonlinear constraints in (11):

Cj + M(1 − xijk) ≥ max(rj, Ci) + pj,   i = 0, 1, ..., n; j = 1, ..., n; k = 1, ..., m   (11)

In constraints (11), if job i precedes job j on machine k, then job j must be scheduled after it becomes ready and after its predecessor job i is complete (hence the term max(rj, Ci)). Constraints (4a) ensure that job j is scheduled on machine k after its ready time. Constraints (4b) ensure that job j is scheduled after the completion time of its predecessor job i when job i precedes job j on machine k. Constraints (5) ensure that no job is completed past its deadline. Constraints (6) guarantee that each job j is either first on machine k (preceded by the dummy job 0) or preceded by another job i. Constraints (7) ensure that each job i assigned to machine k is either last on that machine or succeeded by at most one job j. Constraints (8) enforce the precedence relationships: job j cannot precede job i when job i precedes job j on machine k. Constraints (9a)–(9c) define the piecewise tardiness. Constraints (9b) reduce to (9c) when job j is assigned to a machine and take the place of constraint (9a) when job j is not assigned to any machine. Finally, constraints (10) restrict the sequencing and assignment variables to be binary.

5. Solution methods

Exact algorithms can be used to find optimal solutions for scheduling problems. However, most scheduling problems, including ARSP, are NP-hard and may take a prohibitively long time to reach optimal solutions for large problems. Therefore, it becomes necessary to develop capable and easily implemented methods that reach good quality solutions in reasonable computational times. As a result, the literature is rich with heuristic algorithms for scheduling problems. In this paper, a new composite dispatching rule, MATC (Modified ATC rule), is introduced which extends the ATC rule by taking into account the due date-to-deadline window and the jobs' ready times. Additionally, a Simulated Annealing (SA) metaheuristic is applied to obtain near-optimal solutions for the problem and is used as a way to evaluate the performance of the MATC rule.

5.1. MATC rule

Dispatching (or priority) rules are very common heuristics for scheduling problems as they are fast and easy to implement. They are designed so that the priority index for each job can be computed easily using the information available at any time (Pinedo, 2008). In this paper, the composite dispatching rule MATC is a modification of the Apparent Tardiness Cost (ATC) rule, which is a widely used rule for the classical total tardiness scheduling problem.

The different versions of the ATC rule are based on priorities that are estimated by the following formulation:

πj = (wj / pj) · Uj    (12)

where Uj is the marginal cost of delay; in other words, it is an activity time urgency function. The difficulty of this formula lies in the estimation of Uj (Morton & Pentico, 1993). In this paper, we formulate this urgency function by taking into account the due date-to-deadline (d-to-D) window and the ready times.

The ATC rule for the total weighted tardiness problem combines the elementary Weighted Shortest Processing Time first (WSPT) dispatching rule and the Minimum Slack first (MS) rule. Every time a machine becomes free, the ATC calculates a ranking index for each remaining job. The job with the highest ranking index defined in (13) is then selected to be processed next:

πj(t) = (wj / pj) · exp(−max[dj − pj − t, 0] / (k1·p̄))    (13)

where p̄ is the average processing time of the remaining jobs and k1 is a scaling parameter, called the look-ahead parameter. If k1 is very large, the ATC rule behaves similarly to the WSPT rule, and if k1 is very small, the rule behaves similarly to the MS rule. The WSPT rule is optimal when all jobs are tardy, while the MS rule is optimal when all due dates are sufficiently loose and spread out.

When the d-to-D window is considered, a new slack factor needs to be included, whereby at each decision time getting closer to the deadline affects the priorities. Because a much higher tardiness cost will be incurred if jobs miss their deadlines, a second slack factor may be defined as max[Dj − pj − t, 0]. The deadline factor is calculated by (14):

exp(−max[Dj − pj − t, 0] / (k2·p̄))    (14)

Moreover, if a job misses its deadline, it has to return to base and cannot be assigned to any machine. Therefore, its priority has to be set to zero after applying a penalty, because it will no longer belong to the pool of jobs to be scheduled. In order to reflect this aspect in the scheduling problem, the scheduling-decision factor in (15) is included as a multiplier. Before the deadline of job j is reached, the scheduling-decision factor has no effect on the priority, and therefore the priority index for j is calculated only by the multiplication of (13) and (14). A small number e is added to allow assigning the job if the completion time happens to be exactly equal to the deadline.

max[Dj − pj − t + e, 0] / [Dj − pj − t + e]    (15)

Finally, for the jobs' ready times, the factor in (16) may be included as a multiplier, since a job's ready time influences the priority index when it is larger than the current time t:

exp(−max[rj − t, 0] / (k3·r̄))    (16)

As a result, the priority index for job j at time t is calculated by (17):

πj(t) = (wj / pj) · exp(−max[dj − pj − t, 0] / (k1·p̄)) · exp(−max[Dj − pj − t, 0] / (k2·p̄)) · exp(−max[rj − t, 0] / (k3·r̄)) · max[Dj − pj − t + e, 0] / [Dj − pj − t + e]    (17)

The proposed MATC heuristic is dynamic in the sense that after the completion of each job, the remaining jobs are prioritized according to the newly introduced index in (17), and the job with the highest priority is selected. The recalculated nominal priority takes into account the previous decisions, as they affect the earliest starting times of the remaining jobs. According to the proposed modified rule, the job with the highest priority will be assigned to the earliest available machine. The recommended values of the look-ahead parameters (k1, k2, k3) will be determined by extensive experiments in Section 6.3.1. The pseudo code for MATC is given below as MATC_ALGORITHM.

MATC_ALGORITHM

t: decision time for assignment
CompTimej: completion time of job j
Cmaxi(t): makespan of machine i at time t
πj(t): priority of job j at decision time t
U: set of undecided jobs
M: set of machines
n: number of jobs
rj: ready time of job j
pj: processing time of job j
wj: weight of job j
tardinessj: tardiness of job j
L: large integer

1. Set t = 0, U = {1, 2, ..., n}, M = {1, 2, ..., m}, CompTimej = 0 for all j ∈ U, Cmaxi(t) = 0 for all i ∈ M
2. Calculate πj(t) for all j ∈ U by using Eq. (17)
3. If πj(t) = 0 for a job j ∈ U, then remove j from U and set CompTimej = L
4. while U ≠ ∅
5.   Find the job j ∈ U with πj(t) = max over k ∈ U of πk(t)
6.   Find the earliest available machine i ∈ M, i.e., the machine with the minimum Cmaxi(t)
7.   Update CompTimej = max(Cmaxi(t), rj) + pj and remove j from U
8.   Update Cmaxi(t) = CompTimej
9.   Update t = min over m ∈ M of Cmaxm(t)
10.  Calculate πj(t) for all j ∈ U by using Eq. (17)
11.  If πj(t) = 0 for a job j ∈ U, then remove j from U and set CompTimej = L
12. end while
13. Calculate and display the total weighted tardiness
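To make the prioritization in steps 2 and 10 concrete, the following is a minimal C++ sketch of the index in Eq. (17). The Job struct, the function name and the default value of eps are illustrative assumptions rather than the authors' implementation; the averages p̄ and r̄ of the remaining jobs and the look-ahead parameters are supplied by the caller.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative only: struct and function names are not from the paper's code.
struct Job {
    double p;  // processing time
    double r;  // ready time
    double w;  // weight
    double d;  // due date
    double D;  // deadline
};

// Priority index pi_j(t) of Eq. (17). pBar and rBar are the average processing
// and ready times of the remaining jobs; k1, k2, k3 are the look-ahead
// parameters; eps plays the role of e in Eq. (15).
double matcPriority(const Job& j, double t, double pBar, double rBar,
                    double k1, double k2, double k3, double eps = 1e-6) {
    double dueSlack      = std::max(j.d - j.p - t, 0.0);
    double deadlineSlack = std::max(j.D - j.p - t, 0.0);
    double readySlack    = std::max(j.r - t, 0.0);
    // Scheduling-decision factor of Eq. (15): 1 while the job can still meet
    // its deadline, 0 afterwards.
    double schedFactor = (j.D - j.p - t + eps > 0.0) ? 1.0 : 0.0;
    return (j.w / j.p)
         * std::exp(-dueSlack      / (k1 * pBar))
         * std::exp(-deadlineSlack / (k2 * pBar))
         * std::exp(-readySlack    / (k3 * rBar))
         * schedFactor;
}
```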

A sample problem with 12 jobs and 3 machines is given below to illustrate the methodology.

Ready time = [11.5, 3.5, 3.0, 6.5, 23.5, 6.5, 21.0, 14.0, 6.5, 8.5, 1.5, 11.0];
Processing time = [41, 39, 41, 44, 19, 30, 19, 32, 30, 37, 24, 42];
Weight = [8, 8, 3, 4, 3, 1, 3, 8, 4, 7, 6, 4];
Due date = [93.5, 81.5, 85.0, 94.5, 61.5, 66.5, 59.0, 78.0, 66.5, 82.5, 49.5, 95.0];
Deadline = [134.5, 120.5, 126.0, 138.5, 80.5, 96.5, 78.0, 110.0, 96.5, 119.5, 73.5, 137.0];

A sample feasible solution of the MATC algorithm for this problem instance is given in Table 2. Jobs whose completion times miss their deadline are denoted by very large completion times and are assigned to a dummy machine, m + 1. In Table 2, the first row gives the job indices, the second row the machine indices, and the third row the completion times. For example, job 7 is scheduled on machine 1 to complete at time 44.5. Jobs 4 and 6 are not scheduled on any machine; this is represented by scheduling them on machine 4 (number of machines + 1) with very long completion times.
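As a check of the cost structure, the total weighted tardiness of this schedule can be worked out directly from the data above: only jobs 9, 1, 10 and 12 finish after their due dates, with T9 = 74.5 − 66.5 = 8, T1 = 115.5 − 93.5 = 22, T10 = 98.5 − 82.5 = 16 and T12 = 118 − 95 = 23, so the scheduled jobs contribute 4(8) + 8(22) + 7(16) + 4(23) = 412, while the two unscheduled jobs 4 and 6 add the unavailability penalty (w4 + w6)F = 5F.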

5.2. Simulated annealing

Simulated Annealing (SA) is one of the most frequently used metaheuristics to find near-optimal solutions to combinatorial optimization problems through the process of probabilistic state transition (Silver, 2002). It is based on the work of Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1956), who simulated the energy levels in cooling solids by producing a sequence of states.

Table 2. Sample feasible solution.

Job index        11    7     9     1      2     5     10    3    8    12   4       6
Machine index    1     1     1     1      2     2     2     3    3    3    4       4
Completion time  25.5  44.5  74.5  115.5  42.5  61.5  98.5  44   76   118  99,999  99,999


SA is a local search improvement heuristic that generates neighbor solutions by simple moves from the current solution, starting with a randomly generated initial solution. It has a mechanism that enriches the search process by preventing getting trapped at local optima through accepting worse solutions with a certain probability (Hazir, Gunalay, & Erel, 2008). The SA method was chosen as a robust metaheuristic to provide immediate results for the problem. SA can deal with arbitrary systems and cost functions and is relatively easy to implement even for complex problems. It generally gives a good solution and statistically guarantees finding an optimal solution, but SA cannot tell whether it has found an optimal solution (Tan, Lee, Zhu, & Ou, 2001).

A general SA metaheuristic algorithm is used to improve a random initial solution. SA typically keeps one solution in memory and tries to move to improved solutions. The speed of an SA procedure depends on how fast the algorithm finds a good neighbor, which may lead to an optimal or near-optimal solution. Moreover, taking the most promising neighbors into account in the search for a good neighbor accelerates the algorithm significantly. Pseudo code for SA is given below.

S: search space
h: current solution
f(h): objective function value of the current solution
Memory: memory set holding the current best solution and its objective function value
i: inner loop iteration counter
imax: maximum number of inner loop iterations
t: iteration counter
tmax: maximum number of iterations
T: temperature
k: initial temperature coefficient
alpha: temperature cooling coefficient
N(h): neighborhood of h
h': neighbor solution of the current solution
h*: best solution

1: Get a random solution as an initial solution h from S
2: Calculate f(h)
3: Initialize the memory, Memory = {(h, f(h))}
4: Set the iteration counters i = 0, t = 0
5: Set the initial temperature T = k·f(h)
6: while t < tmax do
7:   while i < imax do
8:     Choose h' ∈ N(h) ⊆ S
9:     Calculate f(h')
10:    if f(h') ≤ f(h) or rand[0, 1] ≤ exp(−(f(h') − f(h)) / T) then
11:      Set h = h' and update Memory = {(h, f(h))}
12:    end if
13:    i = i + 1
14:    t = t + 1
15:  end while
16:  Update the temperature T = alpha·T
17:  i = 0
18: end while
19: Output the best solution h* stored in Memory

SA has four parameters: the maximum number of iterations (tmax), the maximum number of inner loop iterations (imax), the initial temperature coefficient (k) and the temperature cooling coefficient (alpha); these parameters will be tuned in Section 6.3.2 for better solutions.

A special perturbation is required to be able to deal with unscheduled jobs when generating a new solution. A mixture of two types of operations (job exchange and job insertion, as shown in Table 3 for the sample feasible solution in Table 2) is used to perturb the current solution locally. In order to avoid high unavailability costs when there are unscheduled jobs, job insertion is executed by inserting one of the unscheduled jobs to replace a scheduled job and shifting the jobs to the right on the corresponding machine. If there are no unscheduled jobs, then job exchange is used for perturbation. After each perturbation, the completion times on the corresponding machines are revised according to the new sequence.
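The two moves can be sketched in C++ as follows; the Schedule layout, the helper names and the random-choice details are our own illustrative assumptions rather than the paper's implementation, which also recomputes completion times and re-derives the unscheduled set after each move.

```cpp
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Illustrative only: each machine holds an ordered list of job indices;
// unscheduled jobs (past their deadline) sit in a separate pool.
struct Schedule {
    std::vector<std::vector<int>> machines;  // machines[k] = job sequence on machine k
    std::vector<int> unscheduled;            // jobs currently not assigned
};

std::mt19937 rng{42};

int randomIndex(std::size_t n) {
    return static_cast<int>(std::uniform_int_distribution<std::size_t>(0, n - 1)(rng));
}

// Job exchange: swap two randomly chosen scheduled jobs (possibly on different machines).
void jobExchange(Schedule& s) {
    int mA = randomIndex(s.machines.size());
    int mB = randomIndex(s.machines.size());
    if (s.machines[mA].empty() || s.machines[mB].empty()) return;
    int pA = randomIndex(s.machines[mA].size());
    int pB = randomIndex(s.machines[mB].size());
    std::swap(s.machines[mA][pA], s.machines[mB][pB]);
}

// Job insertion: place a randomly chosen unscheduled job at the position of a
// scheduled job, shifting that machine's remaining jobs to the right.
void jobInsertion(Schedule& s) {
    if (s.unscheduled.empty()) return;
    int u = randomIndex(s.unscheduled.size());
    int job = s.unscheduled[u];
    s.unscheduled.erase(s.unscheduled.begin() + u);
    int k = randomIndex(s.machines.size());
    int pos = s.machines[k].empty() ? 0 : randomIndex(s.machines[k].size());
    s.machines[k].insert(s.machines[k].begin() + pos, job);
}
// After either move, completion times on the affected machines are recomputed
// and any job that now misses its deadline moves back to the unscheduled pool.
```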

As an example, a random feasible solution for 12 jobs and three machines (Table 4) can be obtained by allocating an equal number of jobs to the machines according to their job index, to be used as an initial solution for the SA. In this case, the first four jobs are assigned to machine 1, the second four jobs to machine 2, and the last four jobs to machine 3, and the completion times are then calculated according to this initial sequence.

6. Computational study

To measure the effectiveness of the MATC algorithm, it is compared with the SA metaheuristic with tuned parameters for low (n = 12) and high (n = 60) levels of the number of jobs. Design of experiments (DOE) was performed for estimating the MATC's look-ahead parameters and for setting the SA parameters. A multi-factorial design with four factors and three levels was employed, and their high-level interactions were analyzed to gain insight into the algorithms' performance under different conditions (see Montgomery (2001) for a discussion of DOE).

The proposed and existing heuristic algorithms were implemented in C++. Optimal solutions were obtained by implementing the MILP model (Section 4.2) in OPL Studio 6.3 with the CPLEX 12.1 solver. An Intel Core 2 Duo 2.10 GHz CPU with 2.00 GB of RAM was used to perform the computations. Note that the MILP model could be used only for instances with 12 jobs in order to limit the CPU time to less than 10 min.

6.1. DOE factors

The following four factors are defined for our DOE:

1. Job machine factor l = n/m: the average number of jobs processed per machine.

2. Due date tightness factor a: It characterizes how tight the due dates are and is defined as the coefficient in dj = rj + a · pj, taking into account fuel level and refueling time. The due date tightness a is assessed by the decision maker and should rationally take values greater than 1 to provide enough time to complete the process. A small value of a indicates tight due dates and a large a indicates loose due dates.

3. Due date-to-deadline (d-to-D) window tightness factor c: It characterizes how tight the d-to-D windows are and is defined as c = b − a, where b is the deadline factor, which characterizes how tight the deadlines are, with Dj = rj + b · pj. Alternatively, Dj = dj + c · pj. The d-to-D window can then be calculated as Dj − dj = c · pj. The deadline factor b is assumed to be higher than a, and the d-to-D window factor c is always higher than zero. A small value of c indicates tight d-to-D windows and a large c loose d-to-D windows.

4. Ready time range factor q: a measure of how spread out the ready times are compared to the estimated makespan (Ĉmax). The makespan (Cmax) is the maximum completion time of all released jobs. Since Cmax depends on the schedule and is not known a priori, an estimated makespan (Ĉmax) will be developed in the following subsection.


Table 3. Example of SA perturbations.

Job exchange (jobs 11 and 10 exchanged)
Job index        10    7     9     1       2     5     11      3    8    12   4       6
Machine index    1     1     1     4       2     2     4       3    3    3    4       4
Completion time  45.5  64.5  94.5  99,999  42.5  61.5  99,999  44   76   118  99,999  99,999

Job insertion (unscheduled job 6 inserted in place of job 1)
Job index        11    7     9     6      1       2     5     10    3    8    12   4
Machine index    1     1     1     1      4       2     2     2     3    3    3    4
Completion time  25.5  44.5  74.5  118.5  99,999  42.5  61.5  98.5  44   76   118  99,999

Table 4. Sample random solution of 12 jobs and three machines.

Job index        1     2     3       4      5     6     7       8      9     10    11      12
Machine index    1     1     1       1      2     2     2       2      3     3     3       3
Completion time  52.5  91.5  99,999  135.5  42.5  72.5  99,999  104.5  36.5  73.5  99,999  115.5

Table 5. Coded values of factor levels.

Level    Coded   Actual
                 l    a    c   q
Low      −1      4    1.5  0   0.1
Medium   0       5    2    1   0.2
High     1       6    2.5  2   0.3



6.2. Data generation

1. The number of jobs is set to a low value (n = 12) and a high value (n = 60), and the number of machines is then determined by m = n/l, where l is the job machine factor.

2. The processing times pj are uniformly distributed integers in the interval [15, 45].

3. The weights wj of the jobs are uniformly distributed integers in the interval [1, 9].

4. Given the average of the jobs' processing times (p̄), the average of the jobs' ready times (r̄), the job machine factor (l), and a coefficient (u) that accounts for the effect of the ready times on the makespan (assumed here to be u = 0.1), Ĉmax = (u·r̄ + p̄)·l estimates the Cmax of the problem. A similar approach was used by Lee and Pinedo (1997).

5. Ready times rj are uniformly distributed in the interval [0, 2r̄], where r̄ is the jobs' ready time average. The maximum ready time is derived as 2r̄ = 2·p̄·l·q / (2 − u·l·q), where the ready time range factor is q = 2r̄ / Ĉmax and the estimated makespan is Ĉmax = (u·r̄ + p̄)·l as defined earlier.

6. Due dates are calculated by dj = rj + a·pj and deadlines by Dj = rj + b·pj.
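For concreteness, the generation scheme above might be coded as in the following sketch. The struct and function names are illustrative assumptions; the expected processing time p̄ = 30 comes from the U[15, 45] distribution, and the number of machines would be obtained separately as m = n/l.

```cpp
#include <random>
#include <vector>

// Illustrative instance generator for the scheme of Section 6.2.
struct Instance {
    std::vector<double> p, r, w, d, D;  // processing, ready, weight, due date, deadline
};

Instance generate(int n, double l, double a, double b, double q,
                  double u = 0.1, unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pDist(15, 45);  // processing times ~ U[15,45]
    std::uniform_int_distribution<int> wDist(1, 9);    // weights ~ U[1,9]

    double pBar = (15.0 + 45.0) / 2.0;                      // expected processing time
    double rMax = 2.0 * pBar * l * q / (2.0 - u * l * q);   // 2*rBar from step 5
    std::uniform_real_distribution<double> rDist(0.0, rMax);

    Instance ins;
    for (int j = 0; j < n; ++j) {
        double pj = pDist(rng);
        double rj = rDist(rng);
        ins.p.push_back(pj);
        ins.r.push_back(rj);
        ins.w.push_back(wDist(rng));
        ins.d.push_back(rj + a * pj);   // due date, step 6
        ins.D.push_back(rj + b * pj);   // deadline, step 6
    }
    return ins;
}
```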

6.3. Parameter setting

The tuned values of the parameters employed in the heuristic algorithms have a significant impact on both speed and solution quality. The parameter values that make the algorithms work effectively are determined through extensive experiments in the following subsections.

6.3.1. MATC parameter setting

The MATC parameter setting approach in this paper is similar to other researchers' approaches, such as Lee and Pinedo (1997), Logendran and Subur (2004), and Pfund et al. (2008). The look-ahead parameters depend on the particular problem instance in terms of the job machine factor (l), due date tightness (a), d-to-D window tightness (c) and ready time range factor (q). Thus, an experimental study may be conducted to determine the values of k1 = f1(l, a, c, q), k2 = f2(l, a, c, q) and k3 = f3(l, a, c, q) in (17). The parameter values resulting in the minimum total weighted tardiness for each problem instance are the response values of the experiment. Regression equations mapping the factors of an instance into values of the three look-ahead parameters are determined to estimate the parameters.

For the proposed MATC algorithm, the look-ahead parameters (k1, k2 and k3) are considered as additional factors of the experiment besides the other factors mentioned earlier. Extensive experiments for 60 jobs were conducted over three levels of the job machine factor (l = 4, 5, 6), three levels of the due date tightness factor (a = 1.5, 2.0, 2.5), three levels of the d-to-D window tightness factor (c = 0, 1, 2) and three levels of the ready time range factor (q = 0.1, 0.2, 0.3). The coded factor values given in Table 5 were used to estimate the mathematical models of the look-ahead parameters.

81 (3⁴) unique problem types were defined according to the statistical distributions discussed earlier. Within each problem type, five problem instances were generated, totaling 405 problem instances. The look-ahead parameters have many more levels:

32 levels of k1: {0.2, 0.4, ..., 6.4}
32 levels of k2: {0.2, 0.4, ..., 6.4}
32 levels of k3: {0.2, 0.4, ..., 6.4}

For each of the 405 instances, the total weighted tardiness values of the 32,768 (32³) combinations of k1, k2 and k3 were evaluated. Similar to the study of Lee and Pinedo (1997), all k1, k2 and k3 values that yielded results in the range between the minimum total weighted tardiness (MTWT) and MTWT(1 + d) for each instance were identified, where d is the tolerance factor for the best MTWT values. The averages of these k1, k2 and k3 values are denoted as k̄1, k̄2 and k̄3.

factor and ranges from 0 to 0.065 (Lee & Pinedo, 1997). In this

Table 6. Results of the MATC and SA for problems with n = 12.

l  a    c  q    Avg. rel. error   Min. rel. error  Max. rel. error  Avg. CPU (s)
                MATC    SA        SA               SA               MATC    SA
3  1.5  0  0.1  0.14    0.04      0.01             0.10             0.00    0.07
3  1.5  0  0.3  0.24    0.09      0.00             0.19             0.00    0.08
3  1.5  2  0.1  0.30    0.46      0.14             0.77             0.00    0.11
3  1.5  2  0.3  0.23    1.01      0.40             1.64             0.00    0.11
3  2    0  0.1  1.33    0.28      0.00             0.58             0.00    0.09
3  2    0  0.3  1.40    0.30      0.27             0.38             0.00    0.09
3  2    2  0.1  0.52    5.67      3.09             8.20             0.00    0.10
3  2    2  0.3  1.91    7.21      3.55             11.11            0.00    0.09
6  1.5  0  0.1  0.06    0.00      0.00             0.01             0.00    0.07
6  1.5  0  0.3  0.16    0.05      0.00             0.09             0.00    0.07
6  1.5  2  0.1  0.11    0.04      0.01             0.08             0.00    0.10
6  1.5  2  0.3  0.47    0.15      0.01             0.23             0.00    0.09
6  2    0  0.1  0.12    0.02      0.00             0.03             0.00    0.07
6  2    0  0.3  0.28    0.17      0.02             0.28             0.00    0.08
6  2    2  0.1  0.21    0.15      0.03             0.27             0.00    0.10
6  2    2  0.3  0.62    0.34      0.02             0.62             0.00    0.09
Total average   0.51    1.00      0.47             1.54             0.00    0.09

Table 7. Comparison of the effectiveness of the algorithms for n = 12 problems.

Kruskal–Wallis test on relative error
Method   N    Median    Ave. rank  Z
MATC     160  0.20266   174.4      2.68
SA       160  0.09735   146.6      −2.68
Overall  320            160.5
H = 7.18, DF = 1, P = 0.007; H = 7.21, DF = 1, P = 0.007 (adjusted for ties)

Kruskal–Wallis test on CPU
Method   N    Median     Ave. rank  Z
MATC     160  0.001000   80.5       −15.47
SA       160  0.091750   240.5      15.47
Overall  320             160.5
H = 239.25, DF = 1, P = 0.000; H = 241.19, DF = 1, P = 0.000 (adjusted for ties)

Table 8. Test for equal variances: relative error and CPU versus method.

Test for equal variances: relative error (95% Bonferroni confidence intervals for standard deviations)
Method  N    Lower    St. Dev.  Upper
MATC    160  0.75518  0.85040   0.97188
SA      160  2.93976  3.31041   3.78328
Levene's test (any continuous distribution): test statistic = 4.04; p-value = 0.045

Test for equal variances: CPU (95% Bonferroni confidence intervals for standard deviations)
Method  N    Lower      St. Dev.   Upper
MATC    160  0.0026969  0.0030370  0.0034708
SA      160  0.0181035  0.0203861  0.0232981
Levene's test (any continuous distribution): test statistic = 231.23; p-value = 0.000

Table 9. Results of the MATC and SA for n = 60 problems.

l  a    c  q    Avg. rel. difference   Avg. CPU (s)
                MATC    SA             MATC   SA
3  1.5  0  0.1  0.00    0.63           0.00   0.48
3  1.5  0  0.3  0.00    1.00           0.01   0.50
3  1.5  2  0.1  0.00    0.98           0.01   0.98
3  1.5  2  0.3  0.00    1.50           0.01   0.98
3  2    0  0.1  0.00    1.75           0.01   0.59
3  2    0  0.3  0.00    1.72           0.01   0.61
3  2    2  0.1  0.00    3.20           0.01   0.99
3  2    2  0.3  0.00    7.83           0.01   0.99
6  1.5  0  0.1  0.00    0.33           0.00   0.42
6  1.5  0  0.3  0.00    0.56           0.00   0.44
6  1.5  2  0.1  0.00    0.43           0.00   0.61
6  1.5  2  0.3  0.00    0.55           0.00   0.64
6  2    0  0.1  0.00    0.53           0.00   0.45
6  2    0  0.3  0.00    1.01           0.00   0.47
6  2    2  0.1  0.00    0.56           0.01   0.69
6  2    2  0.3  0.00    1.05           0.01   0.73
Total average   0.00    1.48           0.01   1.48



In order to find the best-fit model, different kinds of transformations of the response values, such as logarithmic, square root, square, reciprocal, and exponential, were attempted. As a result of the regression analysis, the look-ahead parameter equations were obtained as follows:

k1² = 4.514 + 0.693l − 1.626a − 0.921c − 1.499q + 1.064l² − 1.659c² − 0.668lc − 0.670lq + 1.069aq    (18)

k2 = 0.980 + 0.275c + 0.438l² + 1.02c² + 0.304cq    (19)

k3 = (0.437 − 0.152l − 0.090a − 0.144c + 0.598a² + 0.226c² − 0.359la + 0.083ac + 0.100aq)²    (20)
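Assuming the fitted models (18)–(20) are evaluated at the coded factor values of Table 5, the recommended look-ahead parameters for an instance can be computed as in the following sketch; the names are ours, and the guard against a negative right-hand side of (18) is an added safety check, not part of the paper.

```cpp
#include <algorithm>
#include <cmath>

struct LookAhead { double k1, k2, k3; };

// l, a, c, q are the coded factor values of the instance (Table 5).
LookAhead lookAheadParams(double l, double a, double c, double q) {
    double k1sq = 4.514 + 0.693 * l - 1.626 * a - 0.921 * c - 1.499 * q
                + 1.064 * l * l - 1.659 * c * c
                - 0.668 * l * c - 0.670 * l * q + 1.069 * a * q;      // Eq. (18)
    double k2 = 0.980 + 0.275 * c + 0.438 * l * l + 1.02 * c * c
              + 0.304 * c * q;                                        // Eq. (19)
    double k3root = 0.437 - 0.152 * l - 0.090 * a - 0.144 * c
                  + 0.598 * a * a + 0.226 * c * c
                  - 0.359 * l * a + 0.083 * a * c + 0.100 * a * q;    // Eq. (20)
    return { std::sqrt(std::max(k1sq, 0.0)), k2, k3root * k3root };
}
```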


6.3.2. SA parameter setting

Experiments using the standard setting of 15 machines and 60 jobs were conducted to obtain appropriate parameter values for the SA algorithm over three levels of the maximum number of iterations (tmax = 0.1, 1, 10), three levels of the maximum number of inner loop iterations (imax = 5, 10, 15), three levels of the initial temperature coefficient (k = 5000, 10,000, 20,000) and three levels of the temperature cooling coefficient (alpha = 0.7, 0.8, 0.9). For each of the 81 (3⁴) combinations of parameters, five problem instances were solved 30 times by SA starting from random initial solutions.

After performing regression analysis of the total weighted tardiness values, the following parameter values were determined: initial temperature coefficient = 0.1, temperature cooling coefficient = 0.7, maximum number of inner loop iterations = 5, and maximum number of iterations = 20,000.

6.4. Effectiveness of the heuristic algorithms for small size problems

The MILP model could be used to solve only small problems with up to 12 jobs in acceptable time. Thus, for these problems the performance of the algorithms is measured by computing the difference between the objective function value (total weighted tardiness) of the algorithm and that of the optimal schedule as follows:

Relative Error = (TWT_Algorithm − TWT_Optimal) / TWT_Optimal    (21)

The performance of the MATC and SA algorithms was compared in terms of relative error and computation time over the low and high levels of the problem factors. Ten unique problem instances were generated for each of the 16 (2⁴) factor combinations. SA solutions were replicated 10 times for each of the 160 instances, and the average, minimum and maximum relative errors were recorded. The average relative errors (MATC, SA, Min. SA and Max. SA) and average CPU times were found by averaging the values of the 10 instances for each combination. Table 6 summarizes the results.

The results show that SA performed better than MATC in terms of average relative error for most instances; however, the proposed MATC heuristic was superior to SA for loose d-to-D windows with a small job-machine factor. The average CPU times of SA were less than 1 s for each problem instance, but they were much longer (about 60 times longer on average) than those of MATC.

Since the relative error data for each combination do not follow a normal distribution according to the Anderson–Darling normality test, the statistical significance of the performance difference between the algorithms is analyzed using the nonparametric Kruskal–Wallis test (in the Minitab 15.1 statistical software). Table 7 shows that there is a statistically significant difference between the population mean values of the MATC and SA performance in terms of relative error (p = 0.007 < 0.05). SA performed better because the median of the SA relative errors is smaller than the median value of MATC (0.097 < 0.203). On the other hand, MATC has better CPU time performance than SA, having a significantly smaller median value.

Table 10. Comparison of the effectiveness of the algorithms for n = 60 problems.

Kruskal–Wallis test on relative difference
Method   N    Median        Ave. rank  Z
MATC     160  0.000000000   80.5       −15.47
SA       160  0.896564841   240.5      15.47
Overall  320                160.5
H = 239.25, DF = 1, P = 0.000; H = 273.43, DF = 1, P = 0.000 (adjusted for ties)

Kruskal–Wallis test on CPU
Method   N    Median     Ave. rank  Z
MATC     160  0.005000   80.5       −15.47
SA       160  0.608000   240.5      15.47
Overall  320             160.5
H = 239.25, DF = 1, P = 0.000; H = 240.77, DF = 1, P = 0.000 (adjusted for ties)

When the results are analyzed separately for MATC and SA with the Kruskal–Wallis test, the average relative error effectiveness of both algorithms decreases significantly as the number of jobs per machine decreases (from 6 to 3), the due date tightness decreases (a increases from 1.5 to 2), and the ready time tightness increases (q increases from 0.1 to 0.3). When the d-to-D window tightness decreases (c increases from 0 to 2), only the effectiveness of SA decreases significantly. The due date tightness factor has the highest effect on MATC performance, while the d-to-D window tightness factor and the job-machine factor have the highest effect on SA performance. The average CPU time effectiveness of both algorithms does not change significantly between the problem factor levels.

Levene's test, which is suitable when the data come from continuous but not necessarily normal distributions, was used to test the equality of variances, another important measure of the robustness of the algorithms' solutions. The test shows that there is a significant difference between the variances for both the relative error (p = 0.045 < 0.05) and the CPU time (p = 0.000 < 0.05) (Table 8), indicating that MATC is more robust for this problem compared to SA.

6.5. Effectiveness of the heuristic algorithms for large size problems

The same MATC parameter estimations and the same SA parameter values are used to test the effectiveness for large size problems with 60 jobs. Since no optimal solutions exist for these problem sizes, the quality of a solution was measured by its relative difference from the best of the two algorithms, so that performance values are never negative:

Relative Difference = (TWT_Algorithm − min(TWT_MATC, TWT_SA)) / min(TWT_MATC, TWT_SA)    (22)

The relative difference was calculated for each of the 160 problem instances, and the averages were then used for each problem factor combination. In (22), TWT_SA is the average of the SA solutions over 10 replications. The comparison results for large size problems in terms of average relative difference and average CPU time are given in Table 9.

The results suggest that, in contrast to the problems with 12 jobs, MATC clearly provided better solutions in shorter CPU times in all cases (positive relative differences for SA) when compared to SA. Especially for loose d-to-D windows with a low job-machine factor (l = 3) and loose due dates, the solution quality of SA is very low, similar to the results in the earlier subsection for small size problems. Although SA found its solutions in less than 1 s, its CPU times are more than 100 times longer than those of MATC. Table 10 shows that MATC outperformed the SA algorithm in terms of relative difference and CPU time, as is also evident in Table 9.

The average relative difference advantage of the MATC algorithm over SA increases significantly as the number of jobs per machine decreases (from 6 to 3), the due date tightness decreases (a increases from 1.5 to 2), the d-to-D window tightness decreases (c increases from 0 to 2), and the ready time tightness increases (q increases from 0.1 to 0.3). The job-machine and due date tightness factors have the highest effects on the average relative difference.



The average CPU time effectiveness of both algorithms does not change significantly among the problem factor levels.

7. Conclusions

This paper presented the ARSP, a real world problem that requires high quality solutions in an acceptable timeframe. ARSP was modeled as a parallel machine scheduling problem with due date-to-deadline windows and dynamic ready times to minimize total weighted tardiness. As the dimensions of the problem get larger, the mathematical modeling approach loses its effectiveness. A new composite dispatching rule called MATC was developed to obtain good solutions quickly, and its effectiveness was studied by comparing it with an SA metaheuristic through computational experiments. The MATC rule was derived from the ATC rule, keeping its main rationale of calculating a priority index for the jobs waiting to be scheduled, in order to deal with specific ARSP constraints such as the d-to-D window and dynamic ready times.

The computational experiments demonstrate that although MATC has significantly worse deviation from the optimal solution than SA for small size problems, it is more likely to outperform SA when the problem size increases. Moreover, the CPU time of MATC is significantly shorter than that of SA in both cases. Starting from a better-than-random initial solution may be advantageous in problems like ARSP, for which good solutions cannot be easily obtained, since a good initial solution can reduce the CPU time considerably. Therefore, the proposed MATC algorithm has the potential to be used to construct good initial solutions that can then be improved by another metaheuristic such as SA.

In future research, more constraints such as machine compatibility, sequence dependent setup times and unexpected disruptions may be included in the model. Hybrid metaheuristics using MATC as a constructive heuristic may be developed to obtain better results.

References

Baker, K. R., & Bertrand, J. W. (1982). A dynamic priority rule for sequencing against due dates. Journal of Operations Management, 3, 37–42.

Barnes, J. W., Wiley, V. D., Moore, J. T., & Ryer, D. M. (2004). Solving the aerial fleet refueling problem using group theoretic tabu search. Mathematical and Computer Modelling, 39, 617–640.

Blazewicz, J., Ecker, K. H., Pesch, E., Schmidt, G., & Weglarz, J. (2007). Handbook on scheduling: From theory to applications. New York: Springer.

Carroll, D. C. (1965). Heuristic sequencing of jobs with single and multiple components. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge.

Cheng, H. C., Chiang, T. C., & Fu, L. C. (2008). A memetic algorithm for parallel batch machine scheduling with incompatible job families and dynamic job arrivals. In IEEE international conference on systems, man and cybernetics.

Driessel, R., & Mönch, L. (2009). Scheduling jobs on parallel machines with sequence-dependent setup times, precedence constraints, and ready times using variable neighborhood search. In IEEE international conference on industrial engineering and engineering management.

Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness. San Francisco: W.H. Freeman and Company.

Gharehgozli, A. H., Tavakkoli, R., & Zaerpour, N. (2009). A fuzzy-mixed-integer goal programming model for a parallel-machine scheduling problem with sequence-dependent setup times and ready dates. Robotics and Computer-Integrated Manufacturing, 25, 853–859.

Hazir, O., Gunalay, Y., & Erel, E. (2008). Customer order scheduling problem: A comparative metaheuristics study. International Journal of Advanced Manufacturing Technology, 37, 589–598.

Hoitomt, D. J., Luh, P. B., Max, E., & Pattipati, K. R. (1990). Scheduling jobs with simple precedence constraints on parallel machines. IEEE Control Systems Magazine.

Jin, Z., Shima, T., & Schumacher, C. J. (2006). Optimal scheduling for refueling multiple autonomous aerial vehicles. IEEE Transactions on Robotics, 22(4).

Karp, R. M. (1972). Reducibility among combinatorial problems. In R. E. Miller & J. W. Thatcher (Eds.), Complexity of computer computations (pp. 85–103). New York: Plenum Press.

Lee, Y. H., & Pinedo, M. (1997). Theory and methodology: Scheduling jobs on parallel machines with sequence-dependent setup times. European Journal of Operational Research, 100, 464–474.

Logendran, R., & Subur, F. (2004). Unrelated parallel machine scheduling with job splitting. IIE Transactions, 36, 359–372.

Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., & Teller, E. (1956). Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.

Mönch, L., Balasubramanian, H., Fowler, J. W., & Pfund, M. E. (2005). Heuristic scheduling of jobs on parallel batch machines with incompatible job families and unequal ready times. Computers and Operations Research, 32, 2731–2750.

Mönch, L., Zimmermann, J., & Otto, P. (2006). Machine learning techniques for scheduling jobs with incompatible families and unequal ready times on parallel batch machines. Engineering Applications of Artificial Intelligence, 19, 235–245.

Montgomery, D. C. (2001). Design and analysis of experiments. New York: Wiley.

Morton, T., & Pentico, D. (1993). Heuristic scheduling systems: With applications to production systems and project management. New York: John Wiley and Sons.

Pfund, M., Fowler, J. W., Gadkari, A., & Chen, Y. (2008). Scheduling jobs on parallel machines with setup times and ready times. Computers and Industrial Engineering, 54, 764–782.

Pinedo, M. L. (2008). Scheduling: Theory, algorithms, and systems (3rd ed.). New York: Springer.

Rachamadugu, R. V., & Morton, T. E. (1982). Myopic heuristics for the single machine weighted tardiness problem. Carnegie Mellon University Working Paper, GSIA, 30, pp. 82–83.

Reichelt, D., Mönch, L., Gottlieb, J., & Raidl, G. R. (2006). Multi objective scheduling of jobs with incompatible families on parallel batch machines. In EvoCOP 2006. LNCS (Vol. 3906, pp. 209–221). Berlin, Heidelberg: Springer-Verlag.

Silver, E. (2002). An overview of heuristic solution methods. Haskayne School of Business, University of Calgary, Working Paper 2002-15.

Tan, K. C., Lee, L. H., Zhu, Q. L., & Ou, K. (2001). Heuristic methods for vehicle routing problem with time windows. Artificial Intelligence in Engineering, 15, 281–295.

Vepsalainen, A., & Morton, T. (1987). Priority rules for job shops with weighted tardiness costs. Management Science, 33, 1035–1047.