
European Journal of Operational Research 197 (2009) 1150–1165


Identical parallel-machine scheduling under availability constraints to minimize the sum of completion times

Racem Mellouli *,1, Chérif Sadfi, Chengbin Chu, Imed Kacem
Université de technologie de Troyes, Institut Charles Delaunay, CNRS FRE 2848, Laboratoire d'optimisation des Systèmes Industriels (LOSI), 12 rue Marie Curie, BP 2060, 10000 Troyes Cedex, France

Article info

Article history: Available online 7 April 2008

Keywords: Scheduling problem; Availability constraints; Parallel machines; Total completion times


* Corresponding author. E-mail addresses: [email protected], mellouli@utt.fr (R. Mellouli), [email protected] (C. Sadfi), [email protected] (C. Chu), [email protected] (I. Kacem).
1 Ph.D. research supported in part by the Regional Council of Champagne-Ardenne and the European Social Fund.

Abstract

In this paper, we study the identical parallel machine scheduling problem with a planned maintenance period on each machine to minimize the sum of completion times. This paper is a first approach for this problem. We propose three exact methods to solve the problem at hand: mixed integer linear programming methods, a dynamic programming based method and a branch-and-bound method. Several constructive heuristics are proposed. A lower bound, dominance properties and two branching schemes for the branch-and-bound method are presented. Experimental results show that the methods can give satisfactory solutions.

Crown Copyright © 2008 Published by Elsevier B.V. All rights reserved.

1. Introduction

Most machine scheduling models assume that the machines are continuously available. However, this assumption does not always hold in many real industrial situations because of the need to maintain machines during certain periods of the planning horizon. In general, machines may be unavailable for stochastic or deterministic reasons. In the stochastic case, the machines are subject to unpredictable breakdowns, which imply the inability to process production tasks until the machine is recovered. In the deterministic case, machines are subject to preventive maintenance whose starting times and durations are known in advance. Generally, preventive maintenance is planned to preserve the equipment and to ensure a better global availability. According to Kubiak et al. (2002), the deterministic limitation of machine availability is due to preventive maintenance plans or to the overlap of two consecutive time horizons in the rolling time horizon planning context. Rolling horizon planning is frequently employed because most real production systems are dynamic, which means that input data need to be updated during the planning horizon.

Consequently, it is important to have plans which take into consideration the limits of machine availability. Hence, the problems of joint planning of production and maintenance activities, and particularly the scheduling problem with availability constraints, have received the interest of many researchers (see Lee et al., 1997; Sanlaville and Schmidt, 1998; Schmidt, 2000).

This study concerns the single machine and the identical parallel machine scheduling problems to minimize the sum of completion times. Preemption of jobs is not allowed. Without machine availability constraints, these problems are polynomially solvable with the SPT (shortest processing time) rule (Smith, 1956).

For the single machine problem with one deterministic unavailability period, Adiri et al. (1989) proved that the problem is NP-hard if the jobs cannot all finish before the machine unavailability.

Lee and Liman (1992) studied the same problem as Adiri et al. (1989), proposed a simpler proof of the NP-hardness and proved a tight worst-case error bound of 2/7 for the SPT heuristic. Lee (1996) proved that the resumable problem $1|r\text{-}a|\sum w_iC_i$ (with one unavailable time period) is NP-hard even if $w_i = p_i$. He proposed for this problem a dynamic programming method and a heuristic approach and evaluated its worst-case performance.

Wang et al. (2005) studied the single machine problem with availability constraints to minimize the total weighted completion time with several unavailability periods. They showed that the resumable problem is NP-hard in the strong sense. They studied the case with one period of unavailability and preemptive jobs. Two heuristics were provided with worst-case analysis.



Sadfi et al. (2005) studied the problem of Lee and Liman (single machine scheduling to minimize total completion time subject to one unavailability period, denoted $1,h_1\|\sum C_i$). They proposed the MSPT heuristic (modified shortest processing time), based on the best exchange of two jobs located respectively before and after the unavailability period, the initial schedule being obtained with the SPT rule. They showed that the heuristic has a tight error bound of 3/17. Sadfi et al. (2002) also provided a dynamic programming model to optimally solve the same problem; they integrated a variable fixing technique which provides a considerable rate of job fixation and an interesting reduction of the computational time. Sadfi et al. (2004) studied the worst case with many unavailability periods on a single machine and proved that there is no approximation scheme with a finite error bound unless $P = NP$. They then solved the problem $1,h_1\|\sum C_i$ with a branch-and-bound method using MSPT and SRPT (shortest remaining processing time) to provide an upper bound and a lower bound, respectively. Experimental tests showed that instances with up to 1000 jobs can be solved within 5 minutes.

Kacem and Chu (2006) studied a single machine problem with one unavailability period to minimize the total weighted completion time. They proposed a new lower bound for the problem, studied two heuristics (WSPT and MWSPT, modified weighted shortest processing time) and showed that they have the same worst-case error bound. Kacem et al. (2006) studied the same problem and compared three exact methods, respectively based on branch-and-bound, integer linear programming and dynamic programming. Recently, Kacem and Chu (2007a) improved the branch-and-bound algorithm using new heuristics and lower bounds; this algorithm is able to solve instances with up to 6000 jobs. The resumable case was also studied in Kacem and Chu (2007b). Kacem et al. (2005) studied the problem with multiple unavailability periods and proposed branch-and-bound and dynamic programming methods.

For parallel machine scheduling problems, Schmidt (1984) studied the case of different unavailability intervals with job deadlines. He defined a preemptive feasible schedule with a complexity of $O(nm\log n)$, where $n$ is the number of jobs and $m$ the number of machines.

Kaspi and Montreuil (1988) and Liman (1991) showed that the SPT algorithm optimally solves the parallel machine problem of minimizing the total completion time with non-simultaneous machine available times.

Lee and Liman (1993) studied the capacitated two-parallel-machine scheduling problem with the total completion time criterion, where one machine is unavailable from a specified date ($s_2 > 0$) onwards. They proved that the problem is NP-hard, proposed a dynamic programming algorithm to solve it optimally and provided an SPT-based heuristic with an error bound of 1/2. Lee (1996) proposed dynamic programming algorithms for the $P2|r\text{-}a|\sum w_iC_i$ and $P2|nr\text{-}a|\sum w_iC_i$ problems, assuming that the second machine is subject to an unavailability period $[s_2, t_2]$, where $0 \le s_2 < t_2$. Mosheiov (1994) studied the $Pm|r\text{-}a|\sum C_i$ problem under the constraint that machine $j$ is only available in the time interval $[x_j, y_j]$ ($0 \le x_j < y_j$) and showed that the SPT algorithm (for the $m$-machine problem) is asymptotically optimal.

Lee and Chen (2000) studied the joint scheduling of production and maintenance activities on parallel machines to minimize the weighted sum of completion times. Two cases are studied. In the first case, resources are sufficient to maintain all the machines simultaneously. In the second case, at most one machine can be maintained at a time; as a consequence, there is no overlap between the unavailability periods. They showed that both problems are NP-hard even if the jobs have the same weight. They proposed a branch-and-price method using a column generation approach to solve the two problems.

In Section 2, we describe the problem and introduce the notation. In Sections 3–5, we provide, respectively, mixed integer linear programming methods, a dynamic programming method and a branch-and-bound method to optimally solve the problem. In Section 6, experimental results are reported to evaluate the proposed methods.

2. Problem presentation and notations

The problem consists in scheduling $n$ independent jobs $\{J_1, J_2, \ldots, J_n\}$ on $m$ identical parallel machines to minimize the sum of completion times $\sum C_i$. We assume that the jobs $\{J_1, J_2, \ldots, J_n\}$ are all available at time $t = 0$. In addition, a machine can process at most one job at a time and preemption of jobs is not allowed. In the generic version of the problem, each machine may be subject to several unavailability periods during the scheduling horizon. The number of unavailability periods can differ from one machine to another; the unavailability periods do not necessarily have the same durations, nor are they necessarily uniformly distributed over the time horizon. Indeed, although the machines are identical in terms of productivity, they may have different requirements for maintenance activities. We are particularly interested in the case where each machine is subject to exactly one unavailability period (see Fig. 1).

We denote this problem by $P_m,h_{j1}\|\sum C_i$ according to the conventional notation: $P$ for parallel machines, $m$ for $m$ machines, $h$ for hole, i.e. unavailability period, and $h_{j1}$ for one unavailability period on each machine $M_j$.

In this paper, we use the following notations to formulate the problem:

– $J$: the set of the $n$ jobs, $J = \{J_1, J_2, \ldots, J_n\}$;
– $M$: the set of the $m$ machines, $M = \{M_1, M_2, \ldots, M_m\}$;
– $p_i$: the processing time of job $J_i$, $i = 1,\ldots,n$;
– $T_{j,1}$: the starting date of the unavailability period on machine $M_j$;
– $T_{j,2}$: the end date of the unavailability period on machine $M_j$;
– $P_i = \sum_{k=1}^{i} p_k$: the total processing time of jobs $J_1, J_2, \ldots, J_i$.

Fig. 1. The $P_m,h_{j1}\|\sum C_i$ problem.


Fig. 2. SPT sequencing of jobs.



Since there is exactly one unavailability period on each machine, there are two availability periods on each machine, one before and one after the unavailability period. These two availability periods on machine $M_j$ are denoted $\xi_j$ and $\xi_{m+j}$, respectively.

Without loss of generality, we assume that the jobs are indexed in increasing order of processing times. This choice is justified by the following propositions (see Figs. 2 and 3).

Property 1. In an optimal schedule of the $P_m,h_{j1}\|\sum C_i$ problem, the jobs completed in the same available time window are sequenced according to the SPT rule.

Proposition 2. There is an optimal schedule for the $P_m,h_{j1}\|\sum C_i$ problem such that the jobs assigned after the unavailability periods are scheduled according to the SPT rule on all machines using a list scheduling scheme.

Proof. The proposition is a straightforward consequence of the optimality of the SPT rule for the problem with non-simultaneous machine available times (result of Kaspi and Montreuil, 1988). In that problem, the parallel machines are available but with different starting availability dates. □
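As an illustration of Proposition 2, the short Python sketch below list-schedules a set of jobs in SPT order on machines that become available at the dates $T_{j,2}$ and returns their contribution to the total completion time. The function name and the toy data are ours, not the authors'.

```python
import heapq

def spt_after_maintenance(proc_times, release_dates):
    """List-schedule jobs in SPT order on machines that become available
    at the given release dates (the T_{j,2} of the paper); return the
    sum of completion times of these jobs."""
    heap = list(release_dates)       # next available time of each machine
    heapq.heapify(heap)
    total = 0
    for p in sorted(proc_times):     # SPT order
        avail = heapq.heappop(heap)  # earliest available machine
        completion = avail + p
        total += completion
        heapq.heappush(heap, completion)
    return total

# Toy usage: two machines released at 7 and 9, three remaining jobs.
print(spt_after_maintenance([4, 2, 5], [7, 9]))
```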

3. Mixed integer linear programming methods

Solving the problem $P_m,h_{j1}\|\sum C_i$ consists in constructing $2m$ sequences of jobs to be processed in the $2m$ available time windows $\xi_k = [D_k, R_k]$, where

$\forall k = 1,\ldots,m:\ D_k = 0$ and $R_k = T_{k,1}$;
$\forall k = m+1,\ldots,2m:\ D_k = T_{k-m,2}$ and $R_k = ubC$,

and $ubC$ is a given upper bound on the makespan of an optimal solution. At this stage, we take $R_k = D_k + \sum_{i=1}^{n} p_i$ for $k = m+1,\ldots,2m$. In this section, we provide four MILP formulations based on different definitions of the decision variables.

3.1. First formulation (MILP1)

We use as decision variables binary incidence variables on the immediate precedence between jobs. Decision variables are defined as follows:

$x^k_{i,j} = 1$ if job $J_i$ immediately precedes job $J_j$ in sequence $k$, and $0$ otherwise, for $k = 1,\ldots,2m$; $i = 1,\ldots,n-1$; $j = i+1,\ldots,n$ (jobs indexed in SPT order);

$x^k_{0,j} = 1$ if job $J_j$ is the first job in sequence $k$, and $0$ otherwise, for $k = 1,\ldots,2m$; $j = 1,\ldots,n$.

Fig. 3. SPT scheduling of jobs after unavailabilities.


The problem can be formulated as follows:

$\min \sum_{i=1}^{n} C_i$

subject to

(i) $\sum_{k=1}^{2m}\sum_{i=0}^{j-1} x^k_{i,j} = 1 \quad \forall j = 1,\ldots,n$;

(ii) $\sum_{i=0}^{j-1} x^k_{i,j} + \sum_{t=1,\,t\neq k}^{2m}\ \sum_{i=j+1}^{n} x^t_{j,i} \le 1 \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$;

(iii) $\sum_{i=0}^{j-1} x^k_{i,j} \ge \sum_{i=j+1}^{n} x^k_{j,i} \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$;

(iv) $\sum_{j=1}^{n} x^k_{0,j} \le 1 \quad \forall k = 1,\ldots,2m$;

(v) $C_j \ge p_j + \sum_{k=1}^{2m} D_k\,x^k_{0,j} \quad \forall j = 1,\ldots,n$;

(vi) $C_j \ge p_j + C_i + M\left(\sum_{k=1}^{2m} x^k_{i,j} - 1\right) \quad \forall i = 1,\ldots,n-1,\ j = i+1,\ldots,n$;

(vii) $C_j \le \sum_{k=1}^{2m}\sum_{i=0}^{j-1} x^k_{i,j}\,R_k \quad \forall j = 1,\ldots,n$;

$x^k_{i,j} \in \{0,1\} \quad \forall k = 1,\ldots,2m,\ i = 0,\ldots,n-1,\ j = i+1,\ldots,n$.

(i) and (ii) guarantee the uniqueness of the assignment of each job (being the first job of, or preceded by, a unique job (i), and being followed by at most one job (ii), in a unique sequence). (iii) states that if a job precedes another job in a sequence, then it is either the first job of that sequence or preceded by a job in it. (iv) guarantees that the schedule contains at most $2m$ sequences, which cannot be superposed. (v) and (vi) compute the job completion times ($M$ being a sufficiently large constant). (vii) expresses the availability constraints. This model includes $3n + 2m + 4mn + \frac{1}{2}n(n-1)$ constraints, $mn(n+1)$ binary variables and $n$ continuous variables.

3.2. Second formulation (MILP2)

We use as decision variables binary incidence variables on the precedence between jobs; for this formulation the precedence is not necessarily immediate. Decision variables are defined as follows:

$x^k_{i,j} = 1$ if job $J_i$ precedes job $J_j$ in sequence $k$, and $0$ otherwise, for $k = 1,\ldots,2m$; $i = 1,\ldots,n-1$; $j = i+1,\ldots,n$ (jobs indexed in SPT order);

$x^k_{0,j} = 1$ if job $J_j$ is the first job in sequence $k$, and $0$ otherwise, for $k = 1,\ldots,2m$; $j = 1,\ldots,n$.

In addition, the binary variable $\delta_{k,j}$ equals $1$ if job $J_j$ is assigned to sequence $k$, and $0$ otherwise. The problem can be formulated as follows:

$\min \sum_{j=1}^{n} C_j = \sum_{j=1}^{n}\sum_{k=1}^{2m}\left[\delta_{k,j}\,(D_k + p_j) + \sum_{i=1}^{j-1} x^k_{i,j}\,p_i\right]$

subject to

(i) $\sum_{i=0}^{j-1} x^k_{i,j} \le M\,\delta_{k,j} \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$;

(ii) $\sum_{i=0}^{j-1} x^k_{i,j} \ge \delta_{k,j} \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$;

(iii) $\sum_{k=1}^{2m} \delta_{k,j} = 1 \quad \forall j = 1,\ldots,n$;

(iv) $\delta_{k,i} + \delta_{k,j} \ge 2\,x^k_{i,j} \quad \forall k = 1,\ldots,2m,\ \forall i,j = 1,\ldots,n,\ i < j$;

(v) $\delta_{k,i} + \delta_{k,j} \le 1 + x^k_{i,j} \quad \forall k = 1,\ldots,2m,\ \forall i,j = 1,\ldots,n,\ i < j$;

(vi) $D_k + \delta_{k,j}\,p_j + \sum_{i=1}^{j-1} x^k_{i,j}\,p_i \le R_k \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$;

$x^k_{i,j} \in \{0,1\} \quad \forall k = 1,\ldots,2m,\ \forall i,j = 0,\ldots,n,\ i < j$;
$\delta_{k,j} \in \{0,1\} \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,n$.


(i), (ii) and (iii) guarantee the uniqueness of the sequence containing each job. (iv) and (v) establish the relation between the precedence and assignment variables; they consequently guarantee the coherence of the precedence between jobs of a same sequence (transitivity property). (vi) models the availability constraints. This model includes $n + 2mn(n+2)$ constraints and $mn(n+3)$ variables, all of which are binary.

3.3. Third formulation (MILP3)

We use as decision variables binary incidence variables on the positions of the jobs in the sequences. These positions are obtained by reading each sequence backwards, starting from its end. Decision variables are defined as follows:

$x^k_{i,j} = 1$ if job $J_i$ is the $j$th last job in sequence $k$ (the $j$th job when reading sequence $k$ from its end), and $0$ otherwise, for $k = 1,\ldots,2m$; $i = 1,\ldots,n$; $j = 1,\ldots,l_k$, where $l_k$ is the maximum number of jobs insertable in the time window $[D_k, R_k]$. It is obtained easily by inserting the shortest jobs; indeed, the SPT rule maximizes the number of jobs scheduled in a single availability window of finite length. These variables are used by Horn (1973) and Bruno et al. (1974) to formulate the problem $R\|\sum C_j$, where the parallel machines are not identical and are continuously available. We use this variable definition to formulate our problem as follows:

$\min \sum_{i=1}^{n} C_i = \sum_{k=1}^{2m}\sum_{j=1}^{l_k}\sum_{i=1}^{n} x^k_{i,j}\,(D_k + j\,p_i)$

subject to

(i) $\sum_{k=1}^{2m}\sum_{j=1}^{l_k} x^k_{i,j} = 1 \quad \forall i = 1,\ldots,n$;

(ii) $\sum_{i=1}^{n} x^k_{i,j} \le 1 \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,l_k$;

(iii) $\sum_{i=1}^{n}\sum_{j=1}^{l_k} x^k_{i,j}\,p_i \le R_k - D_k \quad \forall k = 1,\ldots,2m$;

(iv) $\sum_{i=1}^{n} p_i\,x^k_{i,j} \ge \sum_{i=1}^{n} p_i\,x^k_{i,j+1} \quad \forall k = 1,\ldots,2m,\ \forall j = 1,\ldots,l_k - 1$;

$x^k_{i,j} \in \{0,1\} \quad \forall k = 1,\ldots,2m,\ i = 1,\ldots,n,\ j = 1,\ldots,l_k$.

(i) models the existence and uniqueness of each job position (assignment). (ii) guarantees that each position is associated with at most one job. (iii) concerns the availability constraints. (iv) is optional: it imposes SPT-ordered jobs within the sequences (dominance property) and forbids idle positions between two occupied ones, and it improves the linear relaxation bounds obtained when solving this model with MILP solver software. This model includes $n + 2\sum_{k=1}^{2m} l_k$ constraints (i.e. $n(1+4m)$ at most) and $n\sum_{k=1}^{2m} l_k$ variables (i.e. $2mn^2$ at most). All variables are binary.
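As an illustration of how MILP3 can be fed to an off-the-shelf solver, the minimal sketch below assumes the open-source PuLP library with its default CBC solver; the toy instance, the helper max_insertable and all names are ours and not part of the paper, and the optional constraint (iv) is omitted.

```python
import pulp

p = [2, 3, 3, 5, 7]                 # processing times, SPT-indexed (toy data)
T1 = [6, 9]                         # maintenance start dates T_{j,1}
T2 = [9, 12]                        # maintenance end dates   T_{j,2}
n, m = len(p), len(T1)
P = sum(p)

# Windows k = 0..2m-1: [D_k, R_k] (first m before, last m after maintenance).
D = [0] * m + T2
R = list(T1) + [t + P for t in T2]

def max_insertable(capacity):
    """l_k: greedily insert the shortest jobs into a window of given length."""
    count, load = 0, 0
    for q in sorted(p):
        if load + q <= capacity:
            load += q
            count += 1
    return count

l = [max_insertable(R[k] - D[k]) for k in range(2 * m)]

prob = pulp.LpProblem("MILP3", pulp.LpMinimize)
idx = [(k, i, j) for k in range(2 * m) for i in range(n) for j in range(1, l[k] + 1)]
x = pulp.LpVariable.dicts("x", idx, cat="Binary")

# Objective: a job in the j-th last position of window k contributes D_k + j*p_i.
prob += pulp.lpSum(x[key] * (D[key[0]] + key[2] * p[key[1]]) for key in idx)
# (i) each job occupies exactly one position
for i in range(n):
    prob += pulp.lpSum(x[key] for key in idx if key[1] == i) == 1
# (ii) each position holds at most one job
for k in range(2 * m):
    for j in range(1, l[k] + 1):
        prob += pulp.lpSum(x[k, i, j] for i in range(n)) <= 1
# (iii) window capacity
for k in range(2 * m):
    prob += pulp.lpSum(x[k, i, j] * p[i] for i in range(n)
                       for j in range(1, l[k] + 1)) <= R[k] - D[k]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("sum of completion times =", pulp.value(prob.objective))
```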

3.4. Fourth formulation (MILP4)

We use as decision variables binary variables indicating the assignment of jobs to available time windows. These decisions are sufficient because, in an optimal solution, jobs are sequenced according to the SPT rule in each available time window (Property 1); in addition, jobs are indexed in this order. Decision variables are defined as follows:

$x_{i,j} = 1$ if job $J_i$ is assigned to $\xi_j$, and $0$ otherwise, for $j = 1,\ldots,2m$; $i = 1,\ldots,n$. The problem can be modelled as follows:

$\min \sum_{i=1}^{n} C_i$

subject to

(i) $\sum_{j=1}^{2m} x_{i,j} = 1 \quad \forall i = 1,\ldots,n$;

(ii) $\sum_{i=1}^{n} x_{i,j}\,p_i \le R_j \quad \forall j = 1,\ldots,m$;

(iii) $\left(D_j + \sum_{k=1}^{i} p_k\right)x_{i,j} - \sum_{k=1}^{i}\ \sum_{t=1,\,t\neq j}^{2m} x_{k,t}\,p_k \le C_i \quad \forall i = 1,\ldots,n,\ \forall j = 1,\ldots,2m$;

$x_{i,j} \in \{0,1\} \quad \forall i = 1,\ldots,n,\ \forall j = 1,\ldots,2m$.

(i) expresses the existence and uniqueness of the assignment of each job to an available time window. (ii) expresses the availability constraints. (iii) is used to compute the completion times $C_i$ of the jobs, respectively before and after the unavailability periods. The model contains $2nm$ binary variables, $n$ continuous variables and $n + m + 2nm$ constraints. This model generalizes the one proposed by Kacem et al. (2005).

Our four proposed MILP formulations contain different numbers of binary variables and constraints. It is important to keep the number of binary variables small, because this number is related to the number of branchings performed by solver software to find the optimal solution.


Fig. 4. Dynamic construction of schedules.


However, the quality of the linear relaxation-based evaluation of the nodes is an important factor in the convergence speed of these methods, and it depends on the decision variable definitions and the corresponding constraint structures. For these reasons, the computational study will compare the methods above.

4. Dynamic programming based method

4.1. The dynamic programming model

In this section, we provide a dynamic programming model for the $P_m,h_{j1}\|\sum C_i$ problem. The maintenance periods divide the scheduling horizon on each machine $M_j$ into two available time windows: $\xi_j = [0, T_{j,1}]$ is the first one and $\xi_{m+j} = [T_{j,2}, T_{j,2} + P_n]$ is the second (see Fig. 4). With this method, partial schedules of the first $k$ jobs $\{J_1, J_2, \ldots, J_k\}$ are dynamically constructed. The algorithm can be formulated as follows.

For $k = 1,\ldots,n$, $j = 1,\ldots,m$, $t_j = 0,\ldots,\min(T_{j,1}, P_k)$ and $t_{m+j} = T_{j,2},\ldots,T_{j,2}+P_k$, we define the following recursive relation:

$\sigma(k, t_1, t_2, \ldots, t_{2m-1}) = \min\begin{cases} \sigma(k-1, t_1 - p_k, t_2, \ldots, t_{2m-1}) + t_1,\\ \quad\vdots\\ \sigma(k-1, t_1, \ldots, t_j - p_k, \ldots, t_{2m-1}) + t_j,\\ \quad\vdots\\ \sigma(k-1, t_1, t_2, \ldots, t_{2m-1} - p_k) + t_{2m-1},\\ \sigma(k-1, t_1, t_2, \ldots, t_{2m-1}) + P_k - \sum_{j=1}^{2m-1} t_j + \sum_{j=1}^{m} T_{j,2}, \end{cases}$

where $\sigma(k, t_1, t_2, \ldots, t_{2m-1})$ is the value of the objective function in an optimal solution of the $k$-job partial problem and $t_j$ is a variable length of the available time window $\xi_j$ ($j = 1,\ldots,2m$).

$\sigma(k-1, t_1, \ldots, t_j - p_k, \ldots, t_{2m-1}) + t_j$ is the value of $\sigma(k, t_1, t_2, \ldots, t_{2m-1})$ if job $J_k$ is scheduled in the available time window $\xi_j$. This value is equal to $+\infty$ if $t_j - p_k < 0$ for $j \le m$, and if $t_j - p_k < T_{j-m,2}$ for $j > m$. We note that the $(2m-1)$-vector $(t_1, t_2, \ldots, t_{2m-1})$ is sufficient to define all completion time limits in a partial schedule. The last completion time limit $t_{2m}$ is given by the relation

$t_1 + \cdots + t_{2m} - (T_{1,2} + \cdots + T_{m,2}) = P_k$.

The initial conditions are

$\sigma(0, \ldots, 0) = 0$, and $\sigma(0, t_1, t_2, \ldots, t_{2m-1}) = +\infty$ if $(t_1, t_2, \ldots, t_{2m-1}) \neq (0, \ldots, 0)$.

The optimal criterion value of the studied problem is equal to $\min_{t_j}\{\sigma(n, t_1, t_2, \ldots, t_{2m-1})\}$. From the above formulation, it can be established that the model has $O\!\left(n\left(\prod_{j=1}^{m} T_{j,1}\right)(P_n)^{m-1}\right)$ states, each of which is computed in $O(m)$ time. Therefore, the complexity of the considered algorithm is $O\!\left(nm\left(\prod_{j=1}^{m} T_{j,1}\right)(P_n)^{m-1}\right)$. We describe in the next paragraph a dominance property to reduce the algorithm complexity.
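The following minimal Python sketch is a forward re-implementation of this recursion under our own bookkeeping: instead of the completion-time limits $(t_1,\ldots,t_{2m-1})$ it stores the loads of the first $2m-1$ windows, which carries the same information. The instance below is a toy of ours; in line with the pseudo-polynomial complexity above, the sketch is only practical for small $m$ and small processing times.

```python
def dp_total_completion(p, T1, T2):
    """Exact DP for Pm,h_{j1}||sum C_i, assuming jobs append in SPT order
    (Property 1).  State = loads of the first 2m-1 windows; the load of the
    last window is implied by the total processing time assigned so far."""
    p = sorted(p)                          # jobs in SPT order
    m = len(T1)
    states = {(0,) * (2 * m - 1): 0}       # window loads -> best partial sum
    total_before = 0                       # P_{k-1}
    for pk in p:
        new_states = {}
        for loads, value in states.items():
            for j in range(2 * m):
                if j < m:                   # before maintenance on machine j+1
                    if loads[j] + pk > T1[j]:
                        continue            # job would overlap the maintenance
                    completion = loads[j] + pk
                elif j < 2 * m - 1:         # tracked post-maintenance window
                    completion = T2[j - m] + loads[j] + pk
                else:                       # last window: load is implied
                    implied = total_before - sum(loads)
                    completion = T2[m - 1] + implied + pk
                if j < 2 * m - 1:
                    nl = list(loads); nl[j] += pk; key = tuple(nl)
                else:
                    key = loads
                cand = value + completion
                if cand < new_states.get(key, float("inf")):
                    new_states[key] = cand
        states = new_states
        total_before += pk
    return min(states.values())

# Single-machine toy: window [0,2] before maintenance, machine back at t = 4.
print(dp_total_completion([1, 2, 3], [2], [4]))   # -> 15
```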

4.2. Dominance property for improving the complexity

It is obvious that the proposed model needs to limit as much as possible the variation intervals of the $t_j$. For this purpose, we define upper bounds for the makespan $C_{max}$ and for the variables $t_{m+j}$, $j = 1,\ldots,m$. We denote by $n^*_b$ (respectively $n^*_a$) the number of jobs scheduled before (respectively after) the unavailability periods in an optimal solution of the $P_m,h_{j1}\|\sum C_i$ problem. In addition, for $j = 1,\ldots,m$, we denote by $t^*_{m+j}$ the completion time of the last job scheduled in the time window $\xi_{m+j}$ in the optimal solution. Let $lbnb = \max\{i$ such that $J_{n-i+1},\ldots,J_n$ can be inserted before the unavailability periods according to the SPT algorithm$^2\}$.

Proposition 3. $lbnb \le n^*_b$ and $i_0 = n - lbnb \ge n^*_a$.

2 The definition of the SPT algorithm is specific to our problem. An explicit definition is presented in Section 5.2.


Proof. The two assertions of the proposition are equivalent; we prove the first one. Assume that $lbnb > n^*_b$. Then the $n^*_b$ jobs which are before the unavailability periods in the optimal schedule $\sigma^*$ can be sequenced according to the SPT algorithm, which is a better configuration. Thus, the schedule $\sigma^*$ cannot be optimal. □

Let $C_{j,max}$ be the completion time of the last job on machine $M_j$ obtained by scheduling the $i_0$ jobs $J_{n-i_0+1},\ldots,J_n$ after the unavailability periods according to the SPT algorithm ($C_{j,max} = T_{j,2}$ if no job is scheduled in $\xi_{m+j}$, $j = 1,\ldots,m$). Let $(ubC_1,\ldots,ubC_m) = f(C_{1,max},\ldots,C_{m,max})$ and $(t^*_{(m+1)},\ldots,t^*_{(2m)}) = f(t^*_{m+1},\ldots,t^*_{2m})$, where $f$ is the function that sorts a set of values in increasing order.

Proposition 4. For any $j = 1,\ldots,m$, we have $t^*_{(m+j)} \le ubC_j$ and $C_{max} \le ubC = ubC_m$.

Proof. We prove the first assertion; the second one is a consequence of the first. $i_0 = n - lbnb$ is an upper bound on the number $n^*_a$ of jobs scheduled after the maintenance periods in the optimal solution. We compare two schedules. The first is the part of the optimal solution scheduled after the maintenance periods (this partial schedule is constructed according to the SPT algorithm and contains $n^*_a$ jobs). The second schedules the $i_0$ longest jobs $J_{n-i_0+1},\ldots,J_n$ after the maintenance periods according to the SPT algorithm. We denote by $p_{[1]},\ldots,p_{[n^*_a]}$ and $p'_{[1]},\ldots,p'_{[i_0]}$, respectively, the processing times of the jobs used in the first and the second schedules, indexed according to the SPT order. Since $i_0 \ge n^*_a$ and $p_{[k]} \le p'_{[k+i_0-n^*_a]}$ for all $k = 1,\ldots,n^*_a$, the comparison of the sorted machine makespans of the two schedules completes the proof. □

4.3. Complexity

During the execution of the algorithm, all useful values of the partial objective functions $\sigma(k, t_1, t_2, \ldots, t_{2m-1})$ are computed. We note that $ubC$ is an upper bound on the variables $t_{m+j}$. Then, the complexity of the algorithm, which is pseudo-polynomial if $m$ is fixed, becomes

$O\!\left(nm\left(\prod_{j=1}^{m} T_{j,1}\right)\left(\prod_{j=1}^{m-1}(ubC - T_{j,2})\right)\right)$.

5. Branch-and-bound method

In this section, we provide a branch-and-bound method for the $P_m,h_{j1}\|\sum C_i$ problem. We first present a heuristic to compute an initial upper bound. Then, we develop a lower bounding scheme to compute a lower bound for each node of the search tree. Finally, we present two branching strategies with their corresponding node representations and the adopted exploration strategy.

5.1. A SPT-based heuristic

A simple definition of the SPT heuristic for our problem is to assign jobs, in the SPT order (in increasing order of processing times), to the first available machine (Smith, 1956). Each assignment is definitive if it is feasible (satisfying the availability constraint and the non-preemption of jobs). We can adopt this rule for the case of identical unavailability periods (having the same start dates and durations). However, when the machines are subject to non-identical unavailability periods, this rule may give different results because several machines may be eligible for the assignment. In our problem, the machines have the same availability date at the beginning but different available times before their unavailability periods. This has an impact on SPT-based schedules if the machine processing the current job is chosen arbitrarily. The heuristics that we propose use the SPT rule with different ways of choosing the machine to process the current job.

We define four priority rules to choose the machine to process the current job from the SPT list:

– Rule 1 (SPT1): the machine chosen is the one with the earliest starting availability date, ties being broken by the shortest remaining availability duration;
– Rule 2 (SPT2): the machine chosen is the one with the earliest starting availability date, ties being broken by the longest remaining availability duration;
– Rule 3 (SPT3): the machine chosen is the one with the shortest remaining availability duration, ties being broken by the earliest starting availability date;
– Rule 4 (SPT4): the machine chosen is the one with the longest remaining availability duration, ties being broken by the earliest starting availability date.

SPT1 and SPT2 tend to "homogenize" the availability dates of the machines, i.e. to keep them close to one another, so that the completion times of the scheduled jobs increase "slowly". However, we do not know whether they guarantee a good and controlled insertion of jobs before the maintenance periods. SPT4 tends to "homogenize" the remaining availability durations and therefore to balance the assignment possibilities over all machines. SPT3 applies, in a sense, the SPT algorithm machine by machine, starting with the narrowest assignment windows; knowing that processing times become longer and longer, SPT3 tends to avoid situations where unscheduled jobs can no longer be inserted in these narrow windows when they become eligible for assignment. All these heuristics have the same time complexity, $O(n\log n + nm)$. Because of this low time complexity, we define our SPT-based heuristic as the heuristic that runs all these rules and returns the best solution.
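A sketch of the four rules is given below, under two assumptions of ours that the text leaves implicit: the choice is restricted to machines on which the current job still fits before the maintenance period, and a job that fits nowhere is deferred and list-scheduled after the maintenance periods (Proposition 2). Function and variable names are ours.

```python
import heapq

def spt_rule_heuristic(p, T1, T2, rule):
    """Return the total completion time of the schedule built with SPT and
    the machine-choice rule 1..4 described above (our reading of the rules)."""
    m = len(T1)
    avail = [0] * m                          # current availability date
    total, deferred = 0, []

    def rem(j):                              # remaining availability duration
        return T1[j] - avail[j]

    for job in sorted(p):                    # SPT order
        candidates = [j for j in range(m) if avail[j] + job <= T1[j]]
        if not candidates:
            deferred.append(job)             # assumed fallback: defer the job
            continue
        if rule == 1:    # earliest availability, tie: shortest remaining
            j = min(candidates, key=lambda j: (avail[j], rem(j)))
        elif rule == 2:  # earliest availability, tie: longest remaining
            j = min(candidates, key=lambda j: (avail[j], -rem(j)))
        elif rule == 3:  # shortest remaining, tie: earliest availability
            j = min(candidates, key=lambda j: (rem(j), avail[j]))
        else:            # rule 4: longest remaining, tie: earliest availability
            j = min(candidates, key=lambda j: (-rem(j), avail[j]))
        avail[j] += job
        total += avail[j]

    # List-schedule the deferred jobs after the maintenance periods (SPT order).
    heap = list(T2)
    heapq.heapify(heap)
    for job in deferred:
        c = heapq.heappop(heap) + job
        total += c
        heapq.heappush(heap, c)
    return total

# Toy usage: best of the four rules on a small two-machine instance.
p, T1, T2 = [1, 2, 2, 4, 5, 6], [5, 5], [8, 8]
print(min(spt_rule_heuristic(p, T1, T2, r) for r in (1, 2, 3, 4)))
```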

5.2. SPT0 heuristic

In the previous section, we showed that the simple definition of the SPT heuristic may give arbitrary solutions when the machines are subject to non-identical unavailability periods. We now present a way to make this rule more precise.


Fig. 5. SPT schedules with arbitrary machine indexations.


In this section, following another line of reasoning, we propose a new definition of the SPT heuristic adapted to our problem, which we call the "SPT0 heuristic". Since the machines in our problem are identical, indexing them in different ways does not change the problem. The approach used for the SPT0 heuristic therefore does not look for a precise assignment rule on the machines; rather, it considers that the indexation of the machines must be adapted to obtain a good representation of the SPT schedule.

Fig. 5 gives an example of different SPT schedules obtained by arbitrary indexations of the machines. We note that we can obtain $m!$ schedules by straightforward applications of the SPT algorithm defined by Smith.

SPT0 heuristic principle: Our SPT0 heuristic aims, with an adapted construction, to provide an SPT schedule representation which is similar to the one provided by Smith for the problem $P\|\sum C_i$ (see Fig. 6), and which maximizes the number of jobs inserted before the maintenance periods. One logical way is to switch, after each assignment, the maintenance periods between machines by aligning the unavailability dates $T_{j,1}$ with the completion times of the created job sequences (see Fig. 7).

Fig. 7. SPT0 schedule construction (steps 1 and 2).

Fig. 6. SPT schedule of Smith for the problem $P\|\sum C_i$.


Fig. 8. SPT0 schedule construction (step 3).


In each iteration i of the SPT0 algorithm, we distinguish three steps:

– Step 1: Scheduling of job $J_i$. For $k = 1,\ldots,m$, $D^{i-1}_k$ is the availability date of machine $M_k$ before the assignment of $J_i$. We schedule $J_i$ on the machine $M_{k_0}$ having the smallest availability date: $k_0 = \arg\min_{1\le k\le m} D^{i-1}_k$. Then $D^i_{k_0} = p_i + D^{i-1}_{k_0}$ and, for $k \neq k_0$, $D^i_k = D^{i-1}_k$. The completion time of job $J_i$ is $C_i = D^i_{k_0}$.
– Step 2: Switching of maintenance periods. $m'(i)$ is the new number of non-fixed maintenance periods ($m'(0) = m$). For $k = 1,\ldots,m'(i)$, $T_{a(k,i),1}$ is the $k$th smallest starting date of the non-fixed maintenance periods, and $D^i_{b(k,i)}$ is the $k$th smallest availability date of the machines not associated with a fixed maintenance period. For $k = 1,\ldots,m'(i)$, we switch the maintenance period $[T_{a(k,i),1}, T_{a(k,i),2}]$ temporarily onto machine $M_{b(k,i)}$.
– Step 3: Fixing test of maintenance periods. When scheduling job $J_{i+1}$ violates the availability constraints (i.e. $\exists k = 1,\ldots,m'(i)$ such that $T_{a(k,i+1),1} < D^{i+1}_{b(k,i+1)}$), then for every $k = 1,\ldots,m'(i)$ such that $T_{a(k,i+1),1} < D^{i+1}_{b(k,i+1)}$, the maintenance period $[T_{a(k,i),1}, T_{a(k,i),2}]$ is fixed to machine $M_{b(k,i)}$. In addition, these machines are reconfigured to receive jobs after their maintenance period: $D^i_{b(k,i)} = T_{a(k,i),2}$.

Fig. 7 illustrates Steps 1 and 2 of the algorithm with an example. It shows that jobs are scheduled according to the SPT rule and that the maintenance periods are switched between the machines to align the machine availability dates. In Fig. 8, if job 6 is scheduled using Steps 1 and 2, the availability constraints are violated for the maintenance periods [3,6] and [5,8]. These two maintenance periods are therefore fixed on the machines they were associated with at iteration 5. Machines $M_1$ and $M_2$ are reconfigured to receive jobs from the dates 8 and 6, respectively. The maintenance period [14,17] remains switchable, but only on machines not associated with a fixed maintenance period. Since, in this example, only one such machine remains, we can consider that the maintenance period [14,17] is also fixed on that machine.

5.3. MSPT2 heuristic

To initialize the upper bound of our branch-and-bound method, we define the MSPT2 heuristic, which improves on the results of all the heuristics proposed above. It is constructed in three steps:

– Step 1: Reference schedule. We construct one of the SPTi schedules ($i = 0,\ldots,4$).
– Step 2: 2-OPT transformation. We apply a 2-OPT improving technique to the reference schedule. This technique consists in identifying, for each machine, two jobs scheduled respectively before and after the maintenance period of this machine such that exchanging them leads to the largest reduction of the total completion time. For each exchange, the two new partial sequences obtained on the machine are ordered according to the SPT rule. This technique is used by Sadfi et al. (2005) in their MSPT algorithm for the $1,h_1\|\sum C_i$ problem.
– Step 3: Dominance property-based improvement. In the obtained schedule, jobs are sequenced according to the SPT rule in each available time window. This schedule may be improved by rescheduling the jobs located after the maintenance periods in the SPT order.

MSPT2 runs all the SPTi schedules ($i = 0,\ldots,4$) in Step 1 and returns the best result at the end.
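The following sketch illustrates the 2-OPT exchange of Step 2 on a single machine, under our reading that an exchange is accepted only if the jobs kept before the maintenance period still fit; the function names and the toy data are ours.

```python
def machine_cost(before, after, t2):
    """Sum of completion times on one machine, both parts in SPT order;
    the part after the maintenance starts at t2."""
    cost, t = 0, 0
    for p in sorted(before):
        t += p
        cost += t
    t = t2
    for p in sorted(after):
        t += p
        cost += t
    return cost

def best_exchange(before, after, t1, t2):
    """Try every (before-job, after-job) swap that keeps the before part
    within [0, t1] and return the cheapest resulting pair of job lists."""
    best = (machine_cost(before, after, t2), list(before), list(after))
    for i, b in enumerate(before):
        for j, a in enumerate(after):
            if sum(before) - b + a > t1:      # swapped-in job must still fit
                continue
            nb = before[:i] + before[i + 1:] + [a]
            na = after[:j] + after[j + 1:] + [b]
            cost = machine_cost(nb, na, t2)
            if cost < best[0]:
                best = (cost, nb, na)
    return best

# Toy usage: job 3 before a maintenance [8, 20], jobs 5 and 6 after it;
# exchanging 3 and 5 lowers the machine cost from 59 to 57.
print(best_exchange([3], [5, 6], 8, 20))
```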

5.4. Maintenance relocation lower bound

A natural lower bound could be obtained by allowing job preemption. For the single machine problem, the resumable problem $1,h_1|pmtn|\sum C_i$ is polynomially solvable to optimality by an SRPT-like method (shortest remaining processing time). Concerning the resumable problem $P_m,h_{j1}|pmtn|\sum C_i$, the complexity issue is still open. As shown by Brucker and Kravchenko (1999), preemption can make parallel machine scheduling problems harder. For example, the problem $P|p_j = p, pmtn|\sum w_jU_j$ is binary NP-hard while the non-preemptive counterpart can be solved in $O(n\log n)$ time, where $n$ is the number of jobs. In addition, Wang et al. (2005) showed that the resumable problem $1,h_1\|\sum w_iC_i$ is NP-hard in the strong sense while its non-resumable counterpart can be solved in pseudo-polynomial time with a dynamic programming method.

Concerning our problem, we find that the SRPT-based policy (shortest remaining processing time) does not give a lower bound for the problem. We consider the following example.

$J_i$      1   2   3   4   5   6
$p_i$      1   2   2   4   5   6

$M_j$        1   2
$T_{j,1}$    5   5
$T_{j,2}$    T   T

In Fig. 9, the value obtained by an SRPT-like rule is $F(SRPT) = 17 + 3T$ while the optimal solution value is $F^* = 24 + 2T$; for $T$ large enough, $F(SRPT) > F^*$, so this rule does not yield a lower bound. We thus consider other relaxations to compute a lower bound. Let $(P_1)$ denote the problem that we study ($P_m,h_{j1}\|\sum C_i$) and $(P_2)$ the problem that considers the maintenance periods as fictive jobs $J_{n+j}$ with processing times $p_{n+j} = T_{j,2} - T_{j,1}$ ($j = 1,\ldots,m$). $(P_2)$ consists in


Fig. 9. A counterexample for the SRPT-like algorithm.


scheduling the set of $n+m$ jobs on the $m$ machines, with the constraints: for all $j = 1,\ldots,m$, the maintenance job $J_{n+j}$ starts at $T_{j,1}$ on machine $M_j$.

Let $F_\sigma(P)$ be the value of the objective function of a schedule $\sigma$ for problem $(P)$ and $F^*(P)$ its optimal criterion value. Hence, clearly we have

$F^*(P_2) = F^*(P_1) + \sum_{j=1}^{m} T_{j,2}$.

Let $(RP_2)$ be the problem obtained by relaxing the constraints on the maintenance jobs in problem $(P_2)$.

Proposition 5 (The lower bound expression). $F_{SPT}(RP_2) - \sum_{j=1}^{m} T_{j,2} \le F^*(P_1)$.

Proof. The problem $(RP_2)$ is optimally solved by the SPT algorithm. In addition, $F^*(RP_2) \le F^*(P_2)$. □

This bound is a generalization of the lower bound of Kacem et al. (2005), which was proposed for the $1,h_1\|\sum w_iC_i$ problem.
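A minimal sketch of this lower bound, with names and toy data of ours: the maintenance periods become fictive jobs, the relaxed problem $(RP_2)$ is solved by SPT list scheduling, and $\sum_{j=1}^{m} T_{j,2}$ is subtracted.

```python
import heapq

def maintenance_relocation_lb(p, T1, T2):
    """Maintenance relocation lower bound (MRLB) for Pm,h_{j1}||sum C_i."""
    jobs = list(p) + [t2 - t1 for t1, t2 in zip(T1, T2)]  # fictive jobs
    heap = [0.0] * len(T1)                                 # m free machines
    heapq.heapify(heap)
    total = 0.0
    for q in sorted(jobs):                                 # SPT solves RP2
        c = heapq.heappop(heap) + q
        total += c
        heapq.heappush(heap, c)
    return total - sum(T2)

# Toy usage on the counterexample instance of Fig. 9 with T = 10 (-> 41.0).
print(maintenance_relocation_lb([1, 2, 2, 4, 5, 6], [5, 5], [10, 10]))
```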

5.5. Dominance properties

First, we apply the three properties presented in Sections 2 and 4 as dominance rules.

Property 6. For a complete schedule, if there exists a machine on which the number of jobs processed after the unavailability period is strictly higher than the number processed before it, while the total processing time before the unavailability period is greater than or equal to that after it, then the schedule is dominated.

For a machine $j = 1,\ldots,m$, let $B$ and $A$ be the sets of jobs scheduled before and after the unavailability period, respectively. The condition of non-optimality is: $\exists B' \subseteq B$ such that

$|B'| < |A|$ and $\sum_{J_i\in B'} p_i \ge \sum_{J_i\in A} p_i$.

Proof. It is enough to exchange $A$ and $B'$ to obtain a better solution. □
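The condition can be tested cheaply: it suffices to examine the $|A|-1$ largest jobs of $B$, since if any admissible subset $B'$ reaches the required total, this particular one does. The sketch below (names are ours) implements that test for one machine.

```python
def is_dominated(before, after):
    """Property 6 test: True if some B' with |B'| < |A| and total processing
    time >= that of A exists among the jobs scheduled before the maintenance."""
    if not after:
        return False
    k = min(len(after) - 1, len(before))      # size of the largest allowed B'
    best_subset_sum = sum(sorted(before, reverse=True)[:k])
    return best_subset_sum >= sum(after)

# Jobs (5, 4) before and (3, 3, 2) after: dominated, since 5 + 4 >= 3 + 3 + 2
# with only 2 < 3 jobs used.
print(is_dominated([5, 4], [3, 3, 2]))   # True
```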

5.6. The first branching scheme

For each job, there are $m+1$ possible assignments:

– the assignment "$j$", with $j = 1,\ldots,m$, i.e. on machine $M_j$ before its maintenance period;
– the assignment "$m+1$", i.e. after the maintenance periods.

A node of level k represents a partial schedule where k jobs are fixed (see Fig. 10).

Fig. 10. The first branching scheme.


Fig. 11. The second branching scheme.


Considering the optimality rules described in Properties 1 and 2, jobs are indexed in increasing order of processing times. Therefore, branching from a node of level $k$ consists in deciding the assignment of the $(k+1)$th job.

With each node of level $k$ is associated an "assignment" vector of dimension $k$ that represents the partial solution composed of jobs 1 to $k$. Indeed, this code makes it possible to construct a unique partial schedule. The exact assignment of the jobs which take the assignment value "$m+1$" is given by applying the SPT algorithm after the maintenance periods (Proposition 2).

The relevance of this branching scheme comes from the fact that it exclusively generates solutions that verify the optimality Properties 1 and 2. This scheme thus restricts the search space by eliminating a significant part of the dominated solutions without applying the corresponding optimality tests during the tree construction.

The adopted exploration strategy is "best first".

5.7. The second branching scheme

In this scheme, a node of level $k$ is represented by a sequence of $k$ jobs chosen in order among the $n$ jobs. Consequently, the separation of this node consists in choosing the $(k+1)$th job to add to the sequence, which implies branching from a father node of level $k$ to generate $(n-k)$ child nodes (see Fig. 11).

To be a valid branching scheme, the optimal solution must exist among the generated nodes. The scheme that we propose generates all feasible active schedules (which contain at least one optimal solution). This strongly depends on the manner of establishing the relationship between the representative sequence codes and the partial schedules constructed in the tree. The representative sequence of a node is chosen to be a unique identifier of a given partial schedule.

We propose to define a bijective correspondence between sequence codes and schedules, consisting of:

– An assignment rule able to construct, from a sequence code, its corresponding partial schedule. This rule is defined as follows. We fix an arbitrary running order of the machines, for example simply $1, 2, 3, \ldots, m$. Thereafter, jobs are assigned, according to their order in the sequence, to the earliest available machine; ties are broken with the smallest machine number rule. The assignment is made if it is feasible.
– Reciprocally, a coding scheme adequate to this assignment rule is defined to recover the sequence code from the knowledge of the partial schedule. It is defined as follows: from a partial or complete schedule, the sequence code is built by ordering jobs in increasing order of their starting times ($s_i = C_i - p_i$); ties are broken using the smallest machine number rule.

Knowing that jobs are scheduled according to the SPT algorithm after the unavailability periods in an optimal solution (Proposition 2), it is possible to restrict the sequence code of a complete schedule to the jobs scheduled before the unavailability periods. We call this portion of the code the "reduced code". It is sufficient to represent a schedule. Moreover, it avoids generating dominated schedules, according to the second dominance property.

Illustrative example. We consider the following instance:

$J_i$      1   2   3   4   5   6   7   8   9   10
$p_i$      1   1   1   2   3   3   4   4   5   5

$M_j$        1   2   3
$T_{j,1}$    5   5   7
$T_{j,2}$    7   8   8

The adopted running order of the machines is $1 \prec 2 \prec 3$. We consider the following sequence code: $6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4 \prec 8 \prec 10 \prec 9$ (the symbol "$\prec$" means "precedes in the sequence"). Starting from this code and using the defined assignment rule, we construct the schedule given in Fig. 12.


Fig. 12. Illustrative example.


In the complete schedule of Fig. 12, the jobs processed after the maintenance periods verify the SPT rule. Thus, it is possible to represent this schedule by the reduced code $6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4$.

Reciprocally, from the schedule, the corresponding sequence code can be found as follows:

$s_1 = s_2 = s_6 < s_3 = s_5 < s_7 < s_4 < s_8 < s_9 = s_{10}$.

$J_1$, $J_2$ and $J_6$, which start at the same time, are processed on machines $M_2$, $M_3$ and $M_1$, respectively. According to the smallest machine number rule, we have $6 \prec 1 \prec 2$. With the same reasoning, we have $5 \prec 3$ and $10 \prec 9$. The sequence code is therefore $6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4 \prec 8 \prec 10 \prec 9$.

As a consequence, a code is associated with a unique partial schedule and vice versa. The proposed coding scheme is precise enough to distinguish even some equivalent schedules by different sequence codes. For example, consider the two equivalent schedules $6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4 \prec 8 \prec 10 \prec 9$ and $6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4 \prec 8 \prec 9 \prec 10$. Contrary to the first schedule, $J_9$ and $J_{10}$ are processed on $M_2$ and $M_3$, respectively, in the second schedule. The use of reduced codes avoids generating such equivalent solutions, while the first branching scheme does not.
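The sketch below decodes a sequence code into starting times with the assignment rule above; the fallback for a job that fits before no maintenance period (it is list-scheduled after the maintenance periods) is our reading of the example of Fig. 12, not a rule stated verbatim in the text. On the instance above it reproduces the starting times $s_1 = s_2 = s_6 = 0 < s_3 = s_5 < s_7 < s_4 < s_8 < s_9 = s_{10}$.

```python
def decode(code, p, T1, T2):
    """Turn a sequence code (1-indexed job numbers) into starting times."""
    m = len(T1)
    avail_before = [0] * m                  # next free date before maintenance
    avail_after = list(T2)                  # next free date after maintenance
    start = {}
    for job in code:
        q = p[job - 1]
        fits = [j for j in range(m) if avail_before[j] + q <= T1[j]]
        if fits:                            # earliest machine, ties: smallest index
            j = min(fits, key=lambda j: (avail_before[j], j))
            start[job] = avail_before[j]
            avail_before[j] += q
        else:                               # assumed fallback: after maintenance
            j = min(range(m), key=lambda j: (avail_after[j], j))
            start[job] = avail_after[j]
            avail_after[j] += q
    return start                            # job -> starting time s_i

p = [1, 1, 1, 2, 3, 3, 4, 4, 5, 5]
T1, T2 = [5, 5, 7], [7, 8, 8]
code = [6, 1, 2, 5, 3, 7, 4, 8, 10, 9]
print(decode(code, p, T1, T2))
```

Re-encoding these starting times (increasing $s_i$, ties broken by machine index) recovers the code of the illustrative example.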

5.8. Comparison between the two branching schemes

The first branching scheme generates at worst $(m+1)^n$ complete solutions which verify the optimality Properties 1 and 2 concerning the SPT-rule assignment of jobs. However, among these solutions, a huge number are non-feasible (violating the availability constraints) or symmetrical. In addition, to build a complete solution, it is necessary to go down to level $n$ in the search tree.

The second branching scheme generates at most $n!$ complete solutions. With the reduced codes, it is not necessary to go down to level $n$ in the search tree to build a complete schedule: we can stop branching when the machines can receive no more jobs before the unavailability periods. Let $ubnb$ be the maximum number of jobs which can be inserted before the unavailability periods; we have $ubnb < n$. This branching scheme generates fewer than $ubnb!$ complete solutions.

In addition, this scheme only generates feasible solutions, but it does not take Properties 1 and 2 into account, which can be remedied by including them as optimality tests. Furthermore, this scheme eliminates some symmetrical solutions, precisely those whose symmetry is located after the unavailability periods.

5.9. Eliminating symmetrical solutions in the second branching scheme

We are interested in the symmetrical solutions which are due to the presence of identical jobs, i.e. several occurrences of the same job. For example, in the previous illustrative example, jobs 5 and 6 are identical; therefore the schedules

$6 \prec 1 \prec 2 \prec 5 \prec 3 \prec 7 \prec 4$

and

$5 \prec 1 \prec 2 \prec 6 \prec 3 \prec 7 \prec 4$

are equivalent. To avoid generating equivalent (symmetrical) solutions in the search tree, an improving procedure applicable to the second branching scheme can be proposed.

This procedure consists in:

– Identifying the set of identical jobs and associating with them their occurrence numbers in the job set.

$J_i$   1   2   3   4   5   6   7   8   9   10
$p_i$   1   1   1   2   3   3   4   4   5   5

Identical jobs      [1]   [2]   [3]   [4]   [5]
Occurrence number    3     1     2     2     2
$p_i$                1     2     3     4     5

– Renaming the jobs and the identical jobs by their processing times.

Identical jobs   1   2   3   4   5
Jobs             1   1   1   2   3   3   4   4   5   5

– Branching by adding identical jobs to the partial sequences while taking their occurrence numbers into consideration.


Fig. 13. Symmetry reduction.


With this improved coding approach and its corresponding branching scheme, the two previous schedules given as an example are represented only once in the search tree, with the representative code $3 \prec 1 \prec 1 \prec 3 \prec 1 \prec 4 \prec 2$. As shown in Fig. 13, this procedure provides a significant reduction of the tree.

Other types of symmetries are possible, such as those due to identical assignment configurations (the same start and end dates of availability on two machines). This type of symmetry is resolved after the unavailability periods by the new branching scheme, since it stops with the representation of complete solutions by reduced codes.

We note that the use of identical jobs is not efficient for the first branching scheme, because the branching is done according to the choice of the machines processing the jobs. All possible assignments must then be considered, and we cannot reduce the width of the search tree. Moreover, we cannot efficiently reduce its depth, because all jobs must be assigned.

6. Computational results

The experiments are carried out on an Intel Pentium 4 PC with a 3 GHz processor and 2 × 512 MB of RAM. Instances are generated as follows:

– The job number $n$ varies from 10 to 100.
– The machine number $m$ is either 2, 3 or 4. In fact, because of the problem complexity, we are especially interested in two- or three-machine problems.
– The job processing times are randomly generated in the interval $[1, p_{max}]$ with $p_{max} = 20$, according to a uniform distribution.
– The unavailability periods are not identical on the machines; they have different starting dates and durations. The duration of each period is proportional to the mean processing time of the jobs, the proportionality coefficient being randomly chosen among 0.5, 1 and 2. The positions of these periods are randomly generated between 0 and the scheduling horizon $H$, which is approximately equal to $\left(\sum_{i=1}^{n} p_i\right)/m$. The random generation follows a discrete uniform distribution. Instances are grouped according to the mean ratio of the maintenance positions with respect to the scheduling horizon; the considered position values $\alpha$ range from about 0.1 to 0.7 of $\left(\sum_{i=1}^{n} p_i\right)/m$. The numbers of simulations done for each group are the following:

α      0.1–0.2   0.2–0.3   0.3–0.5   0.5–0.7
Nbr       15        10         5         5

The mean maintenance position of all generated instances is approximately $0.25 \pm 0.05$ of $\left(\sum_{i=1}^{n} p_i\right)/m$. Generally, for this position there is a balanced distribution of the jobs before and after the maintenance periods.

The different experimental results are summarized with mean values in Tables 1–4. The symbol "*" indicates the cases where the program is interrupted due to memory requirements (out of memory) for some difficult instances. The symbol "–" indicates that the experiment is stopped because a computation time limit (5000 seconds) is reached.

is interrupted due to memory requirement (out of memory) for some difficult instances. The symbol ‘‘–” indicates that the experimentationis stopped because a computation time limit (5000 seconds) is reached.


Table 1
Heuristics and lower bound performances

m  n    SPT0      SPT1      SPT2      SPT3      SPT4      MSPT2     MRLB      OPT       gap1    gap2
2  10   217.3     220.5     226.0     222.2     228.6     213.5     199.0     209.5     0.021   0.050
2  20   763.3     765.1     774.6     766.8     769.4     741.7     728.2     735.2     0.009   0.010
2  30   1412.8    1412.8    1429.5    1447.6    1469.4    1392.0    1375.0    1383.3    0.006   0.006
2  50   3661.9    3676.7    3667.1    3739.8    3792.5    3615.4    3569.8    3598.3    0.005   0.008
2  100  14887.0   14892.8   14925.2   15513.5   15639.0   14855.7   14820.4   14847.5   0.001   0.002
3  10   178.2     179.3     180.2     175.8     184.0     173.2     128.8     159.4     0.080   0.192
3  20   526.8     539.7     534.7     546.8     548.3     512.3     500.1     502.8     0.019   0.005
3  30   1010.0    1031.3    1024.7    1051.5    1055.7    996.0     975.1     987.8     0.008   0.013
3  50   2594.0    2626.0    2632.5    2704.8    2827.5    2569.7    2499.5    2561.3    0.003   0.024
3  100  10133.7   10185.2   10148.0   10541.0   10589.0   10070.5   9964.7    –         –       –
4  10   180.1     180.2     190.2     194.1     202.4     179.9     159.3     177.0     0.012   0.111
4  20   500.0     511.1     512.0     515.8     522.9     489.8     459.0     481.1     0.018   0.046
4  30   1182.0    1219.8    1222.3    1240.1    1242.2    1165.0    1125.2    1154.1    0.009   0.025
4  50   2081.5    2083.5    2086.0    2160.3    2162.3    2036.1    1926.0    1995.2    0.020   0.035
4  100  10988.0   11096.1   11095.4   11124.0   11132.4   10987.1   20864.2   –         –       –

Table 2
Dynamic programming (DP) method performance (CPU seconds)

m  n    pmax = 20, α: 0.2–0.3   pmax = 20, α: 0.5–0.7   pmax = 50, α: 0.2–0.3
2  10   0.05                    0.29                    0.10
2  20   0.41                    2.48                    13.81
2  30   1.88                    12.02                   69.10
2  50   14.62                   55.42                   266.04
2  100  86.84                   203.31                  3611.10
2  150  281.02                  821.62                  –
2  200  1940.66                 3902.22*                –
3  10   6.46                    26.83                   316.02
3  20   97.71                   227.06                  1998.50
3  25   408.89                  732.03                  –
3  30   884.51                  2901.56                 –

Table 3
Comparison between the two branching schemes

               B&B1                      B&B2
m  n     Nodes          CPU        Nodes          CPU
2  10    1033.4         0.01       331.4          0.01
2  15    54281.2        0.44       1480.1         0.06
2  20    4005236.1      27.73      9367.2         0.15
2  30    1.026 E9       867.45     1006455.3      7.42
2  40    2.706 E10      2794.32    8605907.2      31.09
2  50    3.299 E11*     4874.91*   1.670 E9       463.62
2  60    –              –          1.4186 E10     2044.83
3  10    4273.2         0.02       1829.2         0.01
3  15    847533.3       3.14       137971.8       0.61
3  20    24654609.0     112.69     116743.4       0.71
3  25    2.433 E8       1934.01    9254369.1      75.43
3  30    –              –          2.347 E8       274.40
3  35    –              –          1.619 E9*      2014.23*
4  10    3981.1         0.06       3022.1         0.05
4  15    1.005 E8       39.52      108115.1       0.62
4  20    1.612 E9       1322.01    432108.3       3.21
4  25    –              –          1.010 E8       91.33
4  30    –              –          1.114 E9       620.25


Table 4
Comparison between the exact methods (CPU seconds)

m  n    MILP1     MILP2     MILP3     MILP4     DP        B&B1      B&B2
2  10   9.96      0.50      0.18      1.02      0.08      0.01      0.01
2  15   4648.44   39.69     0.70      49.00     0.19      0.44      0.06
2  20   –         695.52    1.02      820.01    0.73      27.73     0.15
2  25   –         2010.50   2.32      2547.90   1.76      101.22    4.01
2  30   –         –         2.48      –         3.30      867.45    7.42
2  40   –         –         4.89      –         8.88      2742.32   31.09
2  50   –         –         451.50    –         18.30     –         463.62
2  60   –         –         1922.3    –         46.01     –         2044.83
2  100  –         –         –         –         93.19     –         –
2  200  –         –         –         –         2174.84*  –         –
3  10   51.12     0.29      0.03      4.01      9.27      0.02      0.01
3  15   –         154.22    0.05      1181.40   30.55     3.15      0.61
3  20   –         4922.41   1.23      –         101.71    112.69    0.71
3  25   –         –         4.23      –         416.19    1934.01   75.43
3  30   –         –         36.54     –         1197.44   –         274.40
3  35   –         –         383.14    –         –         –         2014.23*
3  50   –         –         1016.87   –         –         –         –
4  10   271.71    0.34      0.03      8.25      –         0.06      0.05
4  15   –         1694.50   0.52      2239.22   –         39.52     0.62
4  20   –         –         1.46      –         –         1322.01   3.21
4  25   –         –         8.20      –         –         –         91.33
4  30   –         –         688.10    –         –         –         620.25
4  50   –         –         1418.95   –         –         –         –


The study is organized in four parts. The first one concerns the experiments done to evaluate the heuristics. The second part analyzes the dynamic programming based method. The third part concerns the branch-and-bound method. The final part is a comparison between the three exact methods.

6.1. Heuristic experimentations

Table 1 presents the average criterion values obtained by the heuristics and the lower bound. These bounds are compared to the optimal solution values.

We find that the SPTi heuristics are complementary. For two-machine instances, the experimental study shows that SPT0 gives exactly one of the two results of SPT1 and SPT2. This is easy to prove analytically: when $m = 2$, the schedules obtained by these two rules depend only on the assignment of the first job, which can be made either on the first or on the second machine, the remaining jobs being assigned according to the traditional SPT algorithm. With SPT1 and SPT2, the machine receiving the first job is known exactly; with SPT0, this machine remains unknown until no job can be inserted before the maintenance periods. The statistics show that the best result is given in 74% of the cases by SPT0, in 41% by SPT1, in 32% by SPT2, in 21% by SPT3 and in 15% by SPT4 (for many instances, some of these heuristics give the same result). In addition, in 82% of the cases the improving procedure applied to the schedule obtained with SPT0 gives the best result. The average improvement realized in the second step of MSPT2 is about 3% and reaches 9% for some instances. In the same table, the MSPT2 upper bounds and the maintenance relocation lower bounds (MRLB) are compared to the optimal solution values (OPT), which are computed by our exact methods. The relative deviation ratios (gaps) of the upper and lower bounds from the optimal solution are defined as $gap_1 = \frac{F(MSPT2) - OPT}{OPT}$ and $gap_2 = \frac{OPT - MRLB}{OPT}$. These ratios show the good performance of these bounds. Clearly, one of the reasons why they decrease when $n$ increases is the increase of the criterion value.

6.2. Dynamic programming method experimentations

Knowing the complexity of the dynamic programming method, we try in this paragraph to assess the impact of the parameters which influence the method in terms of computation times. The method depends particularly on the maintenance dates and on the length of the scheduling horizon, which implies that the computation time is influenced by the parameters $\alpha$ and $p_{max}$. For our generated instances (where $p_{max} = 20$), we distinguish two cases: in the first case, $\alpha$ ranges from 0.2 to 0.3; in the second case, $\alpha$ ranges from 0.5 to 0.7. In addition, we add further experiments with $p_{max} = 50$ for $\alpha$ ranging from 0.2 to 0.3. The number of simulations for each instance family is 10.

Table 2 presents the computation times (in CPU seconds) required to obtain the optimal solution in each case. Computation times increase when the average relative maintenance position $\alpha$ increases and when $p_{max}$ increases. Because of the space complexity of the method, it efficiently solves two- and three-machine instances, but it cannot solve instances with $m = 4$.

6.3. Branch-and-bound experimentation

Table 3 compares the two branching schemes in terms of explored nodes and computation time. B&B1 and B&B2 are, respectively, the two resulting branch-and-bound algorithms.

The convergence of these methods occurs with the emergence of the optimality proof of a solution. The convergence behavior can therefore be quite heterogeneous for some instances of the same family, because more than one optimal solution may exist, located in different branches of the search tree. Nevertheless, we note that these methods yield very quickly computed solutions for several instances. Average values are presented in Table 3.


The experiments show a significant reduction of the number of nodes and of the CPU times, on average, when using the second scheme, exceeding 90% in many cases. The first branching scheme remains interesting because, for some instances, it gives competitive and even better results.

6.4. Comparison between the proposed exact methods

In this paragraph, we compare the computation times (in CPU seconds) of the different exact methods. Table 4 presents the average computation times of the methods obtained on the tested data. In this table, we include the results obtained for the MILP methods.

For small instances, these methods have the same performance. When $m = 2$, the dynamic programming method becomes more efficient for large instances (from $n \ge 25$); it solves instances with more than 200 jobs. The branch-and-bound method becomes relatively expensive, in terms of CPU time, for instances with 40 or 50 jobs. The mixed integer linear programming methods have different performances: MILP1 solves instances with 15 jobs, while MILP2 and MILP4 solve instances with 25 jobs. MILP3 is very efficient for small and large instances; on average, it has a performance similar to that of the branch-and-bound method.

When $m = 3$ and 4, the branch-and-bound method is slightly faster than the dynamic programming method, but it is difficult to use it to solve instances with more than 30 or 35 jobs. The dynamic programming method solves instances with fewer than 30 jobs and cannot handle instances with $m = 4$. The MILP3 method remains very efficient and is capable of quickly solving instances with up to 50 jobs; around 50 jobs and beyond, however, the average CPU time becomes high.

7. Conclusion

In this paper, we study the parallel machine scheduling problem with availability constraints and the total completion time criterion. We develop three exact approaches to solve it: mixed integer linear programming based methods, a dynamic programming method and a branch-and-bound method. Some adaptations of the SPT rule are presented to develop heuristics capable of yielding near-optimal solutions, and a new adapted definition of the SPT algorithm is provided. An MSPT2 algorithm, a lower bound (MRLB) and dominance properties are defined. Two branching schemes are presented and compared; faster construction of complete solutions and particular eliminations of equivalent solutions were applied to the second branching scheme. Numerical experiments were done to compare and evaluate the proposed methods.

Acknowledgements

The authors wish to thank the referees whose suggestions led to an improvement of an earlier version of this paper.

References

Adiri, I., Bruno, J., Frostig, E., Rinnooy Kan, A.H.G., 1989. Single machine flow-time scheduling with a single breakdown. Acta Informatica 26, 679–696.
Brucker, P., Kravchenko, S., 1999. Preemption can make parallel machine scheduling problems hard. Technical Report, Osnabrücker Schriften zur Mathematik, Reihe P, p. 211.
Bruno, J.L., Coffman, E.G., Sethi, R., 1974. Scheduling independent tasks to reduce mean finishing time. Communications of the ACM 17, 382–387.
Horn, W.A., 1973. Minimizing average flow time with parallel machines. Operations Research 21, 846–847.
Kacem, I., Chu, C., 2006. Worst-case analysis of the WSPT and MWSPT rules for single machine scheduling with one planned setup period. European Journal of Operational Research 187, 1080–1090.
Kacem, I., Chu, C., 2007a. Efficient branch-and-bound algorithm for minimizing the weighted sum of completion times on a single machine with one availability constraint. International Journal of Production Economics. doi:10.1016/j.ijpe.2007.01.013.
Kacem, I., Chu, C., 2007b. Minimizing the weighted flowtime on a single machine with resumable availability constraint: Worst-case of the WSPT heuristic. International Journal of Computer Integrated Manufacturing. doi:10.1080/09511920701575088.
Kacem, I., Sadfi, C., Elkamel, A., 2005. Branch and bound and dynamic programming to minimize the total completion times on a single machine with availability constraints. In: IEEE/SMC'05, Hawaii, USA, 10–12 October 2005, 6 p.
Kacem, I., Chu, C., Souissi, A., 2006. Single-machine scheduling with an availability constraint to minimize the weighted sum of the completion times. Computers and Operations Research 180, 396–405.
Kaspi, M., Montreuil, B., 1988. On the scheduling of identical parallel processes with arbitrary initial processor available times. Research Report, School of Industrial Engineering, Purdue University, pp. 88–92.
Kubiak, W., Blazewicz, J., Formanowicz, P., Breit, J., Schmidt, G., 2002. Two-machine flow shops with limited machine availability. European Journal of Operational Research 136, 528–540.
Lee, C.Y., 1996. Machine scheduling with an availability constraint. Journal of Global Optimization 9, 393–416.
Lee, C.Y., Chen, Z.-L., 2000. Scheduling jobs and maintenance activities on parallel machines. Naval Research Logistics 47, 145–165.
Lee, C.Y., Liman, S.D., 1992. Single machine flow-time scheduling with scheduled maintenance. Acta Informatica 29, 375–382.
Lee, C.Y., Liman, S.D., 1993. Capacitated two-parallel machines scheduling to minimize sum of job completion times. Discrete Applied Mathematics 41, 211–222.
Lee, C.Y., Lei, L., Pinedo, M., 1997. Current trends in deterministic scheduling. Annals of Operations Research 70, 1–41.
Liman, S.D., 1991. Scheduling with capacities and due-dates. Ph.D. Dissertation, Industrial and Systems Engineering Department, University of Florida.
Mosheiov, G., 1994. Minimizing the sum of job completion times on capacitated parallel machines. Mathematical and Computer Modelling 20, 91–99.
Sadfi, C., Penz, B., Rapine, C., 2002. A dynamic programming algorithm for the single machine total completion time scheduling problem with availability constraints. In: Eighth Workshop on Production Management and Scheduling, Valencia, Spain.
Sadfi, C., Aroua, M.D., Penz, B., 2004. Single machine total completion time scheduling problem with availability constraints. In: Ninth International Workshop on Project Management and Scheduling PMS'2004, Nancy, France.
Sadfi, C., Penz, B., Rapine, C., Blazewicz, J., Formanowicz, P., 2005. An improved approximation algorithm for the single machine total completion time scheduling problem with availability constraints. European Journal of Operational Research 161, 3–10.
Sanlaville, E., Schmidt, G., 1998. Machine scheduling with availability constraints. Acta Informatica 35, 795–811.
Schmidt, G., 1984. Scheduling on semi-identical processors. Zeitschrift für Operations Research 28, 153–162.
Schmidt, G., 2000. Scheduling with limited machine availability. European Journal of Operational Research 121, 1–15.
Smith, W.E., 1956. Various optimizers for single stage production. Naval Research Logistics Quarterly 3, 59–66.
Wang, G., Sun, H., Chu, C., 2005. Preemptive scheduling with availability constraints to minimize total weighted completion times. Annals of Operations Research 133, 183–192.