
Batch scheduling of identical jobs with controllable processing times

Baruch Mor a, Gur Mosheiov b,*

a The Department of Economics and Business Administration, Ariel University, Israel
b School of Business Administration, The Hebrew University, Jerusalem, Israel

Article info

Available online 20 August 2013

Keywords: Batch scheduling; Controllable processing times; Flowtime; Single machine; Parallel identical machines

Abstract

In scheduling models with controllable processing times, the job processing times can be controlled (i.e. compressed) by allocating additional resources. In batch scheduling a large number of jobs may be grouped and processed as separate batches, where a batch processing time is identical to the total processing times of the jobs contained in the batch, and a setup time is incurred when starting a new batch.

A model combining these two very popular and practical phenomena is studied. We focus on identical jobs and a linear compression cost function. Two versions of the problem are considered: in the first we minimize the sum of the total flowtime and the compression cost, and in the second we minimize the total flowtime subject to an upper bound on the maximum compression. We study both problems on a single machine and on parallel identical machines. In all cases we introduce closed form solutions for the relaxed version (allowing non-integer batch sizes). Then, a simple rounding procedure is introduced, tested numerically, and shown to generate extremely close-to-optimal integer solutions. For a given number of machines, the total computational effort required by our proposed solution procedure is O(√n), where n is the number of jobs.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Vickson [1,2] introduced a new class of scheduling models with controllable processing times. In this model the job processing times are not given constants as in classical scheduling, but can be controlled (i.e. compressed) by allocating additional resources. Various versions of this very practical scheduling setting have been studied by many researchers, as reflected in the recent survey of Shabtay and Steiner [3]. Some of the early papers are: Van Wassenhove and Baker [4], Nowicki and Zdrzalka [5], Janiak and Kovalyov [6], Wan et al. [7], Hoogeveen and Woeginger [8], Janiak et al. [9], Shakhlevich and Strusevich [10], Wang [11], Akturk et al. [12] and Wang and Xia [13]. More recently, Tseng et al. [14] studied a single machine setting with controllable processing times and an objective of minimum total tardiness; Turkcan et al. [15] studied a setting of parallel machines and objective functions of minimum manufacturing cost and total weighted earliness and tardiness; Shakhlevich et al. [16] focused on the trade-off between the maximum cost (which is a function of the completion times) and the total compression cost; Shabtay et al. [17] addressed due date assignment problems in a group technology environment; Gurel et al. [18] considered failures of the machine and repair time, and focused on an anticipative scheduling approach; Wan et al. [19] studied the problem of scheduling jobs of two agents on a single machine with controllable processing times; Choi et al. [20] focused on minimizing weighted completion time subject to an upper bound on the maximum compression cost; Leyvand et al. [21] considered just-in-time scheduling on parallel machines; Yin and Wang [22] studied a model combining controllable processing times and learning; Wang and Wang [23] addressed the single machine problem of minimizing the total resource consumption subject to an upper bound on the total weighted flowtime; Wei et al. [24] focused on a model in which the job processing times are a function of both resource consumption and the job starting times; Uruk et al. [25] studied a two-machine flowshop problem with flexible operations and controllable processing times; and Wang and Wang [26] focused on convex resource dependent processing times and job deterioration.

In batch scheduling a large number of jobs may be grouped and processed as separate batches. Such batching is generally based on the existence of some similarity between jobs belonging to the same class. A batch processing time is identical to the total processing times of the jobs contained in the batch. A setup time is incurred when starting a new batch. In their classical paper, Santos and Magazine [27] solved a single machine batch scheduling problem to minimize total flowtime. They assumed a constant (i.e. batch independent) setup time, and batch availability, i.e., jobs


0305-0548/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.cor.2013.08.007

* Corresponding author.
E-mail address: msomer@mscc.huji.ac.il (G. Mosheiov).

Computers & Operations Research 41 (2014) 115–124

are completed only when their entire batch is completed. Dobson et al. [28], Naddef and Santos [29] and Coffman et al. [30] extended the basic results obtained by Santos and Magazine for the relaxed version of the problem (when integer batch sizes are not required). Later, Shallcross [31] and Mosheiov et al. [32] solved the integer version. We refer the reader also to the extensive survey of Allahverdi et al. [33] on batch scheduling under different machine settings and objective functions.
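As a concrete illustration of batch availability and a constant setup time, the total flowtime of a given batch schedule can be computed directly from the batch sizes. The following sketch (Python; the function name and the numbers are illustrative, not taken from the paper) assumes unit-time jobs, as in the model studied below:

```python
def total_flowtime(batch_sizes, s):
    """Total flowtime of unit-time jobs processed in batches.

    Batch availability: every job in a batch completes only when the
    whole batch does.  Each batch is preceded by a setup of length s.
    """
    flowtime = 0
    completion = 0
    for n_j in batch_sizes:
        completion += s + n_j          # setup, then n_j unit-time jobs
        flowtime += n_j * completion   # all n_j jobs finish together
    return flowtime

# Example: 10 jobs split into batches [4, 3, 2, 1] with setup s = 1.
# Batch completion times are 5, 9, 12, 14, so the total flowtime is
# 4*5 + 3*9 + 2*12 + 1*14 = 85.
print(total_flowtime([4, 3, 2, 1], 1))  # -> 85
```

Scheduling the larger batches first keeps their (many) jobs at small completion times, which is consistent with the decreasing batch sizes obtained in the optimal solutions below.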

In this paper we combine (to our knowledge for the first time) the two phenomena of batch scheduling and controllable processing times. Specifically, with respect to batching, we consider the setting of Santos and Magazine [27], i.e. a single machine, batch-independent setup times, batch availability and the objective of minimum flowtime. With respect to the option of controlling job processing times, we assume a linear compression cost. Two problems are solved: in the first we minimize the total flowtime plus the compression cost, and in the second we minimize flowtime, subject to an upper bound on the maximum batch compression. We first focus on a single machine setting, and show that in both cases the solution for the relaxed version consists of a decreasing arithmetic sequence of batch sizes, for which closed form solutions are obtained. The total running time of the procedure is O(√n), where n is the number of jobs. An integer solution is obtained by a simple rounding procedure, requiring additional O(√n) time. Our numerical tests indicate that the integer solutions are extremely close to those of the relaxed versions (which are lower bounds on the optimal solutions). We then consider the setting of parallel identical machines. [We refer the reader to Mor and Mosheiov [34], who studied batch scheduling on parallel identical machines without the option of compression.] In this case, we show that the solution of both problems consists of identical decreasing arithmetic sequences of the batch sizes on all the machines. The total computational effort of this more complicated solution procedure becomes O(m√n), where m is the number of machines. A rounding procedure and its evaluation are provided for this setting as well.

In Section 2 we present the notation and the formulation. In Sections 3 and 4 we solve the relaxed versions of the two different problems on a single machine. In Section 5 we propose the rounding procedure to obtain an integer solution for both problems. Sections 6 and 7 are devoted to numerical examples and numerical tests, respectively. Section 8 contains the extension to parallel identical machines.

2. Notation and formulation

We consider n identical jobs which need to be processed on a single machine. Jobs may be processed in batches, sharing the same setup operation: when starting a new batch, a setup time, denoted by s (s > 0), is performed. For a given job allocation to batches, we denote by K the total number of batches. By allocating certain resources, batch sizes can be compressed from their maximum (original) size, denoted by n̄_j, down to their final batch size, denoted by n_j > 0, j = 1,…,K. (For simplicity we assume in the following that all the jobs have unit processing times prior to compression. It follows that Σ_{j=1}^{K} n̄_j = n.) Let N_c denote the total amount of compression of all K batches. Clearly, N_c = n − Σ_{j=1}^{K} n_j. The unit compression cost is denoted by c > 0; thus the total compression cost is given by cN_c.

The first objective function considered here is the sum of the total flowtime and the batch compression cost. As mentioned above, we assume batch availability, i.e., the completion time of a job is defined as the completion time of the batch to which it is assigned. Thus, for a given allocation of jobs to batches, let C_j denote the completion time of batch j, j = 1,…,K. The contribution of batch j to the total flowtime is n_j C_j, implying that the total flowtime is given by ΣC = Σ_{j=1}^{K} n_j C_j. Thus, the objective function of the problem is ΣC + cN_c, and, using the conventional notation, the first problem studied here (P1) is

P1: 1 | batch, ctrl | ΣC + cN_c.

While P1 is a legitimate objective function (and clearly any extension to a linear combination of the above two cost components), theoretically it may lead to an extreme, non-realistic solution, where the batch sizes are compressed to zero. One way to handle this feasibility issue is by limiting the maximum allowable compression. Thus, for the second problem (P2), we define an upper bound U on the maximum compression. In order to guarantee a realistic solution, we restrict U to be strictly smaller than n. The objective function here is minimum flowtime, subject to this upper bound on the maximum compression:

P2: 1 | batch, ctrl, N_c ≤ U | ΣC.

We extend both models to a setting of m parallel identical machines. For a given job allocation to the machines, let K_i denote the number of batches processed on machine i, i = 1,…,m. Let n_{i,j} denote the size of batch j processed on machine i, i = 1,…,m, j = 1,…,K_i. Thus, the second part of the paper consists of the following two problems (P3 and P4, respectively):

P3: Pm | batch, ctrl | ΣC + cN_c.

P4: Pm | batch, ctrl, N_c ≤ U | ΣC.

3. Problem P1: 1 | batch, ctrl | ΣC + cN_c

In this section we provide a formal definition of the problem, present the basic properties of an optimal solution, and introduce closed form expressions for the optimal batch sizes. The combined objective function (flowtime plus compression cost) is given by

f_1 = (s + n_1)n_1 + (2s + n_1 + n_2)n_2 + ⋯ + ((K−1)s + Σ_{j=1}^{K−1} n_j)n_{K−1} + (Ks + Σ_{j=1}^{K} n_j)n_K + cN_c,

or, equivalently,

f_1 = Σ_{j=1}^{K} (Σ_{i=1}^{j} n_i)n_j + s Σ_{j=1}^{K} j n_j + cN_c.

It is easy to verify the following equality:

Σ_{j=1}^{K} (Σ_{i=1}^{j} n_i)n_j = (1/2) Σ_{j=1}^{K} n_j² + (1/2) (Σ_{j=1}^{K} n_j)².
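The equality can also be checked numerically; a quick sketch (Python, illustrative only — the helper names are mine) compares the double sum against the closed form on random nonnegative vectors:

```python
import random

def cross_sum(n):
    # Σ_j (Σ_{i<=j} n_i) * n_j, computed with a running prefix sum
    total, prefix = 0.0, 0.0
    for x in n:
        prefix += x
        total += prefix * x
    return total

def closed_form(n):
    # (1/2) Σ_j n_j^2 + (1/2) (Σ_j n_j)^2
    return 0.5 * sum(x * x for x in n) + 0.5 * sum(n) ** 2

random.seed(1)
for _ in range(1000):
    vec = [random.uniform(0.0, 5.0) for _ in range(random.randint(1, 10))]
    assert abs(cross_sum(vec) - closed_form(vec)) < 1e-6
```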

Thus, we get the following objective function:

f_1 = (1/2) Σ_{j=1}^{K} n_j² + (1/2) (Σ_{j=1}^{K} n_j)² + s Σ_{j=1}^{K} j n_j + cN_c.   (1)

The formal presentation of the problem is

Min f_1

s.t.  Σ_{j=1}^{K} n_j + N_c = n,

N_c ≥ 0;  n_j ≥ 0,  j = 1,…,K.

Clearly f_1 is a quadratic convex function of the batch sizes, implying that the global minimum can be found by applying the Karush–Kuhn–Tucker (KKT) conditions. The Lagrangian, with λ being the single Lagrange multiplier, is

L = f_1 + λ(Σ_{j=1}^{K} n_j + N_c − n).


The optimal solution must satisfy the KKT conditions below:

∂L/∂n_l = n_l + Σ_{j=1}^{K} n_j + ls + λ = 0,  l = 1,…,K.   (2)

∂L/∂N_c = c + λ = 0.   (3)

λ(Σ_{j=1}^{K} n_j + N_c − n) = 0.   (4)

n_j ≥ 0,  j = 1,…,K.   (5)

N_c ≥ 0.   (6)

From (3) we must have

λ = −c.   (7)

From (2) we note that the batch sizes follow a decreasing arithmetic sequence. The terms of this sequence are

n_j = n_1 − (j−1)s,  j = 2,…,K.   (8)

The sum of this arithmetic sequence is given by

Σ_{j=1}^{K} n_j = (K/2)(2n_1 − (K−1)s) = Kn_1 − (1/2)sK(K−1).   (9)

From (2) and (7),

Σ_{j=1}^{K} n_j = c − n_1 − s.   (10)

Equating (9) and (10), we obtain the following expression for n_1:

n_1 = (2c + s(K(K−1) − 2)) / (2(K+1)).   (11)

We observe that the last term of the sequence must be strictly positive, therefore

n_K = n_1 − (K−1)s > 0.   (12)

Substituting n_1 (from (11)) into (12), we get

sK² + sK − 2c < 0.   (13)

The largest positive value of K that satisfies (13) is

K = ⌊√(1/4 + 2c/s) − 1/2⌋.

Comment 1: As mentioned above, in an extreme case this expression may lead to a non-realistic solution. Note that for very small c values (i.e. a small compression cost), or for a sufficiently large s value (a very large setup time), K becomes zero. This theoretical result reflects the case in which all the jobs are totally compressed, and the cost is solely due to compression (i.e., all batch sizes become zero, and there is no flowtime cost). Obviously, this extreme case is not realistic, and when solving real-life problems we may avoid it either by adding a positive lower bound on all batch sizes, or by adding a constraint on the minimal number of batches, or by adding a constraint on the maximum compression (as assumed in the model solved in Section 4).

We note that the following KKT conditions are satisfied: (i) condition (2) is met due to (7) and (8); (ii) condition (3) is guaranteed by (7); (iii) condition (5) is met due to (12) (the last, i.e. smallest, term n_K is strictly positive). Finally, since λ = −c < 0 (as c > 0), we must have Σ_{j=1}^{K} n_j + N_c − n = 0 (in order to satisfy (4)). Thus, (6) is valid only when N_c = n − Σ_{j=1}^{K} n_j ≥ 0.

In summary, an optimal solution is given by

K* = ⌊√(1/4 + 2c/s) − 1/2⌋,   (14)

n_1* = (2c + s(K*(K*−1) − 2)) / (2(K*+1)),   (15)

n_j* = n_1* − (j−1)s,  j = 2,…,K*,   (16)

(ΣC + cN_c)* = (1/2) Σ_{j=1}^{K*} (n_j*)² + (1/2) (Σ_{j=1}^{K*} n_j*)² + s Σ_{j=1}^{K*} j n_j* + cN_c,   (17)

and the following optimality condition must be satisfied:

N_c = n − Σ_{j=1}^{K*} n_j* ≥ 0.   (18)

Note that if the latter condition is not satisfied, i.e. N_c < 0, it follows that compression is too expensive, and is not performed at all. In this case the problem reduces to the classical version, and the optimal solution is obtained by Santos and Magazine's procedure [27].
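The closed form translates directly into code. The following sketch (Python; the function name and the example data are mine, and the formulas follow one reading of Eqs. (14)–(18)) computes the relaxed P1 solution, including the degenerate cases of Comment 1 and of N_c < 0:

```python
import math

def solve_p1_relaxed(n, s, c):
    """Relaxed (non-integer) solution of P1 on a single machine.

    Returns (K, batch_sizes, Nc).  Nc is None when compression is too
    expensive (Eq. (18) violated); in that case the classical
    Santos-Magazine procedure applies instead.
    """
    K = math.floor(math.sqrt(0.25 + 2.0 * c / s) - 0.5)      # Eq. (14)
    if K <= 0:
        # Comment 1: all jobs fully compressed, no batches remain.
        return 0, [], float(n)
    n1 = (2.0 * c + s * (K * (K - 1) - 2)) / (2 * (K + 1))   # Eq. (15)
    sizes = [n1 - (j - 1) * s for j in range(1, K + 1)]      # Eq. (16)
    Nc = n - sum(sizes)
    if Nc < 0:                                               # Eq. (18) fails
        return K, sizes, None
    return K, sizes, Nc

# Example (made-up data): n = 20 jobs, setup s = 1, unit cost c = 12.
K, sizes, Nc = solve_p1_relaxed(20, 1.0, 12.0)
# K = 4; sizes ≈ [3.4, 2.4, 1.4, 0.4]; Nc ≈ 12.4
```

Note that in the example the batch sizes form the decreasing arithmetic sequence of Eq. (16) with common difference s, and Σ n_j = c − n_1 − s = 7.6, in agreement with Eq. (10).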

Concluding this section, we state that the above explicit expressions for the optimal number of batches and the batch sizes reflect the solution of the relaxed (non-integer) version of the problem. In Section 5 we return to the original integer version and introduce a rounding procedure to obtain integer batch sizes.

4. Problem P2: 1 | batch, ctrl, N_c ≤ U | ΣC

The problem solved in this section is minimizing flowtime subject to an upper bound on the maximum amount of compression. Thus, the objective function is

f_2 = (s + n_1)n_1 + (2s + n_1 + n_2)n_2 + ⋯ + ((K−1)s + Σ_{j=1}^{K−1} n_j)n_{K−1} + (Ks + Σ_{j=1}^{K} n_j)n_K,

or, more concisely,

f_2 = (1/2) Σ_{j=1}^{K} n_j² + (1/2) (Σ_{j=1}^{K} n_j)² + s Σ_{j=1}^{K} j n_j.   (19)

The formal presentation of the problem is

Min f_2

s.t.  n − Σ_{j=1}^{K} n_j ≤ U,

n_j ≥ 0,  j = 1,…,K.

The Lagrangian for this quadratic program is L = f_2 + λ(n − Σ_{j=1}^{K} n_j − U), where λ is the Lagrange multiplier. Applying the KKT conditions, we obtain the following optimality conditions:

∂L/∂n_l = n_l + Σ_{j=1}^{K} n_j + ls − λ = 0,  l = 1,…,K.   (20)

λ(n − Σ_{j=1}^{K} n_j − U) = 0.   (21)

n − Σ_{j=1}^{K} n_j − U ≤ 0.   (22)

n_j ≥ 0,  j = 1,…,K.   (23)

λ ≥ 0.   (24)

From (20) we conclude that

n_j = n_1 − (j−1)s,  j = 2,…,K.   (25)


The sum of this decreasing arithmetic sequence is

Σ_{j=1}^{K} n_j = Kn_1 − (1/2)sK(K−1).   (26)

From (20) and (23), and since s > 0 (by definition),

λ = n_1 + Σ_{j=1}^{K} n_j + s > 0.   (27)

It follows from (21) that

Σ_{j=1}^{K} n_j = n − U.   (28)

Equating (26) and (28),

Σ_{j=1}^{K} n_j = Kn_1 − (1/2)sK(K−1) = n − U,

thus

n_1 = (n − U)/K + (1/2)s(K−1).   (29)

Again, we are interested only in sequences with strictly positive terms; thus the last term must be positive:

n_K = n_1 − (K−1)s > 0.   (30)

After substituting n_1 (from (29)) and rearranging terms,

sK² − sK − 2(n − U) < 0.   (31)

The largest value of K that satisfies (31) is

K = ⌊√(1/4 + 2(n − U)/s) + 1/2⌋.   (32)

(Recall that U was restricted to be strictly smaller than...
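Under my reading of Eqs. (25), (29) and (32), the corresponding sketch for the relaxed P2 solution is (Python; the function name and the example data are mine):

```python
import math

def solve_p2_relaxed(n, s, U):
    """Relaxed (non-integer) solution of P2: minimize flowtime subject
    to an upper bound U < n on the total compression.  By Eq. (28) the
    bound is tight at the optimum, so the batch sizes sum to n - U.
    """
    assert 0 <= U < n
    K = math.floor(math.sqrt(0.25 + 2.0 * (n - U) / s) + 0.5)  # Eq. (32)
    n1 = (n - U) / K + 0.5 * s * (K - 1)                       # Eq. (29)
    return K, [n1 - (j - 1) * s for j in range(1, K + 1)]      # Eq. (25)

# Example (made-up data): n = 20, s = 1, U = 9, so batches sum to 11.
K, sizes = solve_p2_relaxed(20, 1.0, 9)
# K = 5; sizes ≈ [4.2, 3.2, 2.2, 1.2, 0.2]
```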