Single machine scheduling with a variable common due date and resource-dependent processing times


Computers & Operations Research 30 (2003) 1173–1185

C.T. Daniel Ng (a), T.C. Edwin Cheng (a), Mikhail Y. Kovalyov (b), S.S. Lam (c)

(a) Department of Management, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
(b) Institute of Engineering Cybernetics, National Academy of Sciences of Belarus, Minsk 220012, Belarus
(c) School of Business & Administration, The Open University of Hong Kong, Homantin, Kowloon, Hong Kong

Received 1 March 2001; received in revised form 1 November 2001

Abstract

The problem of scheduling n jobs with a variable common due date on a single machine is studied. It is assumed that the job processing times are non-increasing linear functions of an equal amount of a resource allocated to the jobs. The due date and resource values can be continuous or discrete. The objective is to minimize a linear combination of scheduling, due date assignment and resource consumption costs. The resource consumption cost function may be non-monotone. Algorithms with $O(n^2 \log n)$ running times are presented for scheduling costs involving earliness/tardiness and the number of tardy jobs. Computational experiments show that the algorithms can solve problems with n = 5000 in less than a minute on a standard PC.

Scope and purpose

We study a problem that combines scheduling, common due date assignment and resource allocation decisions. Due date scheduling is an important area of scheduling research because it focuses on customer satisfaction. A variable common due date applies to situations where several items constitute a single customer's order and the due date for the order can be negotiated. Resource-dependent processing times appear whenever resources can be employed to adjust processing requirements. Polynomial time algorithms are presented for minimizing a linear combination of scheduling, due date assignment and resource consumption costs. Computational experiments demonstrate their efficiency in solving large-scale problem instances.

Keywords: Single machine scheduling; Common due date assignment; Controllable processing times

1. Introduction

The following single machine scheduling problem with a variable common due date and resource-dependent processing times is studied. There are n independent, non-preemptive and simultaneously available jobs to be scheduled for processing on a single machine. Each job j has a processing time $p_j$, $j = 1, \ldots, n$, and is to be assigned a common due date $d \ge 0$. The due date $d$ is a continuous or discrete variable, and the job processing times are linearly non-increasing functions of an equal amount of a continuously divisible or a discrete resource $x$ used for performing the jobs: $p_j = u_j - v_j x$, where $u_j$ is the normal (maximum) value of the processing time and $v_j$ is the compression rate, i.e., the reduction of the processing time per unit of the resource used, $j = 1, \ldots, n$. It is assumed that $x \in [0, x_{\max}]$ and $u_j - v_j x_{\max} \ge 0$, $j = 1, \ldots, n$.

A solution to the problem specifies the job schedule S and the values of d and x. Given a solution, the job completion times $C_j$, $j = 1, \ldots, n$, can be calculated. The objective is to find a solution such that the cost function $F(S, d, x)$ is minimized. We consider
$$F(S, d, x) \in \left\{ \sum_{j=1}^{n} (\alpha E_j + \beta T_j + \gamma d) + g(x), \; \sum_{j=1}^{n} (\alpha U_j + \gamma d) + g(x) \right\},$$
where $E_j = \max\{0, d - C_j\}$ is the earliness of job j, $T_j = \max\{0, C_j - d\}$ is its tardiness, and $\sum_{j=1}^{n} U_j$ is the number of tardy jobs, with $U_j = 0$ if job j is early ($C_j \le d$) and $U_j = 1$ if it is tardy ($C_j > d$), $j = 1, \ldots, n$. The resource consumption cost function $g(x)$ may be non-monotone for $x \in [0, x_{\max}]$. More precisely, we assume that it satisfies Properties 1 and 2 presented in Section 3.
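
As an illustration of the model, the following minimal Python sketch evaluates the earliness/tardiness variant of $F(S, d, x)$ for a given job sequence, due date and resource value. The three-job data, the quadratic cost g and the helper name evaluate_cost are hypothetical; the jobs are assumed to be processed consecutively from time zero, an assumption justified in the next paragraph.

```python
# Illustrative sketch: evaluate F(S, d, x) = sum_j (alpha*E_j + beta*T_j + gamma*d) + g(x)
# for a given job sequence S, common due date d and resource value x.

def evaluate_cost(sequence, d, x, u, v, alpha, beta, gamma, g):
    """Cost of processing the jobs in `sequence` with p_j = u[j] - v[j]*x,
    starting at time zero with no machine idle time."""
    cost = 0.0
    t = 0.0                       # current time on the machine
    for j in sequence:
        t += u[j] - v[j] * x      # completion time C_j of job j
        earliness = max(0.0, d - t)
        tardiness = max(0.0, t - d)
        cost += alpha * earliness + beta * tardiness + gamma * d
    return cost + g(x)

# Hypothetical instance: three jobs and a convex resource consumption cost.
u = [10.0, 7.0, 5.0]              # normal processing times u_j
v = [2.0, 1.0, 0.5]               # compression rates v_j
g = lambda x: 4.0 * x ** 2        # made-up resource consumption cost g(x)
print(evaluate_cost((2, 1, 0), d=8.0, x=1.0, u=u, v=v,
                    alpha=1.0, beta=2.0, gamma=0.5, g=g))
```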

If the job processing times are fixed, then there exists an optimal schedule in which the machine has no idle time from time zero until it finishes processing the last job; see Panwalkar et al. [1] and Cheng [2]. Clearly, this statement also holds for variable processing times. Therefore, we consider such schedules only, and so any schedule can be represented by its job sequence.

There exist many practical situations in which problems combining scheduling with common due date assignment arise. Examples can be found in Wheelright [3], Smith and Seidmann [4], Ragatz and Mabert [5], Cheng and Gupta [6], Baker and Scudder [7] and Cheng [2]. They include just-in-time production, assembly scheduling, batch delivery, project management and shoe making. Scheduling problems with controllable processing times are observed in steel production, part manufacturing and project management; see, for example, Williams [8], Vickson [9,10], Van Wassenhove and Baker [11], Nowicki and Zdrzałka [12], Janiak [13], Blazewicz et al. [14], Janiak and Portmann [15] and Chen et al. [16].

It is natural to study problems combining scheduling, due date assignment and controllable processing times. To the best of our knowledge, only Cheng et al. [17] and Biskup and Jahnke [18] study problems of this type. Cheng, Oguz and Qi consider the single machine problem in which the model for the job processing times is $p_j = u_j - x_j$, $0 \le x_j \le \bar{x}_j \le u_j$, $1 \le j \le n$, the resource consumption cost is $g(x_1, \ldots, x_n) = \sum_{j=1}^{n} b_j x_j$, and the job due dates to be assigned are $d_j = d$ (the common due date model) or $d_j = p_j + k$ (the slack due date model), $j = 1, \ldots, n$. They reduce their problem to an assignment problem.

Biskup and Jahnke give the following examples where the resource allocation is the same for all jobs. In steel production, a furnace can be heated to a specific temperature every day before the processing of an order for ingots (jobs) starts. It is not beneficial to change the temperature for every single job. A higher temperature reduces the processing times of the jobs but incurs a cost for using the furnace. A similar situation is observed for a machine on which some tools have to be periodically changed. When a new tool is installed, a decision about the production power of the tool used in the coming time period has to be made. For example, a drilling machine can run with a diamond drill, or a high- or low-quality steel drill. If the diamond drill is set up, then the jobs can be processed faster than with a steel drill, at a higher cost.
Another example is an assembly line, the speed of which depends on the number of workers and tools available. It is generally not possible or advantageous to change the speed during the day.

This paper can be considered as an extension of the results of Biskup and Jahnke. The main difference is that Biskup and Jahnke assume that the dependence of the job processing times on the resource value is such that $p_j = u_j(1 - x)$, $j = 1, \ldots, n$. This means that the processing time compression rate is equal to the normal processing time for each job. It is clear that our model is more general for describing practical situations. For example, consider the situation with a drilling machine. The normal processing time of a job depends on the complexity of the shape of the part to be drilled, its dimensions, weight, the material it is made from, and the number and depth of the holes to be drilled. The processing time compression rate depends only on the latter three factors. The second difference is that we allow the values of the due date and the resource to be discrete. In the above example, the resource is discrete. The third difference is that Biskup and Jahnke assume the resource consumption cost g(x) to be a monotonically increasing function for $x \in [0, x_{\max}]$. We allow this function to have several local minima within $[0, x_{\max}]$. Such a situation arises in practice, for example, when the resource is obtained from a supplier in batches and a non-full batch is more expensive than a full one because it is non-standard.

We call the job sequence $(j_1, \ldots, j_n)$ an SPT sequence if the jobs are placed in shortest processing time order, i.e., $p_{j_1} \le \cdots \le p_{j_n}$. Since the job processing times depend on x, there may exist many SPT sequences. However, in the next section, we show that there are at most $O(n^2)$ distinct feasible SPT sequences. For our purposes, we do not distinguish SPT sequences that differ only in the positions of jobs with equal processing times. In Sections 3 and 4, we use this result to derive $O(n^2 \log n)$ time algorithms for the problems with earliness/tardiness and number of tardy jobs scheduling costs, respectively. For the number of tardy jobs, we also identify an error made by Biskup and Jahnke [18]. Computational results are given in Section 5. They confirm the efficiency of the presented algorithms on randomly generated instances with up to 5000 jobs. The paper concludes with some remarks and suggestions for future research.

2. Constructing all the SPT sequences

Let us represent the job processing times as the lines $y = p_j(x) = u_j - v_j x$, $j = 1, \ldots, n$, in the xy-plane. Further, construct all the intersection points of these lines for $0 \le x \le x_{\max}$. An example is given in Fig. 1.

For each intersection point $(x, y)$, denote the set of lines that intersect at $(x, y)$ by $I(x, y)$. There may be several intersection points with the same x-coordinate. Denote the set of y-coordinates of such points by $Y(x)$.

There are at most $n(n-1)/2$ intersection points. These points and their associated sets $I(x, y)$ and $Y(x)$ can be found in $O(n^2)$ time. Sort the distinct x-coordinates of the intersection points, together with 0 and $x_{\max}$, in increasing order such that $0 = x_0 < x_1 < \cdots < x_k = x_{\max}$, $k \le n(n-1)/2 + 1$. This sorting requires $O(k \log k) \le O(n^2 \log n)$ time.

Fig. 1. Lines $y = u_j - v_j x$, $j = 1, \ldots, n$, and their intersections.
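
As an illustrative sketch of this construction (the function name spt_breakpoints and the three-job data are hypothetical, and the rounding used to group coincident intersection points is a simplification), the intersection points and the breakpoints $0 = x_0 < x_1 < \cdots < x_k = x_{\max}$ can be computed along the following lines.

```python
# Illustrative sketch: intersection points of the lines y = u_j - v_j*x in
# [0, x_max] and the sorted breakpoints x_0 < x_1 < ... < x_k.
from collections import defaultdict

def spt_breakpoints(u, v, x_max, eps=1e-9):
    n = len(u)
    lines_at = defaultdict(set)    # (x, y) -> jobs whose lines pass through it, i.e. I(x, y)
    for a in range(n):
        for b in range(a + 1, n):
            if abs(v[a] - v[b]) < eps:           # parallel lines never change order
                continue
            x = (u[a] - u[b]) / (v[a] - v[b])    # where lines a and b intersect
            if 0.0 <= x <= x_max:
                y = u[a] - v[a] * x
                lines_at[(round(x, 9), round(y, 9))].update((a, b))
    xs = sorted({0.0, x_max} | {x for (x, _) in lines_at})
    return xs, lines_at            # xs = [x_0, x_1, ..., x_k]

# Hypothetical three-job instance (jobs are 0-indexed here).
u = [10.0, 7.0, 4.0]
v = [3.0, 1.0, 0.5]
xs, lines_at = spt_breakpoints(u, v, x_max=3.0)
print(xs)              # breakpoints 0 = x_0 < ... < x_k = x_max
print(dict(lines_at))  # which jobs' lines meet at each intersection point
```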

It is clear that for any $x \in [x_{i-1}, x_i]$, $i \in \{1, \ldots, k\}$, the job processing times can be numbered such that $p_{j_1^i} \le \cdots \le p_{j_n^i}$. Therefore, for any $x \in [x_{i-1}, x_i]$, there exists a unique (if we do not distinguish jobs with equal processing times) SPT sequence. This sequence can be found in $O(n \log n)$ time by calculating the job processing times for $x = (x_{i-1} + x_i)/2$ and sorting them in non-decreasing order. Let us denote this sequence by $Q^{(i)} = (j_1^i, \ldots, j_n^i)$.

Observe that, in order to obtain sequence $Q^{(i+1)}$ from sequence $Q^{(i)}$, it is sufficient to reverse the order of the jobs that correspond to the distinct lines from $I(x_i, y)$ for each $y \in Y(x_i)$. Let $q_i$ be the number of such distinct lines. Then sequence $Q^{(i+1)}$ can be constructed in $O(q_i)$ time if $Q^{(i)}$ is given, $i = 1, \ldots, k-1$. Sequence $Q^{(1)}$ can be constructed in $O(n \log n)$ time. For the example given in Fig. 1, we have $Q^{(1)} = (3, 2, 1)$, $Q^{(2)} = (3, 1, 2)$ and $Q^{(3)} = (1, 3, 2)$.

It is easy to see that the problem of minimizing $F(S, d, x)$ with $x \in [0, x_{\max}]$ reduces to k analogous problems with $x \in [x_{i-1}, x_i]$, $i = 1, \ldots, k$.

3. Minimizing earliness/tardiness

Consider the problem of minimizing $F(S, d, x) = \sum_{j=1}^{n} (\alpha E_j + \beta T_j + \gamma d) + g(x)$, subject to $x \in [x_{i-1}, x_i]$, $i \in \{1, \ldots, k\}$. Denote an optimal schedule and the optimal due date and resource values by $S^{(i)}$, $d^{(i)}$ and $x^{(i)}$, respectively.

For the development of our algorithm, we need the following lemma. Calculate
$$b = \max\left\{0, \left\lceil \frac{n(\beta - \gamma)}{\alpha + \beta} \right\rceil\right\}.$$

Lemma 1. If b = 0, then an associated optimal due date is d = 0; and if b > 0, then there exists an optimal schedule in which the job sequenced in the bth position completes at d. Moreover, the objective function $F(S, d, x)$ for an arbitrary job sequence $S = (j_1, \ldots, j_n)$ can be written as
$$F(S, d, x) = \sum_{r=1}^{n} w_r p_{j_r} + g(x),$$
where
$$w_r = \begin{cases} (r-1)\alpha + n\gamma, & \text{if } r \le b, \\ (n-r+1)\beta, & \text{if } r > b, \end{cases}$$
is the positional weight of job $j_r$.

A proof of Lemma 1 for fixed processing times can be found in Panwalkar et al. [1] and Baker and Scudder [7]. In their proofs, it is immaterial whether the processing times are fixed or variable. Therefore, Lemma 1 holds for variable processing times.

According to the result of the previous section, if $x \in [x_{i-1}, x_i]$, then $Q^{(i)} = (j_1^i, \ldots, j_n^i)$ is a sequence such that $p_{j_1^i} \le \cdots \le p_{j_n^i}$. In this case, it follows from Lemma 1 that an optimal job sequence $S^{(i)}$ can be found in $O(n \log n)$ time by the following matching procedure. In addition to the SPT sequence $Q^{(i)}$, consider the sequence $(k_1, \ldots, k_n)$ such that $w_{k_1} \ge \cdots \ge w_{k_n}$. In the sequence $S^{(i)}$, job $j_r^i$ is sequenced $k_r$th, $r = 1, \ldots, n$.

It is easy to see that, similarly to $Q^{(i+1)}$ and $Q^{(i)}$, sequence $S^{(i+1)}$ can be obtained from $S^{(i)}$ by reversing the order of the jobs corresponding to the distinct lines from $I(x_i, y)$ for each $y \in Y(x_i)$. Therefore, sequence $S^{(i+1)}$ can be constructed in $O(q_i)$ time if $S^{(i)}$ is given. Sequence $S^{(1)}$ can be constructed in $O(n \log n)$ time.

For the fixed job sequence $S^{(i)} = (l_1^i, \ldots, l_n^i)$ and variables x and d, calculate
$$F(S^{(i)}, d, x) = \sum_{r=1}^{n} w_r p_{l_r^i} + g(x) = \sum_{r=1}^{n} w_r (u_{l_r^i} - v_{l_r^i} x) + g(x) = K^{(i)} - L^{(i)} x + g(x),$$
where $K^{(i)} = \sum_{r=1}^{n} w_r u_{l_r^i}$ and $L^{(i)} = \sum_{r=1}^{n} w_r v_{l_r^i}$. Therefore, the problem with $x \in [x_{i-1}, x_i]$ reduces to minimizing $g(x) - L^{(i)} x$, subject to $x_{i-1} \le x \le x_i$.
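
The matching procedure and the constants $K^{(i)}$ and $L^{(i)}$ are straightforward to compute; the sketch below is illustrative only (the function names and the job data are hypothetical, and b is obtained from $\alpha$, $\beta$, $\gamma$ as defined before Lemma 1).

```python
# Illustrative sketch: positional weights of Lemma 1, the matching procedure
# that builds S^(i), and the constants K^(i), L^(i) of the linearized cost.
import math

def positional_weights(n, alpha, beta, gamma):
    b = max(0, math.ceil(n * (beta - gamma) / (alpha + beta)))
    w = [(r - 1) * alpha + n * gamma if r <= b else (n - r + 1) * beta
         for r in range(1, n + 1)]
    return w, b

def schedule_and_constants(u, v, x_mid, alpha, beta, gamma):
    """Build S^(i) for the interval whose midpoint is x_mid; return (S, K, L)."""
    n = len(u)
    w, _ = positional_weights(n, alpha, beta, gamma)
    spt = sorted(range(n), key=lambda j: u[j] - v[j] * x_mid)   # Q^(i)
    by_weight = sorted(range(n), key=lambda r: -w[r])           # positions, largest weight first
    seq = [None] * n
    for job, pos in zip(spt, by_weight):   # r-th shortest job -> r-th largest weight
        seq[pos] = job
    K = sum(w[r] * u[seq[r]] for r in range(n))
    L = sum(w[r] * v[seq[r]] for r in range(n))
    return seq, K, L                       # F(S^(i), d, x) = K - L*x + g(x)

# Same hypothetical three-job instance as before, on the interval [0, 1.5].
u, v = [10.0, 7.0, 4.0], [3.0, 1.0, 0.5]
print(schedule_and_constants(u, v, x_mid=0.75, alpha=1.0, beta=2.0, gamma=0.5))
```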

It is easy to see that $K^{(i+1)}$ and $L^{(i+1)}$ can be calculated in $O(q_i)$ time if $I(x_i, y)$, $y \in Y(x_i)$, and $K^{(i)}$, $L^{(i)}$ are given. $K^{(1)}$ and $L^{(1)}$ can be calculated in $O(n)$ time if sequence $S^{(1)}$ is given.

The function g(x) may possess the following properties.

Property 1. For any $x \in [0, x_{\max}]$, g(x) is computable in constant time.

Property 2. For an arbitrary constant L, the function $g(x) - Lx$ has a finite number of local minima in $[0, x_{\max}]$, and they can be found in constant time.

Properties 1 and 2 are satisfied for many functions, such as polynomials of degree at most five, some power functions, and exponential and trigonometric functions. For example, if g(x) is a polynomial of degree five, then its derivative $g'(x)$ is a polynomial of degree four. Local minima of the function $g(x) - Lx$ are among the solutions of the equation $g'(x) = L$, which is of degree four. Such an equation has at most four roots, and they can be found analytically in constant time; see, for example, Herstein [19].

If Property 2 is satisfied, let $e_1^{(i)}, e_2^{(i)}, \ldots, e_c^{(i)}$ be the local minima of $g(x) - L^{(i)} x$ in $[0, x_{\max}]$. Then $x^{(i)} \in \{x_{i-1}, x_i, e_1^{(i)}, \ldots, e_c^{(i)}\} \cap [x_{i-1}, x_i]$ for a continuously divisible resource and $x^{(i)} \in \{x_{i-1}, x_i, \lfloor e_r^{(i)} \rfloor, \lceil e_r^{(i)} \rceil \mid r = 1, \ldots, c\} \cap [x_{i-1}, x_i]$ for a discrete resource. If Property 1 is additionally satisfied, then $x^{(i)}$ and $F(S^{(i)}, d^{(i)}, x^{(i)})$ can be found in constant time. Note that we do not need $d^{(i)}$ to calculate the value of the function.

Let $(S^*, d^*, x^*)$ denote an optimal solution to the problem with $x \in [0, x_{\max}]$. It is easy to see that $F(S^*, d^*, x^*) = \min\{F(S^{(i)}, \ldots$
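
As a sketch of how the subproblem $\min\{g(x) - L^{(i)} x : x_{i-1} \le x \le x_i\}$ can be solved when, as in the example above, g is a polynomial of degree at most five: the candidates are the interval endpoints and the real solutions of $g'(x) = L^{(i)}$, rounded down and up when the resource is discrete. The code below is illustrative only (the function name and the quintic coefficients are hypothetical, and NumPy is assumed to be available for the root finding).

```python
# Illustrative sketch: minimize g(x) - L*x over [lo, hi] for a polynomial g
# of degree <= 5, for a continuous or a discrete resource.
import math
import numpy as np

def minimize_subproblem(g_coeffs, L, lo, hi, discrete=False):
    """g_coeffs: coefficients of g, highest degree first. Returns (x, value)."""
    g = np.poly1d(g_coeffs)
    # Stationary points of g(x) - L*x are the real solutions of g'(x) = L.
    roots = (g.deriv() - np.poly1d([L])).roots
    stationary = [r.real for r in roots if abs(r.imag) < 1e-9]
    candidates = {lo, hi}
    for e in stationary:
        candidates.update((math.floor(e), math.ceil(e)) if discrete else (e,))
    feasible = [x for x in candidates if lo <= x <= hi]
    best = min(feasible, key=lambda x: g(x) - L * x)
    return best, g(best) - L * best

# Hypothetical quintic resource consumption cost g(x).
g_coeffs = [0.2, -2.0, 6.0, -5.0, 0.0, 3.0]
print(minimize_subproblem(g_coeffs, L=1.5, lo=0.0, hi=3.0))
print(minimize_subproblem(g_coeffs, L=1.5, lo=0.0, hi=3.0, discrete=True))
```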
