
Scheduling No-Wait Production with Time Windows and Flexible Processing Times

Fabrice Chauvet, Jean-Marie Proth, Member, IEEE, and Yorai Wardi, Member, IEEE

Abstract—This paper presents a low-complexity algorithm for on-line job scheduling at workcenters along a given route in a manufacturing system. At each workcenter, the job has to be processed by any one of a given set of identical machines. Each machine has a preset schedule of operations, leaving out time windows during which the job’s processing must be scheduled. The manufacturing system has no internal buffers and the job cannot wait between two consecutive operations. There is some flexibility in the job’s processing times, which must be confined to given time intervals. The scheduling algorithm minimizes the job’s completion time, and it is executable in real time whenever the job requirement is generated.

Index Terms—No-wait manufacturing, on-line scheduling, time windows.

I. INTRODUCTION

THIS paper considers an on-line job scheduling problem in no-wait manufacturing. The underlying manufacturing system is configured for multiple products. Whenever a product demand arrives, a corresponding work order is generated, involving job processing at various workcenters. By “job” we mean a part, component, or workpiece that is processed by machines stationed at the various workcenters. Job processing by a machine is called an operation. Some operations are sequential and follow a given order, while other operations must process multiple jobs in a concurrent fashion, as in the case of assembly. We characterize the work order by its associated jobs’ routes and processing times at the various workcenters. To simplify the notation (and exposition), we identify the workcenters with production stages, and assume that only one type of operation can take place at each workcenter. We comment that the meaning of the term “job” can be extended to include batches or lots of parts.

Consider a particular product demand arising at a given time, and the associated work order, also generated at that time. The jobs’ routing is given and is dependent on the product in question. Each workcenter contains one or more identical machines, only one of which is required for processing the related jobs.

Manuscript received March 5, 1999; revised February 18, 2000. This paper was recommended for publication by Associate Editor D. Wu and Editor N. Viswanadham upon evaluation of the reviewers’ comments. The work of Y. Wardi was supported by the National Science Foundation under Grant INT-9402585. This paper was presented in part at the IEEE/EAMRI Rensselaer’s International Conference on Agile, Intelligent, and Computer-Aided Manufacturing, Troy, NY, October 1998.

F. Chauvet is with INRIA Lorraine, 57070 Metz, France.

J.-M. Proth is with the Institute for Systems Research, University of Maryland, College Park, MD 20742 USA.

Y. Wardi is with the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: [email protected]).

Publisher Item Identifier S 1042-296X(01)02768-9.

A machine can process only one job at a time. At the time the work order is generated, each machine in the system has a preset schedule of activities, such as maintenance or previously scheduled processing of other jobs. The scheduling problem for the present work order is to schedule all of its associated jobs at the various workcenters according to their routes in order to minimize the product’s completion time without interfering with any of the machines’ preset activity schedules. This scheduling must be performed at the time the work order is generated. Once determined, a job’s schedule cannot be modified, and it becomes part of the associated machine’s activity schedule. The product’s completion time is defined as the time its last job processing is completed.

The no-wait characteristic of the system implies the absence of internal buffers, and therefore jobs must be “in processing” continually while following their routes. Times required for transportation, loading, and unloading can be assumed negligible if they are small as compared to the processing times by machines. Alternatively, these tasks may be considered as manufacturing operations that must be scheduled if they require significant times. There is flexibility in the processing times: depending on the product type, the job, and the particular workcenter, there is a given interval that must contain the job’s processing time.

We summarize and clarify. A product demand arriving at a given time causes a work order to be generated at that time. The work order corresponds to jobs’ routing through the system. Each workcenter contains one or more identical machines for processing the various jobs. Each machine has an associated finite set of time intervals, during one of which a job’s processing must take place. The processing time has to fall within a given (machine- and job-dependent) interval. Jobs are not allowed to wait once they enter the system. The scheduling problem amounts to computing, at the time of the demand, the schedule that minimizes the product’s completion time. The following three scheduling parameters have to be determined:

1) jobs’ release times, namely the times jobs are put into the system;

2) the choice of a machine at each workcenter along the job’s route;

3) the processing times at the various machines.

In what follows, we propose a low-complexity algorithm for real-time solution of this scheduling problem.
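To make the input data of this scheduling control problem concrete, the following sketch shows one possible encoding of a work order. The class and field names (Window, Operation, p_min, and so on) are illustrative assumptions made here, not notation taken from the paper.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Window:
    """One availability interval on some machine of the operation's workcenter."""
    start: float
    end: float  # the last window of each operation is assumed unbounded (float("inf"))

@dataclass
class Operation:
    """One operation of the work order (hypothetical encoding)."""
    index: int
    p_min: float                 # shortest admissible processing time
    p_max: float                 # longest admissible processing time
    windows: List[Window] = field(default_factory=list)   # sorted by start time
    preceding: List[int] = field(default_factory=list)    # operations whose end must equal this one's start
    following: List[int] = field(default_factory=list)    # operations whose start must equal this one's end
    same_start: List[int] = field(default_factory=list)   # operations forced to start simultaneously
    same_end: List[int] = field(default_factory=list)     # operations forced to end simultaneously

@dataclass
class Decision:
    """The three scheduling parameters chosen for one operation."""
    start: float        # release/begin time
    window_index: int   # which window (hence which machine) is used
    end: float          # start plus the chosen processing time

A schedule is then a mapping from operation indices to Decision records; the algorithms of Section III compute such a mapping one operation at a time.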

The above scheduling problem resembles the problem of production rescheduling in dynamic manufacturing environments, where a schedule is updated as events occur. However, the problem here is not that of rescheduling, but rather that of computing a schedule for each product demand. The computation of each schedule, done in real time, depends on past schedules via the time windows resulting from them.


Fig. 1. An example with assembly–disassembly operation.

Therefore, the optimal scheduling problem can be viewed as an optimal control problem, and hence it will be termed a scheduling control problem.

Questions of manufacturing-systems control requiring on-line solutions have become quite important in recent years. Due to decreasing product life cycles, planning horizons are becoming shorter, and future tools for planning and scheduling will be expected to provide immediate (on-line) solutions. At the same time, many companies are diversifying their product lines while keeping inventory levels to a minimum. All of this highlights the importance of scheduling control in no-wait manufacturing environments. Some processes (e.g., chemical) have flexibility in processing times. In other cases, such flexibility reflects the machines’ ability to briefly hold semi-finished products, which can compensate for the absence of buffers.1

The problem of scheduling a given set of operations on semi-identical processors (identical, according to our terminology) with availability time intervals was considered by Schmidt [8], [9]. These references concerned feasibility, and developed low-complexity algorithms for computing feasible schedules. The present paper is different in that it concerns optimality, and it imposes assumptions guaranteeing feasibility.

Scheduling in no-wait manufacturing has been extensively discussed in the literature, often in the context of systems with blocking. Callahan [1] used queueing models to study no-wait problems in the steel industry. Chu et al. [4] and Chauvet et al. [2] have considered surface treatment problems with no-wait models. McCormick et al. [6] have studied a cyclic flowshop with buffers, which can be transformed into a blocking problem by considering the buffers as resources with arbitrary processing times. Hall and Sriskandarajah [5] presented a survey of scheduling problems with blocking and no-wait, and Rachamadugu and Stecke [7] have classified scheduling procedures in no-wait environments. The above references either constrain the processing times to be given and fixed [4], [6], [7], or allow considerable flexibility in their values [2], [5]. This paper permits some flexibility in the processing times by allowing them to assume values from within given intervals. It differs from the existing works in that it allows neither waiting nor blocking,2 and thus addresses a scheduling problem in a new context that has not been explored yet. A preliminary version has been presented in [3].

1 In this case, the term “no wait” is not quite precise since there is waiting at the machines. However, that term is made precise by referring to the model which incorporates the holding time within the processing time.

2 Again, when flexible processing times represent the possibility of holding a part by a machine then certainly there is blocking, but the model having flexible processing times can be viewed as excluding the possibility of blocking.

The rest of the paper is organized as follows. Section II formulates the problem and establishes the requisite notation. Section III develops the algorithm and carries out its analysis, while relegating some of the proofs to the Appendix. Section IV provides an example, and Section V concludes the paper.

II. PROBLEM FORMULATION

Consider a product demand and its associated work order. Suppose that the work order involves a finite set of operations, denoted by , and let denote the cardinality of the set . The operations are indexed by and denoted by . For example, see Fig. 1, involving assembly and disassembly operations. In this example, and , and the work order involves multiple (two) jobs.

Each operation is associated with a workcenter, and each workcenter contains one or more identical machines. Each one of these machines can perform the operation in question. The routing of jobs, associated with the work order, among the various workcenters is given in the example of Fig. 1.

For every , the following notation will be used:

• —the set of operations immediately preceding ;
• —the set of operations immediately following ;
• —the set of operations other than that must begin at the same time as ;
• —the set of operations other than that must end at the same time as .

For example, in Fig. 1, ( is an assembly operation), because must end at the same time as , and because no other operation must end at the same time as . To simplify the exposition, we will sometimes identify the operation with the index , and thus will say that . Also, we will implicitly assume that a specific operation can be required no more than once for a given work order.

We assume that no part can wait for an operation, that is, for , must start at the time ends. Moreover, it can be seen that, if and , then and . We observe that, in the case of an assembly operation, if then , and in the case of disassembly, if then . We assume the contrary: if ( , respectively) then there exists a corresponding disassembly (assembly, respectively) operation preceding (following, respectively) and , that is, there exists ( , respectively). This latter assumption is not restrictive in view of the general definition of “operation,” possibly including loading/unloading and material transport.


Next, let and denote the beginning and ending times, respectively, of , and let denote the processing time of . Given and , let be the given interval that must contain , that is, we impose the requirement that .

Suppose that, for every operation , there exists a finite set of closed time intervals, denoted by , [for some given integer ], during one of which must take place. These time intervals, henceforth called time windows, or windows, reflect periods during which the various resources are available for the operation . We observe that several windows may be overlapping, since they can be associated with various identical machines at the workcenter where must be performed. Denoting by and the starting time and end time of , we have that . We order these windows in the increasing order of their starting times , and in case of two identical starting times, in the increasing order of their end times . We also assume that , that is, the last window’s end point, for each operation, is infinite. This assumption implies that there exists at least one feasible schedule, which can be obtained by choosing a late window, and it allows us to focus on the problem of optimality.

In summary, the work order involves the operations . The operations’ precedence relations and jobs’ routing are given by the sets , , , and , associated with every . Moreover, for every , we are given numbers and in whose terms the processing time of will be constrained, and a set of time windows [for a given integer ], , , during one of which the operation must take place.

Definition 2.1: A feasible schedule consists of operations , , beginning at time and ending at time , satisfying the following four constraints for all .

For some : and (2.1)

(2.2)

for every (2.3)

for every , and for every (2.4)

Observe that (2.1) expresses the requirement that the operation be performed during one of the time windows, (2.2) imposes upper and lower bounds on the processing time, and (2.3) and (2.4) define the routing and precedence relations among the jobs and express the fact that no waiting is allowed.
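The symbols in the display above were lost in conversion. Under assumed notation — $b_i$ and $e_i$ for the beginning and end of operation $i$, $[p_i^{\min},p_i^{\max}]$ for its admissible processing times, $[f_{i,k},g_{i,k}]$ for its $k$th window ($k=1,\dots,K(i)$), and $\Gamma^-(i)$, $S(i)$, $E(i)$ for its predecessor, same-start, and same-end sets — constraints (2.1)–(2.4) can be read as follows; this is a reconstruction, not the paper’s original typesetting.

\begin{align}
& f_{i,k} \le b_i \quad\text{and}\quad e_i \le g_{i,k}
    \quad \text{for some } k \in \{1,\dots,K(i)\},          \tag{2.1}\\
& p_i^{\min} \;\le\; e_i - b_i \;\le\; p_i^{\max},          \tag{2.2}\\
& b_i = e_j \quad \text{for every } j \in \Gamma^{-}(i),    \tag{2.3}\\
& b_i = b_j \ \text{for every } j \in S(i)
  \quad\text{and}\quad
  e_i = e_j \ \text{for every } j \in E(i).                 \tag{2.4}
\end{align}

The objective is then to minimize $\max\{\,e_i : \text{operation } i \text{ has no successor}\,\}$, i.e., the product completion time.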

The scheduling control problem is to choose a feasible schedule that minimizes the product completion time. That is, to compute such that (2.1)–(2.4) are satisfied and the last operation ends at the earliest possible time. In formal terms, let us define the set by , namely the set of indices whose corresponding operations are at the “end of a line,” and observe that the completion time is the term .

An algorithm for this purpose will be developed in Section III.

III. ALGORITHMS

This section presents an algorithm for computing the feasible schedule that minimizes the product completion time. Consider the undirected graph whose nodes are the operations, and where there is an arc between the operations and iff . Recall that, by assumption, ( , respectively) if and only if there exists an operation ( , respectively).

Assumption 3.1: The above undirected graph is loop free.

We remark that this assumption excludes the case of an assembly operation combining two jobs that originated from the same disassembly operation.

In what follows, we identify an index with the operation for the sake of notational convenience. Thus, for example, we will use the notation instead of .

We next renumber the operations in a sequence in a way that will be useful for the scheduling control algorithm, below. The first operation must be at the start or the end of a line, namely, have no preceding or succeeding operations. For , either is at the start or the end of a line, or all of its preceding or succeeding operations must have been numbered. The sequencing is done according to Algorithm 3.1, below. Let denote the set of operations that have been numbered when the operation is considered, namely , and define . Observe that while .

Algorithm 3.1:

Step 0) Set .
Step 1) Choose to be any operation satisfying either one of the following two conditions:
1) ;
2) .
Step 2) Set . If , go to Step 1. If , exit.

Note that must be at the start or the end of a line. For , if 1) is satisfied then every operation in has been numbered, and if 2) is satisfied then every operation in has been numbered.

Proposition 3.1: Algorithm 3.1 is consistent in the sense that it results in a sequencing of all of the operations.

Proof: The proof is immediate in view of Assumption 3.1.

We note that the sequencing resulting from Algorithm 3.1 is by no means unique.
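The precise conditions of Algorithm 3.1 were partially lost in conversion. The sketch below implements one plausible reading — at each step, number any operation all of whose predecessors, or all of whose successors, have already been numbered — and is offered as an illustration under that assumption, not as the paper’s exact pseudocode.

from typing import Dict, List, Set

def sequence_operations(preceding: Dict[int, List[int]],
                        following: Dict[int, List[int]]) -> List[int]:
    """Return a numbering of all operations such that each chosen operation has
    either all of its predecessors or all of its successors already numbered."""
    ops = set(preceding) | set(following)
    numbered: Set[int] = set()
    order: List[int] = []
    while len(order) < len(ops):
        for op in sorted(ops - numbered):
            preds_done = all(p in numbered for p in preceding.get(op, []))
            succs_done = all(s in numbered for s in following.get(op, []))
            if preds_done or succs_done:   # condition 1) or condition 2)
                numbered.add(op)
                order.append(op)
                break
        else:
            # Should not happen when Assumption 3.1 (loop-free undirected graph) holds.
            raise ValueError("no admissible operation found; check the precedence data")
    return order

As remarked above, the resulting ordering is not unique; any order satisfying the two conditions is acceptable for the scheduling control algorithm that follows.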

We next develop the scheduling control algorithm. Recall that operation is carried out during a time window for some .

Definition 3.1: A set of windows is a standard window set if it contains one window associated with each workcenter, namely, for all there exists such that .

Definition 3.2: A schedule is feasible with respect to a standard window set if it is a feasible schedule and, for every , the operation is performed during the time window , namely, the following inequalities hold [see (2.1)]:

and (3.1)

Definition 3.3: A schedule is minimal feasible with respect to a standard window set if it has the least completion time among the schedules that are feasible with respect to .

Definition 3.4: A schedule is optimal if it solves the scheduling control problem, namely, it is a feasible schedule with the earliest-possible product completion time.

If we could compute a minimal feasible schedule with respect to every standard window set, then the schedule among them with the earliest completion time would be optimal. This approach for solving the scheduling control problem could be impractical for two reasons: 1) not every standard window set has a feasible schedule, and 2) the number of standard window sets is —indicating an exponential complexity. We get around the first difficulty by relaxing the feasibility requirement, and note that the second difficulty will also be overcome, as will be seen below. Recall that the feasibility requirements are given in terms of (2.1)–(2.4), or (3.1) in lieu of (2.1).
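The count that was lost at the end of point 2) follows from the definition of a standard window set, which selects one window per operation: with $K(i)$ windows available for operation $i$ (assumed symbols, consistent with the reconstruction given after (2.1)–(2.4)), the number of standard window sets is

\prod_{i=1}^{n} K(i),

which grows exponentially with the number of operations $n$. The iterative procedure developed below sidesteps this growth because, as shown later, it advances at least one window index at every iteration.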

Definition 3.5: A schedule is almost feasible with respect to a standard window set if all of the feasibility requirements are satisfied except possibly for the right inequality of (3.1), i.e., we permit the condition .

Definition 3.6: A schedule is minimal almost feasible with respect to a standard window set if it is almost feasible with respect to and, for every other schedule that is almost feasible with respect to , the inequalities and hold for every .

Removing the requirement [the right inequality of (3.1)] ensures that every standard window set has an almost-feasible schedule. Consequently, every standard window set has a minimal almost-feasible schedule. Algorithm 3.2, below, computes such a minimal almost-feasible schedule with complexity . One among the schedules computed by Algorithm 3.2, taken over the range of standard window sets, will be shown to constitute an optimal schedule. Still, the complexity issue, indicated by the number of standard window sets, remains. However, we address it by devising a procedure requiring a search among at most standard window sets.

The following algorithm computes a minimal almost-feasible schedule with respect to a standard window set . It has two steps: the first step computes lower bounds on and , denoted by and , respectively, and the second step uses these bounds to compute a desired schedule (the minimal almost-feasible schedule is not unique). The bounds are computed by a forward recursion, while the schedule is computed by a backward recursive procedure.

Given a standard window set . As for Algorithm 3.1, we define the sets and by , with , and .

Algorithm 3.2:

Step 1: Forward computation of lower bounds. For , compute and according to either Case I or Case II, below (we later will see that if the conditions defining both cases are satisfied then the resulting computations are identical).

Case I: . Set
 (3.2)
and set
 (3.3)

Case II: . Set
 (3.4)
and set
 (3.5)

Step 2: Backward computation of the schedule. For every , compute and according to either one of the following respective cases.

Case 1: . Set for any , and set .
Case 2: . Set for any , and set .
Case 3: . Set for any , and set .
Case 4: . Set for any , and set .
Case 5: None of Cases 1–4 is satisfied. Set , and set .
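Because the right-hand sides of (3.2)–(3.5) were lost in conversion, it may help to record what the two passes reduce to in the simplest special case of a single serial route (every operation has at most one predecessor and one successor, and no same-start or same-end partners). Under the assumed notation introduced after (2.1)–(2.4), with $k_i$ the selected window of operation $i$ and $T$ the time at which the work order is generated, one consistent specialization is the following; it is an illustration, not the paper’s general formulas.

\begin{align*}
\text{Forward (lower bounds):}\quad & a_i = \max\bigl(f_{i,k_i},\, d_{i-1}\bigr), \qquad d_i = a_i + p_i^{\min}, \qquad d_0 := T,\\
\text{Backward (schedule):}\quad & e_n = d_n, \qquad b_i = \max\bigl(a_i,\, e_i - p_i^{\max}\bigr), \qquad e_{i-1} = b_i .
\end{align*}

The forward pass computes the earliest possible start and end of every operation when the right window ends are ignored; the backward pass then stretches processing times, within $[p_i^{\min},p_i^{\max}]$, just enough to connect consecutive operations without waiting. The general formulas additionally take maxima over the sets of preceding, succeeding, same-start, and same-end operations, which is, presumably, what the distinction between Cases I/II and Cases 1–5 handles.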

Some of the variables in Algorithm 3.2 can be computed in more than one way. We say that such a variable is well defined if it has a unique value regardless of the way it is computed. In what follows, we will prove that all of the variables computed by the algorithm are indeed well defined.

Proposition 3.2: All of the variables computed by Algorithm 3.2 are well defined.

The proof is highly technical and is relegated to the Appendix. The next assertion concerns the minimality of the schedule computed by Algorithm 3.2.


Proposition 3.3: For a given standard window set , the schedule computed by Algorithm 3.2 is minimal almost feasible with respect to .

The proof requires some preliminary results, whose proofs are supplied later in order not to break the flow of the argument.

Lemma 3.1: For every schedule that is almost feasible with respect to , and for all , . ( and were computed by Algorithm 3.2 and are independent of the schedule.)

Lemma 3.2: For every schedule that is almost feasible with respect to , and for every ,

and (3.6)

Lemma 3.3: For every ,

and (3.7)

Lemma 3.4: For every , the following inequalities hold:

(3.8)

Lemmas 3.1 and 3.3 will be required for proving Lemmas 3.2 and 3.4, which will be directly applied to the following proof of Proposition 3.3.

Proof of Proposition 3.3: Consider first the almost feasibility of the schedule computed by Algorithm 3.2 (minimality will be addressed later). Equations (2.3) and (2.4) follow from the fact that the quantities computed by Algorithm 3.2 (Step 2) are well defined; see Proposition 3.2. Regarding (2.1) [alternatively, (3.1)], the left inequality follows from the fact that , (Lemma 3.2), and the right inequality is not necessary for almost feasibility. Finally, (2.2) follows from Lemma 3.4. Consequently, the schedule is almost feasible with respect to . Its minimality follows from Lemma 3.2.

We next prove the above lemmas.

Proof of Lemma 3.1: We argue by induction. For , by Step 1 of Algorithm 3.2, and . At the same time, since is almost feasible, the left inequality of (2.1) implies that and, by (2.2), . This establishes that and .

Fix and consider the following inductive hypothesis: “For every , and .” We now prove these inequalities for . Since is almost feasible, and . Moreover, for every , and ; for every , and ; for every , and ; and for every , and . Therefore, the following two inequalities apply:

(3.9)

and

(3.10)

Now consider and as computed in Step 1. In Case I, is given by (3.2). By the inductive hypothesis, each term on the right-hand side of (3.2) is no greater than the corresponding term in (3.9), and consequently, . Next, the schedule is almost feasible with respect to , and hence, by (2.3) and (2.4), we have that for all , and for all . Consequently, (3.3), together with the inductive hypothesis and the fact that (established above), imply that . But by (2.2), and hence, .

In Case II of Step 1, the arguments are similar by considering (3.4), (3.10), and (3.5) instead of (3.2), (3.9), and (3.3).

Proof of Lemma 3.2: By Step 1 of Algorithm 3.2, and especially (3.2) and (3.5), we have that for all . Next, let be almost feasible with respect to . Observe that, if Case 5 in Step 2 holds for the computation of , then and , and therefore, since and by Lemma 3.1, (3.6) holds.

We next prove the lemma’s assertion by induction, from down to . For , Case 5 in Step 2 of Algorithm 3.2 must hold and hence we have seen that the inequalities in (3.6) are satisfied. Next, fix , and consider the following inductive hypothesis: “For every , (3.6) holds for .” We next prove that (3.6) also holds for .

Consider the various cases in Step 2 of Algorithm 3.2. If Case 1 holds then, for all , and . At the same time, since , , , and by Lemma 3.1, . The last two inequalities imply that . Therefore, and by the inductive hypothesis, we conclude that , and

This completes the proof of (3.6) for . The proofs for Cases 2–4 are similar and hence omitted. Case 5 has been discussed earlier.

Proof of Lemma 3.3: We argue by induction from down to . For , Case 5 in Step 2 must hold and hence (3.7) is immediate. Next, fix and consider the following inductive hypothesis: “For all , (3.7) holds for .” We now prove (3.7) for .

Consider the various cases of Step 2 in Algorithm 3.2. In Case 1, for some (and all) . Since , , and therefore, and by Step 1 [either (3.2) or (3.5)], . Consequently, and by the inductive hypothesis, we conclude that . The fact that follows directly from the formula defining in Case 1 of Step 2. This establishes (3.7) for .


Cases 2–4 can be treated in a similar fashion, and in Case 5, (3.7) is immediate.

Proof of Lemma 3.4: We first prove the inequalities

(3.11)

By Lemma 3.3, and . To prove (3.11), consider first Step 1 of Algorithm 3.2. If Case I holds, then by (3.3), , and hence . Further considering (3.3): if , then clearly . If , then by (3.2), , and hence, . Similarly, if , then by (3.2), , hence . In either case, (3.11) holds.

Suppose next that Case II in Step 1 holds for . By (3.5), , and hence . Considering (3.5), if then . If is equal to any one of the other three terms in the max of (3.5), then by (3.4), . In any event, , and (3.11) holds. We thus have established (3.11).

Let us next explore the bounds on by considering the various cases in Step 2 of Algorithm 3.2. In Cases 1 and 2, , and hence , implying that . If then . If , then by (3.11) and the fact that (Lemma 3.3), we have that . Consequently, (3.8) holds for all .

In Cases 3 and 4, , and hence . If then obviously . If , then by (3.11) and the fact that (Lemma 3.3), we obtain that . In any event, (3.8) is satisfied for .

Finally, in Case 5, (3.8) is immediate in view of (3.7).

We next discuss the scheduling control algorithm. Algorithm 3.2 gives us a minimal almost-feasible schedule with respect to a given standard set . If such a schedule were feasible then it would yield the minimal completion time among the schedules that are feasible with respect to . What happens if is not feasible? The answer is given by the following result, which will be shown to have consequences for the complexity of the scheduling control algorithm.

Proposition 3.4: Let be the schedule computed by Algorithm 3.2 with respect to a given standard window set . If is not feasible then there is no schedule that is feasible with respect to .

Proof: Suppose that is not feasible. Since this schedule is almost feasible with respect to , there exists such that . Let be another schedule that is almost feasible with respect to . Then, by Proposition 3.3, , and consequently , implying that is not feasible with respect to .

Proposition 3.4 indicates a way to compute an optimal schedule: apply Algorithm 3.2 for every standard window set, and pick the schedule having the earliest product completion time from among the schedules that are feasible. There is a practical issue, however, because the number of standard window sets is . We therefore take a slightly different approach, by considering standard window sets in an iterative fashion. At each iteration we apply Algorithm 3.2 for computing a minimal almost-feasible schedule. If the resulting schedule is not feasible then (by Proposition 3.4) there exists no feasible schedule for . On the other hand, if the above schedule is feasible, then it will be shown to be optimal as well. In other words, the algorithm iterates among infeasible schedules until a feasible schedule is found, which is provably optimal. Moreover, the number of iterations required is at most . The standard window set at the initial iteration is .

To formalize, we first establish some notation. Given an integer set (vector) we denote it by , and likewise, we denote , and similarly for other integer–vector notation. We will henceforth assume that for such an integer–vector, for all . We say that if for all , and we say that if and . Given we denote by the standard set , and by the corresponding minimal almost-feasible schedule computable by Algorithm 3.2. The following procedure will be shown to compute the optimal schedule.

Algorithm 3.3:

Step 0. Set for all , and set .
Step 1. Compute by Algorithm 3.2.
Step 2. Feasibility test. If for all , then stop and exit.
Step 3. For every , compute , set , and with , go to Step 1.

We explain the algorithm. Given a standard window set , Step 1 computes the minimal almost-feasible schedule . If this schedule is feasible then the algorithm exits; the above schedule is optimal, as will be shown later. If is not feasible then, since it is almost feasible, there exist some (possibly multiple) such that . We then pick the next earliest window such that (such a window exists because of the assumption that ), modify according to Step 3, and reiterate Step 1. We observe that Algorithm 3.3 iterates through at most standard window sets.
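As a self-contained illustration of how Algorithms 3.2 and 3.3 fit together, the sketch below implements the serial-route special case described earlier: an inner forward/backward pass for a fixed choice of windows, and an outer loop that advances to the next window whenever a computed end time falls beyond the right end of the selected window. All names, the data layout, and the restriction to a single serial route are assumptions made for the illustration; the paper’s algorithms handle general assembly/disassembly routings.

import math
from typing import List, Tuple

# Each operation is (p_min, p_max, windows), with windows a list of
# (start, end) pairs sorted by start; the last window end is math.inf.
Op = Tuple[float, float, List[Tuple[float, float]]]

def schedule_for_windows(ops: List[Op], k: List[int], T: float):
    """Forward/backward pass (an Algorithm 3.2 analog) for a serial route and a
    fixed window choice k; the right ends of the windows are ignored here."""
    n = len(ops)
    a = [0.0] * n  # lower bounds on start times
    d = [0.0] * n  # lower bounds on end times
    prev_end = T
    for i, (p_min, _p_max, wins) in enumerate(ops):
        a[i] = max(wins[k[i]][0], prev_end)   # not before the window start or the predecessor's end
        d[i] = a[i] + p_min
        prev_end = d[i]
    b = [0.0] * n
    e = [0.0] * n
    e[n - 1] = d[n - 1]                        # last operation ends as early as possible
    for i in range(n - 1, -1, -1):
        p_max = ops[i][1]
        b[i] = max(a[i], e[i] - p_max)         # stretch the processing time only as far as allowed
        if i > 0:
            e[i - 1] = b[i]                    # no-wait: the predecessor ends exactly when i starts
    return b, e

def scheduling_control(ops: List[Op], T: float):
    """Outer loop (an Algorithm 3.3 analog): advance window choices until the
    schedule fits inside the selected windows."""
    k = [0] * len(ops)
    while True:
        b, e = schedule_for_windows(ops, k, T)
        violated = [i for i, (_, _, wins) in enumerate(ops) if e[i] > wins[k[i]][1]]
        if not violated:
            return b, e, k
        for i in violated:
            while ops[i][2][k[i]][1] < e[i]:   # earliest window ending no earlier than e[i]
                k[i] += 1

# Hypothetical two-operation instance, demand arriving at T = 0.
example = [
    (2.0, 4.0, [(0.0, 3.0), (6.0, math.inf)]),
    (3.0, 3.0, [(5.0, math.inf)]),
]
print(scheduling_control(example, T=0.0))

On this toy instance the first pass selects the first windows, detects that the first operation cannot finish by the end of its first window, advances that operation to its second window, and the second pass returns a schedule that is feasible and, for this instance, has the earliest possible completion time.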

We next prove that the algorithm computes an optimal schedule. The proof will be broken down into a sequence of lemmas. First, some notation is established.

Recall that, given and the standard window set , denoted the schedule computed by Algorithm 3.2. Likewise, we denote by and the respective bounds computable by Step 1 of Algorithm 3.2.

Lemma 3.5: Let . Then, for every , , , , and .

Proof: Recall that, by the way we order the windows, . Since , we have that for all . Now all of the computations in Algorithm 3.2 use max and plus, and hence the resulting quantities are monotone increasing in .


TABLE I
PROBLEM PARAMETERS

Suppose now that Algorithm 3.3 cycles through its main loop exactly times (for some positive integer ), and let us denote by the value of the integer–vector with which the algorithm computes during the th cycle. Thus, , the th schedule computed in Step 1 is , and the algorithm exits with the feasible schedule . For every , we define , the integer set that violates feasibility according to the right inequality of (3.1). Observe that for all while .

Lemma 3.6: Let satisfy the inequalities for some . Then, for every satisfying , we have that .

Proof: Let . By the definition of in Step 3 of Algorithm 3.3, we have that . By Lemma 3.5, , and hence .

Lemma 3.7: For every not satisfying the inequality , there exist and such that , and .

Proof: Let not satisfy the inequality . Since , we have that . Define ( exists because ). Now, since , , and hence , , and it is not true that . We next argue by contradiction. If the lemma’s assertion is not true then, for every , . At the same time, for every , , hence (because ). Consequently , a contradiction.

Lemma 3.8: For every not satisfying the inequality , the schedule is not feasible with respect to .

as follows: , and for every, . Then, ,

, and . By Lemma 3.6 as applied to,we have that . Since and by Lemma3.5, . Consequently . But

, hence , meaning that is not feasiblewith respect to .

Algorithm 3.3 computes a feasible schedule . The impli-cation of Lemma 3.8 is that every other feasible schedule,,must satisfy the inequality . In light of this, the fol-lowing conclusion is not surprising.

Theorem 3.1:The schedule is optimal.Proof: is a feasible schedule, because it passes the

feasibility test in Step 2 of Algorithm 3.3. By Lemma 3.8, anyother feasible schedule must be feasible with respect toforsome . Therefore, by Lemma 3.5, must be theoptimal schedule.

We remark that Algorithm 3.3 cycles through at mostiterations. At each iteration Algorithm 3.2 is in-

voked, and its complexity is of . Therefore, the complexityof the entire scheduling control algorithm is .

IV. EXAMPLE

Consider the system shown in Fig. 1, where the rectanglesrepresent operations and contain their respective numbers. Wefirst verify that these numbers satisfy the conditions of Algo-rithm 3.1, and hence could be obtained by it.

Algorithm 3.1:

and . Observe that. Therefore, in Step 1, Condition

1 is satisfied for .while and . There-

fore, in Step 1, Condition 1 is satisfied for ., hence Condition 1 in Step 1 is

satisfied for .Since is an assembly operation, .Observe that and . There-fore, in Step 1, Condition 1 is satisfied for .We observe that Condition 1 in Step 1 is not satis-fied for . The reason is thatwhile , since both and are imme-diate successors of the disassembly operation.However, Condition 2 is satisfied for . To seethis, observe that , hence

.Observe that and , while

. Therefore, in Step 1, Condition1 is satisfied for .It is evident that both Condition 1 and Condition 2are satisfied for .

We remark that the above sequencing, shown in Fig. 1, is notunique, and alternative sequences can be obtained by Algorithm3.1.

We next consider the scheduling control problem whose pa-rameters are shown in Table I.

Page 8: Scheduling no-wait production with time windows and flexible processing times

CHAUVET et al.: SCHEDULING NO-WAIT PRODUCTION WITH TIME WINDOWS AND FLEXIBLE PROCESSING TIMES 67

TABLE IIa AND d

Here represents the operation number, while the otherparameters in the left column are self explanatory. We solvethe scheduling control problem by applying Algorithm 3.3with the aid of Algorithm 3.2.3 By Step 0 of Algorithm3.3, we start with the standard window set corresponding to

, namely,. We next

follow the computation of Algorithm 3.2 for this standardwindow set.

Algorithm 3.2: First, Step 1 computes the boundsandfor .

, and therefore Case I applies.By (3.2), , and by (3.3),

., , , , and. Therefore, Case I holds. Equation (3.2)

implies that, and (3.3) implies that .

, , , and. Therefore Case I holds. Equation

(3.2) implies that, and (3.3) implies that

., , , , and

. Case I holds again. By (3.2),.

By (3.3), ., , , and

. We see that Case I fails to hold,but Case II is satisfied. By (3.4),

. By (3.5),

., , , ,

and . Case I holds. By (3.2),,

and (3.3) implies that ., , and. Case I holds. Equation (3.2) implies

that , andby (3.3), .

The results of Step 1 of Algorithm 2.1 are summarized inTable II.

We next turn to Step 2 of Algorithm 3.2 for computing theschedule in a backward fashion.

, , and .Consequently, Case 5 holds, and hence

and .

3A shortcut can be made by removing time windows that are shorter than theshortest-possible processing time. However, we follow the steps of the algo-rithm.

TABLE IIIb AND e

, , , , and. Therefore Case 1 holds, and hence

, and.

, , , and. Case 3 holds, and hence and

., , , and

. Case 1 holds, and therefore, and

., , , and

. Case 1 holds, hence , and

., , , , and

. Both Case 1 and Case 2 holdand they yield identical results. Choosing Case 2,we get that , and

., , and

. Case 1 holds, , and.

The results are summarized in Table III.

By Proposition 3.3, this schedule is minimal almost feasible with respect to the standard window set . To check for feasibility (and hence minimality, by Theorem 3.1) we check the right inequality of (3.1) to see whether for some . Recall that the end times are shown in Table III, and the windows’ right points, , are shown in the upper “Windows” row of Table I. Let denote the index set where feasibility is violated, namely, , and recall (Theorem 3.1) that the condition implies optimality. We can see that .

compute the earliest windows whose right end are no earlier thanthe computed . For , since , and hence

. For , , namely Recalling thatand observing the windows associated with the operation

in Table I, we see that . Similarly, for ,we get that . Next, Algorithm 3.3 sets

and returns to Step 1.The algorithm computes recursively a sequence of minimal

almost-feasible schedules until . The obtained results,including the first iteration, are shown as follows.

Iteration 1:

Windows:


Fig. 2. Optimal schedule.

Iteration 2:

Windows:

Iteration 3:

Windows:

Iteration 4:

Windows:

The optimal schedule was computed in Iteration 4, and it is shown in Fig. 2. In this figure, the shaded rectangles indicate time intervals not falling inside any of the windows, and hence must not overlap with the associated operations. The empty rectangles indicate time intervals during which the various operations take place. The completion time is . We observe that it takes four iterations for the algorithm to reach the optimal schedule, that is, only four standard window sets have had to be considered by Algorithm 3.2. Compare that with the total number of standard window sets, 648, to see the potential merit of the proposed algorithm.

V. CONCLUSIONS

This paper has developed an on-line algorithm for operation scheduling in no-wait manufacturing systems without blocking or waiting. The algorithm computes schedules with minimum product completion times, and it is suitable for scheduling control in multi-product manufacturing environments with frequently changing product requirements. The algorithm and its analysis are based on novel techniques. Future research will explore extensions to manufacturing systems subject to uncertain or random processing and transportation times.

APPENDIX

The purpose of this appendix is to provide a proof of Proposition 3.2. Let us fix a standard window set , where . Algorithm 3.2 computes a schedule. The question of it being well defined arises because of the possibility of multiple cases arising in either step of the algorithm. Recall that well-definedness means that, regardless of how computed, the variables , , , and have unique values.

Proof of Proposition 3.2: Let us first prove that and , , are well defined. These quantities are computed in Step 1 of Algorithm 3.2.


Therefore, the only way they can be not well defined is if, for some , both Case I and Case II (in Step 1) are satisfied for some . In this case, and . Therefore, some laborious but straightforward algebra shows that a substitution of (3.2) in (3.3) gives the same expression as (3.4), and a substitution of (3.4) in (3.5) gives the same expression as (3.2).

We next turn to the quantities and , computed in Step 2, where we prove that they are well defined by induction (backward) for . For , Case 5 must hold in Step 2, and hence and . Since and were shown to be well defined, and are also well defined.

Next, fix , and consider the following inductive hypothesis: “The variables and , , are well defined.” We next prove that and are well defined as well.

The possibility of or being not well defined can arise in one of the following two situations concerning Step 2: 1) for either Case 1, Case 2, Case 3, or Case 4, there is more than one way to assign the above variables, or 2) more than one case occurs simultaneously for . We first consider the former situation. Starting with Case 1, suppose there are and such that . By Case 1, and also . We next show that . Let . Since , we have that . Since and , we have that . By the inductive hypothesis as applied to with Case 3, we obtain the desired equality, .

Similar arguments apply, mutatis mutandis, when any of Cases 2–4 involves multiple choices for the computation of and .

We next consider the situation where more than one case holds simultaneously for . Observe that Case 5 can occur only alone. We next submit that Cases 1 and 2 cannot occur with either Case 3 or 4. To show this, let us consider the hypothetical collusion of Cases 1 and 3; the rest of the above combinations can be treated in a similar way, and hence their analysis will be omitted. Thus, let and . Then, (the superscript means the complement of a set) and , and consequently, neither Case I nor Case II in Step 1 could have been satisfied for . This, of course, is in contradiction with Proposition 3.1.

It thus remains to check the collusions of Cases 1 and 2, and of Cases 3 and 4, respectively. We only consider the first kind of collusion, as the arguments for the second kind are similar.

Suppose that Cases 1 and 2 hold for , and hence there exist and . Then, and . Now either or . If then , and by the inductive hypothesis and Case 1 as applied to in Step 2, we have that . If, on the other hand, then , and by the inductive hypothesis and Case 4 as applied to in Step 2, we have that . In any event, . Observe that Case 1 as applied to dictates that , while Case 2 implies that . This indicates that is well defined, and the formula for in either Case 1 or 2 shows that is well defined as well. This completes the induction argument, and hence the proposition’s proof.

REFERENCES

[1] J. R. Callahan, “The nothing hot delay problems in the production of steel,” Ph.D. dissertation, Dept. Ind. Eng., Univ. Toronto, Toronto, ON, Canada, 1971.

[2] F. Chauvet, E. Levner, L. K. Meyzin, and J. M. Proth, “On-line part scheduling in a surface treatment system,” INRIA, Le Chesnay, France, INRIA Res. Rep. 3318, 1997.

[3] F. Chauvet and J. M. Proth, “On-line scheduling with WIP regulation,” in Proc. IEEE-EAMRI Rensselaer’s Int. Conf. Agile, Intelligent, and Computer-Integrated Manufacturing, Troy, NY, Oct. 1998.

[4] C. Chu, J. M. Proth, and L. Wang, “Improving job-shop schedules through critical pairwise exchanges,” Int. J. Prod. Res., vol. 36, pp. 638–694, 1998.

[5] N. G. Hall and C. Sriskandarajah, “A survey of machine scheduling problems with blocking and no-wait in process,” Oper. Res., vol. 44, pp. 510–525, 1996.

[6] S. T. McCormick, M. L. Pinedo, S. Shenker, and B. Wolf, “Sequencing in an assembly line with blocking to minimize cycle time,” Oper. Res., vol. 37, pp. 925–935, 1989.

[7] R. Rachamadugu and K. Stecke, “Classification and review of FMS scheduling procedures,” Prod. Planning Contr., vol. 5, pp. 2–20, 1994.

[8] G. Schmidt, “Scheduling on semi-identical processors,” Z. Oper. Res., vol. 28, pp. 153–162, 1984.

[9] G. Schmidt, “Scheduling independent tasks with deadlines on semi-identical processors,” J. Oper. Res. Soc., vol. 39, pp. 271–277, 1988.

Fabrice Chauvet obtained the diplome d’etudes approfondies in operations research from the University of Grenoble, Grenoble, France, in 1995. He received the Ph.D. degree in applied mathematics and data processing from the University of Metz, Metz, France, in 1999. His dissertation was on constrained work-in-process in on-line scheduling.

In 1996, he joined INRIA, where his two main research interests were in transportation and logistics (he obtained new applicable results for regulating self-service car systems) and scheduling and planning (he improved a hoist scheduler). After completion of the Ph.D. degree in 1999, he joined Bouygues Telecom, where his interest is in the optimization of networks and call centers.

Jean-Marie Proth (M’89) is currently working in real-time scheduling, supply chain optimization, logistics, and modeling using Petri nets. He has authored and co-authored 11 books and more than 400 papers in international journals and conferences. He has conducted 46 contracts with the French aerospace agency, the defense department and its subcontractors, and several industrial groups. Prof. Proth has advised 26 Ph.D. dissertations in France and the U.S. He is currently developing two projects. The first one concerns the control of modern radar systems. The second contract aims at managing an automated self-service transportation system.

Yorai Wardi (M’81) received the Ph.D. degree in electrical engineering and computer sciences from the University of California, Berkeley, in 1982.

From 1982 to 1984, he was a Member of Technical Staff at Bell Telephone Laboratories and Bell Communications Research. Since 1982, he has been with the School of Electrical and Computer Engineering at the Georgia Institute of Technology, Atlanta, where he currently is an Associate Professor. He spent the 1987–1988 academic year at the Department of Industrial Engineering and Management, Ben Gurion University of the Negev, Be’er Sheva, Israel. His research interests include discrete event dynamic systems, perturbation analysis, and modeling and optimization of hybrid dynamical systems.