
International Journal of Production Economics, 28 (1992) 47-70. Elsevier


The “Orchard” scheduler for manufacturing systems*

Robert J. Wittrock

Manufacturing Research Department, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598, USA

(Received 1 January 1992; accepted in revised form 15 March 1992)

Abstract

This paper describes Orchard, which is a heuristic algorithm for scheduling the loading of jobs into a manufacturing system. While Orchard was specifically designed for use at a particular printed circuit board line, it is sufficiently generic to be used at other manufacturing systems as well. In general, such a manufacturing system performs several operations on each job with multiple machines performing each operation. The routing of each job may involve stochastic branching including rework loops, and there may be a need to assign human operators to operations. The Orchard heuristic can be adapted to two different objective functions: weighted mean flow time, and an objective specially formulated for the circuit board line. The present paper describes all aspects of Orchard except its operator assignment module (which is described in another paper) and concludes with results of tests of the algorithm as applied to the circuit board line.

1. Circuit board line

Orchard was designed for use at a particular printed circuit board line. This line takes circuit boards that have been fabricated elsewhere and performs various assembly and test operations on them, producing several dozen boards per day. The line (hereafter referred to as “the board line”) is partitioned into 11 sectors. Several operations are performed by each sector and each operation is performed by a bank of parallel identical machines. Product flow within each sector is facilitated by an automated material handling system, while product flow between sectors is manual. There is a nominal route that boards follow through the line. However, when a board fails at a test operation, it branches off from the nominal route and follows a rework route before returning to the nominal route.

The product flow between sectors is shown in Fig. 1 (adapted from Ref. [1]). The boxes indicate the 11 sectors of the line. The dark lines indicate the nominal route through the line. The fat arrows indicate initial entry into the line at sector 1 and final exit from the line at sector 8. The dotted lines indicate rework routes, which constitute a substantial proportion of the total product flow. Sectors 9, 10, and 11 perform rework only.

Correspondence to: Dr. R.J. Wittrock, Manufacturing Research Department, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598, USA. *This article was originally accepted by the Journal of Manufacturing and Operations Management.

The scheduling of this line naturally decomposes into hierarchical problems: line loading and sector scheduling. Line loading views each sector as a “black box” and determines the sequence and timing at which boards will be loaded into the front of the line. Sector scheduling focuses on one sector, ignores all others, takes into account activities inside that sector, and determines the sequence and timing at which boards will be loaded into that sector. There are no scheduling decisions to be made inside a sector, since the material handling system manages the internal queues automatically on a FIFO basis. Orchard was designed as a sector scheduling algorithm for this circuit board line. (The name “Orchard” is explained in Section 5.)

0925-5273/92/$05.00 © 1992 Elsevier Science Publishers B.V. All rights reserved.


Fig. 1. Circuit board line sector flow.

2. The problem addressed

While Orchard was designed specifically for the board line, it may be applicable to other manufacturing systems, either for scheduling a single sector of a line, or perhaps for scheduling a whole line. Thus, it is best to describe Orchard in generic terms.

The problem addressed by Orchard is defined in terms of its data: a static description of a manufacturing system and a “snapshot” description of its current state, consisting primarily of a list of jobs inside the system (“internal” jobs), a list of jobs in front of the system waiting to be loaded in (“external” jobs), and detailed information about each of these jobs. In the board line case, the manufacturing system was one sector of the board line and each job was a board. The problem is to produce a “loading schedule”, which specifies the order in which the external jobs should be loaded into the system and the exact time at which each external job should be loaded. This scheduling problem entails many assumptions, considerations, and even multiple objectives, all of which are derived from the board line problem. Most of these considerations will be elaborated in this introductory Section. However, some aspects of the precise problem definition are sufficiently detailed that a full explanation will be deferred to the main text.

The manufacturing system in question consists of many machines that perform various operations on the jobs. Each operation is performed by a bank of parallel identical machines. The processing time of a job at an operation depends on the job and on the operation, but not on the individual machine being used. In particular, each job visiting an operation is allowed to have its own processing time. This is in contrast to many manufacturing environments, in which the processing time is the same for all jobs of the same “type”. The assumption made here derives directly from the board line case, in which each board had its own unique work content that depended on variations in the processing that had already been performed on the board before it entered the line.

There may be some small (10-20%) deviation between the actual processing time of a job at an operation and the predicted processing time. This can be due to various factors, especially the unpredictable influence of the human operators. The validity of this assumption in the board line case will be discussed in Section 15.


It is assumed that any setup time for an operation to change from one type of job to another is negligible. This was true of most operations in the board line. Two sectors did include operations with significant setup times, but in this case, the setups occurred sufficiently infrequently that it would be reasonable to reschedule after each setup (more on rescheduling later). Also, a batch size of one is assumed at each operation, which was true at the board line.

The time it takes to transport a job from one operation to another depends only on the operations in question and not on the individual machines. This was a reasonable approximation in the board line case, since the machines performing an operation were all on the same section of the material handling system. Also, the material handling system was sufficiently uncongested that there was no need to model potential conflicts for this resource.

In general, the jobs do not all follow the same routing (the sequence of operations that a job visits). Specifically, there is a list of routings that the jobs are allowed to take and each job follows one of these routings. In the board line case, boards would arrive at a sector from different parts of the line, requiring a differing sequence of operations to be performed before proceeding to another sector. For example, Fig. 1 indicates five different circumstances under which a board might enter sector 1: three on the board’s nominal routing (dark lines) and two distinct rework possibilities (dotted lines). These five ways of arriving at sector 1 give rise to five distinct routings within sector 1. In general, the set of possible routings through a sector of the board line tended to be much smaller than the set of boards to be loaded into it.

The routing that a job follows is allowed to involve stochastic branching, i.e., after a job completes a “test” operation, the next operation it visits depends on whether it passed or failed the test, and this is viewed as random. In the board line case, there were two kinds of stochastic branching that would occur within a sector: a board that failed a test would either follow a rework loop completely within the sector or it would leave the sector entirely and proceed to another sector for repair. A precise definition of stochastic routings is given in Section 5.

Some of the operations must be run by a human operator. Each operator is skilled at some subset of the set of all operations and may switch operations dynamically as the work load dictates. Thus, the capacity of an operation depends on how much time each operator will devote to working on that operation. While this is a decision that the operators themselves will make dynamically, in order to estimate the capacity of the operations, Orchard must have a model of how this assignment of operators to operations will be performed. Thus, one of the tasks that Orchard must perform is an operator assignment. This operator assignment problem is a major topic in its own right. Full details, including problem definition, Orchard’s model, and Orchard’s solution algorithm, are given in Ref. [2]. A brief summary of the operator assignment problem and how it relates to the rest of Orchard is given in this paper (Section 6).

The scheduling of activities internal to the system is assumed to be automatic. Specifically, the order in which jobs are processed at each machine (or operation) can be decided by any reasonable queuing discipline, such as FIFO or first-in-system-first-out. In the board line case, the automated material handling system used a FIFO queuing discipline at each operation. In general, while there is no decision to be made about activities internal to the system, it is necessary to model these internal activities in order to solve the stated problem of producing a loading schedule.

Finally, there is a requirement that the method for solving this problem be usable in a reactive mode, i.e., whenever the state of the system changes in some unpredicted or unmodeled way, a means must be provided to generate a schedule that takes the new state into account. Examples of events that would require a reaction include:
- A new job or group of jobs arrives at the system.
- A required machine fails or is repaired.
- An operator is added to or removed from the work force.

3. Objectives

The scheduling problem addressed by Orchard is an optimization problem. There are two main objective functions: “weighted mean flow time” (WMFT) and “targeting”. The user is allowed to decide which of these objectives will be used, depending on the manufacturing environment and the goals of management. There is also a queuing objective, which is used in conjunction with both the WMFT objective and the targeting objective.

Weighted mean flow time is a classical scheduling objective [3]. The flow time of a job j, denoted $C_j$, is the elapsed time between the beginning of the schedule and the time that the job leaves the system. This includes the time that the job spends inside the system and the time it spends waiting in front of the system. Associated with each job j is a priority or weight $w_j$ indicating the relative importance of completing that job. The weighted mean flow time is the weighted average of the flow times of all the jobs:
$$\mathrm{WMFT} = \frac{\sum_j w_j C_j}{\sum_j w_j} .$$

In the board line case, the weight of a board corresponded to a priority specified by management, which was meant to reflect how far ahead or behind schedule the board was in the line as a whole. Also, some types of boards were designated “hot” (very high priority), in order to meet some special need. Minimizing the WMFT objective provided a means of trading off between the expediting of high priority boards and the efficient use of resources.
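As a small numerical illustration of this objective, the following sketch computes WMFT from (weight, flow time) pairs; the job data are hypothetical and not taken from the board line.

```python
def weighted_mean_flow_time(jobs):
    """WMFT = sum(w_j * C_j) / sum(w_j), for jobs given as (w_j, C_j) pairs."""
    total_weight = sum(w for w, _ in jobs)
    return sum(w * c for w, c in jobs) / total_weight

# Hypothetical example: one "hot" board (weight 10) and two ordinary boards.
print(weighted_mean_flow_time([(10.0, 4.0), (1.0, 6.5), (1.0, 9.0)]))  # 4.625
```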

In conjunction with the WMFT objective, a subordinate objective is also used, which was motivated by the following concern. It is assumed that there is infinite buffer capacity in front of each operation, i.e., that it is acceptable to have an arbitrarily large queue in front of any operation. Clearly, this is an approximation. In the board line case, there was a buffer limit of about 10 boards per operation. Theoretically, if this limit was exceeded, blocking would occur and upstream operations would be delayed. However, in practice this rarely happened, as long as the sector operator was careful not to load boards into the sector at too rapid a rate. From a modeling point of view, it was not considered feasible to address the blocking issue directly, due to the stochastic nature of the problem. Instead, this issue was addressed implicitly, by incorporating a secondary objective that seeks to avoid unnecessary queuing. A precise formulation of the queuing objective is given in Section 10. The idea is to imitate the spirit of actual practice on the board line, i.e., if queuing is kept fairly low, then blocking will rarely occur. The queuing objective is subordinate to the WMFT objective, in the sense that a schedule is considered to be optimal if it minimizes the queuing objective among all schedules that minimize the WMFT objective.
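This subordination amounts to a lexicographic comparison: schedules are ranked on WMFT first, and the queuing penalty breaks exact ties. A minimal sketch, with invented numbers:

```python
# Each candidate schedule is summarized by (WMFT, queuing penalty).
# Tuple comparison minimizes WMFT first, then the queuing penalty.
candidates = [(4.6, 0.2), (4.5, 0.9), (4.5, 0.4)]
best = min(candidates)
print(best)  # (4.5, 0.4): the WMFT tie is broken by the smaller queuing penalty
```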

Alternatively, instead of WMFT, a completely different objective can be used. The alternative objective is called “targeting” and was formulated specifically for use at the board line. It is a goal-programming type of objective, which seeks to achieve a stable, prespecified throughput rate, while also incorporating the queuing objective mentioned above. The precise mathematical formulation of this objective is developed in Section 12.

In summary, Orchard can work with either of two main objectives: WMFT or targeting, at the user’s option, and both of these main objectives incorporate the same queuing objective.

4. Overview of the algorithm

Orchard is an algorithm which was designed to solve the scheduling problem defined above in order to function as a sector scheduler for the board line. Some general comments will serve to introduce the approach.

Finding a schedule that has the absolute minimum objective value (for either of the two possible objectives) would be prohibitively slow. (A trivial special case of Orchard’s problem is the “two machine flow shop” problem, which, for the WMFT objective, is known to be NP-complete; see Ref. [4].) For this reason, Orchard was designed as a heuristic algorithm. The schedule it produces is expected to have a relatively good (i.e., small) objective value, but probably not the best possible.

As was mentioned above, Orchard was required to have a “reactive mode”, a means of reacting to unpredicted events. One approach for this is to let the computer work hard to generate a good initial schedule and then provide a quick means of adjusting this schedule when a reaction is called for [5,6]. In contrast, the approach taken in Orchard was to design an algorithm that is practical to run every time a reaction is called for. The advantage of this approach is that each reaction has the full benefit of Orchard’s modeling and heuristics. The disadvantage is that it imposes a serious speed constraint on Orchard. For reactive use on the board line, it was necessary that Orchard would run in about 5 min or less on some kind of personal computer.

The scheduling problem addressed by Orchard is stochastic in two ways. The processing times are imprecise, and the routings branch randomly. One way in which Orchard deals with stochastic phenomena is by relying on reactive reruns. However, since the loading schedule produced by Orchard needs to be useful for more than a trivial amount of time, it is necessary to handle random phenomena more directly. In principle, the data for the problem might be viewed as random variables with well defined probability distributions. Exact distributions could then be computed for derived quantities, such as queuing times and completion times. For Orchard’s purpose, it is much too difficult to compute these distributions, or even their mean values. Instead, for each derived quantity, Orchard computes a “heuristic estimate”, an unrigorous educated guess at its value. Thus Orchard’s estimates of such quantities as completion time are not meant to be taken as a serious output of the algorithm; rather, they are used by Orchard for guidance as it computes its main output: a loading schedule. In other words, Orchard was designed to be used for prescription and not prediction.

Various aspects of the scheduling problem that Orchard addresses have been studied in other contexts. In a classical result, Smith [3] gives a simple policy that finds an optimal schedule to the deterministic one-machine problem with a WMFT objective. The job-shop scheduling problem is a special case of Orchard’s problem with deterministic routings and no operator assignment. This problem has been studied for many years, especially with a makespan objective (i.e., minimize maximum completion time) [7]. The hierarchical real-time scheduling methodology of Kimemia and Gershwin [8] viewed machine failures and repairs as stochastic disruptions and was adapted by Gershwin et al. [9] for use at a printed circuit board line (which did not involve rework). Wein [10,11] has treated multiple machine stochastic scheduling from the point of view of scheduling networks of queues. However, it appears that no previous work has considered the main issue addressed by Orchard: scheduling a specific list of jobs with stochastic branching in the routings.

The broad outline of the Orchard algorithm is as follows:
(1) Model the routings.
(2) Assign operators.
(3) Schedule.

The scheduling module of Orchard proceeds by building up a schedule, one job at a time. For each job to be scheduled, there is an iteration consisting of the following steps:
(1) Choose the next job to schedule.
(2) Determine the load time of the job.
(3) Schedule the job, i.e., update the schedule.
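The control flow just described can be sketched as follows; the three callables stand in for the sequencing, timing, and scheduling modules of Sections 9, 11 and 8, and the names are illustrative rather than taken from the paper's implementation.

```python
from typing import Callable, Iterable, List, Tuple

def scheduling_module(
    external_jobs: Iterable,
    choose_next_job: Callable,   # step 1: sequencing heuristic (Section 9)
    choose_load_time: Callable,  # step 2: timing heuristic (Section 11)
    schedule_job: Callable,      # step 3: update the internal schedule (Section 8)
) -> List[Tuple[object, float]]:
    """Build the loading schedule one job at a time, as in the iteration above."""
    unscheduled = list(external_jobs)
    loading_schedule = []
    while unscheduled:
        job = choose_next_job(unscheduled)
        load_time = choose_load_time(job)
        schedule_job(job, load_time)
        loading_schedule.append((job, load_time))
        unscheduled.remove(job)
    return loading_schedule
```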

The remainder of this paper is organized as follows: Orchard’s model of stochastic routings is described in Section 5. Section 6 briefly describes the operator assignment problem and its relation to the rest of Orchard. See Ref. [2] for a thorough discussion of the operator assignment problem and Orchard’s method for solving it. Section 7 explains Orchard’s means of applying capacity constraints during scheduling. In essence, Sections 5-7 are concerned with Orchard’s model of the system. The purpose of this model is to enable Orchard to make intelligent decisions during the scheduling module. The three steps of Orchard’s scheduling iteration are described in Sections 8, 9 and 11. Section 8 explains step 3, essentially Orchard’s model of a schedule. Section 9 explains step 1, how Orchard chooses the next job to schedule. This is really the heart of the Orchard heuristic. Section 10 explains Orchard’s queuing objective, which can only be fully defined in the context of Orchard’s model of a schedule, as given in Section 8. This queuing objective drives the loading time heuristic (step 2), which is given in Section 11. Orchard’s alternative objective is discussed in Sections 12 and 13. Section 14 reports off-line computational tests with data from the board line, while Section 15 reports on-line tests at the board line itself. Section 16 suggests directions for further research and Section 17 is a brief conclusion.

(At this point, it may be useful to refer to Appendix A, which gives a complete statement of Orchard’s data requirements.)

5. A model of stochastic routings

Orchard’s initial model of a job’s routing is a directed graph. Each node in the graph represents the job visiting an operation and is called a visit. The operation corresponding to visit h is denoted $i(h)$. The arcs represent the branches in the route. There is an arc from each visit h to each visit h' that the job can branch to immediately after visit h. Associated with each arc is its branching probability, the conditional probability that the job will take that branch, given that it has just completed visit h. The routing is assumed to be Markovian so that the conditional probability of taking a branch from a visit depends only on that visit and not on how the job got there. (This assumption will be relaxed later.) The job begins at a designated visit, called the “load” visit, which has no inward arcs. The job completes at any of one or more “terminal” visits, which have no outward arcs.

As an example, Fig. 2 shows the routing graph with 5 visits. After performing visit 3, the job has a 0.8 probability of branching to visit 5, and a 0.2 probability of branching to visit 4. Visits 4, 2 and 3 form a circuit, which would correspond to a rework loop in the route.

Graph-structured routings are difficult to work with, particularly when there are multiple arcs directed into the same visit. For example, in Fig. 2, in modeling visit 2, one would have to distinguish between the initial visit 2 after visit 1 and each potential subsequent visit 2 after looping through visit 4.

To resolve this ambiguity, Orchard transforms the graph into a tree by duplicating visits. A separate copy of each visit is created for each path by which the visit can be reached. If there are any circuits, this will lead to an infinite tree. The derived routing tree for the example routing graph is shown in Fig. 3. Visit 2 has been replicated as visits 2a, 2b, 2c, ... Since the graph contains a circuit, the tree is infinite.

Associated with each visit h in the derived routing tree is its reaching probability, the probability of reaching the visit, denoted $p_h$. This probability is obtained as the product of the branching probabilities of all the arcs on the path from the “load” visit to visit h. For example, the reaching probability of visit 5b in Fig. 3 is $0.16 = 0.2 \times 0.8$.

Fig. 2. Routing graph.

Fig. 3. Derived routing tree.

Of course, Orchard cannot explicitly store an infinite tree. It avoids this by ignoring sufficiently improbable visits. Notice that if the graph contains a circuit whose arcs each have probability 1, then any job entering such a circuit would never leave. Thus it is assumed that each circuit in the graph contains at least one arc with branching probability less than 1. With this assumption, it is easy to see that, for any $\epsilon > 0$, there are only finitely many visits with reaching probability exceeding $\epsilon$. To keep the tree finite, Orchard uses a given value of $\epsilon$ and truncates the tree at any visit with reaching probability less than $\epsilon$. Specifically, any such improbable visit is replaced by an artificial “truncation” visit which is considered to be a terminal visit. Any visits that would have been descendants of this visit in the tree are ignored.

Figure 4 shows the truncated routing tree for the example routing, using $\epsilon = 0.01$. Since visit 4c has reaching probability $0.008 < \epsilon$, it is replaced by a truncation visit, making the tree finite.
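A sketch of the graph-to-tree expansion with truncation. The dictionary layout of the routing graph and the node representation are assumptions for illustration; the visit labels follow Figs. 2-4.

```python
def expand_routing_tree(graph, load_visit, eps=0.01):
    """Expand a routing graph into a truncated routing tree.

    `graph` maps each visit to a list of (successor, branching probability)
    pairs; terminal visits map to an empty list.  Each tree node records the
    visit, its reaching probability, and its children.  A node whose reaching
    probability falls below `eps` becomes a "truncation" leaf.
    """
    def build(visit, reach_prob):
        if reach_prob < eps:
            return {"visit": "TRUNCATION", "p": reach_prob, "children": []}
        children = [build(nxt, reach_prob * p) for nxt, p in graph[visit]]
        return {"visit": visit, "p": reach_prob, "children": children}
    return build(load_visit, 1.0)

# Routing graph of Fig. 2: visit 3 branches to visit 5 (p = 0.8) or back into
# the rework loop through visit 4 (p = 0.2); visit 1 is the load visit.
fig2 = {1: [(2, 1.0)], 2: [(3, 1.0)], 3: [(5, 0.8), (4, 0.2)], 4: [(2, 1.0)], 5: []}
tree = expand_routing_tree(fig2, load_visit=1, eps=0.01)  # visit 4c is truncated
```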

These routing trees are the fundamental data structure of Orchard. Orchard uses the routing tree to make a heuristic estimate of the impact of a job on each of the operations it visits. One assumption about the manufacturing system being scheduled is that, though there may be many jobs to schedule, there are only a few possible routings. Thus, a small list of generic routing trees is generated, and each job is assigned its own copy of a generic routing tree selected from that list. This is why the algorithm is called Orchard: it uses many similar trees.

Once the routings have been transformed from graphs to trees, the Markovian assumption on the original graph becomes irrelevant. In fact, one can drop this assumption and permit the branching probabilities to change each time a job passes through a rework loop. However, for reasons of simplicity in the notation and input data, the Markovian assumption was retained in the software implementation. (This was consistent with the available data for the board line.)

Fig. 4. Truncated routing tree ($\epsilon = 0.01$).

The routing trees described so far are for external jobs. The routing tree of an internal job is similar, but it begins at the job’s current visit, the visit at which the job is currently being processed or queuing. Specifically, the routing tree for an internal job is a copy of a subtree of the appropriate generic routing tree, rooted at the job’s current visit. The reaching probability of a visit on this tree is the conditional probability of reaching that visit given that the current visit has been reached. Let h be a visit of an internal job, let g be the corresponding visit on the generic routing tree and let g' be the visit on the generic routing tree corresponding to the job’s current visit. Then the reaching probability of visit h is given by
$$p_h = p_g / p_{g'} .$$

6. Operator assignment

A key ingredient to Orchard’s model of the system is a measure of the capacity of each operation, the rate at which it can process jobs. These capacities are defined over the scheduling period, which is the interval of time from the beginning of the schedule to some estimate of the completion time of the last job. The capacity of an operation depends on the processing times of the jobs visiting it, the number of machines performing the operation and, for some operations, the number of human operators performing the operation. Since the number of machines at any operation is fixed, it follows that the machine capacities are fixed quantities. However, the same is not true for the capacity of an operation due to its human operators.

In general, some operations are fully automated, while others, the “non-fully automated” operations, require the intervention of a human operator. The operator may be required for the entire operation, or only part of it, perhaps loading and unloading. Each operator may be skilled on one or more such operations, and may switch operations multiple times during the course of the scheduling period. Thus the capacity due to the operators is variable, depending on how many operators will perform the operation during the scheduling period. Before it attempts any scheduling, Orchard performs a tentative assignment of operators to operations, in order to determine this operator capacity.

Operator assignment comprises a major component of the overall Orchard algorithm and is fully described in Ref. [2]. For completeness, a statement of the operator assignment problem will be given here followed by a brief summary of the solution method.

The assignment of operators is static over the scheduling period. Since, in reality, operators are permitted to switch operations during the scheduling period, some operators may be assigned to spend only part of the scheduling period performing a given operation. In this case, the number of operators assigned to an operation would be fractional. For example, an assignment of 1.5 operators to an operation would correspond to one operator spending all of his/her time on the operation and another operator spending half of his/her time on it.

Each operator is certified to perform one or more operations. All operators certified on an operation are assumed to perform it at the same speed. All operators that are certified on the same set of operations are lumped together as a “team”. The task of operator assignment is to decide how many operators from each team will perform each operation. The objective is to maximize the capacity of the system by balancing the capacities of the operations.

The assignment is intended to be non-binding in the sense that, if the schedule were actually implemented in a real manufacturing system, the operator assignment would not be implemented, i.e., any skilled operator would be allowed to perform a given operation on a given job, regardless of the operator assignment. This would give the system flexibility to respond to dynamic events. Orchard only uses operator assignment to estimate operation capacity.

Since the assignment is static over time, it does not consider the individual visits to the operations. Instead, it considers operation work loads, which are aggregate quantities derived from the visits. Roughly, the work load of an operation is the total amount of time that will be spent processing jobs at the operation during the scheduling period. In principle, work load is generated by three types of jobs: internal jobs, external jobs, and some “future” jobs.

Future jobs are jobs that have not yet arrived in front of the system. These jobs are not part of the scheduling problem, and Orchard has no explicit model of them. However, it is clear that before the end of the scheduling period (when the last external job leaves the system), some future jobs will have entered the system and some operations will have been performed on them. These are called the “immediate” future jobs, and it is necessary to include some (implicit) model of the immediate future jobs when computing work loads. For each immediate future job, consider the visit at which the job will be in process or queuing at the end of the scheduling period, and define its partial route to be the path of visits that the job will take to get to that visit. A model is needed for the partial route of each immediate future job.

For any internal job, define its past route to be the path of visits that the job took to get to its current visit. Note that since the past route is a deterministic path, each visit on it has a reaching probability of 1. Orchard uses the past route of the internal jobs as an approximation of the partial route of the immediate future jobs. Thus when computing work loads, Orchard includes the past route of internal jobs. In effect, Orchard is making a steady-state approximation by using the jobs in the system at the beginning of the scheduling period as an approximation to the jobs in the system at the end of the scheduling period.

To help define the concept of an operation work load, let $\mathcal{V}$ be the set of all visits to all operations, including visits both by external and internal jobs, and visits on the past route of internal jobs. Let $\mathcal{V}_i \subseteq \mathcal{V}$ be the set of visits to operation i. Let $s_h$ be the processing time of visit h. The machine work load of operation i is defined as


$$z_i = \sum_{h \in \mathcal{V}_i} p_h s_h .$$

This is the expected total processing time needed at operation i to process all the jobs.

Note that while Orchard models the processing times as deterministic, it only uses them to make heuristic estimates of derived quantities. Thus, a small amount of imprecision in this data should be tolerable. (Section 15 discusses this further.)

Similarly, let $o_h$ be the (estimated) amount of an operator’s time consumed by visit h. The operator work load of operation i is defined as

$$y_i = \sum_{h \in \mathcal{V}_i} p_h o_h .$$

This is the expected total operator time needed at operation i to process all the jobs. The machine and operator work loads comprise the visit-oriented input to the operator assignment module. The other inputs are capacity-oriented and are defined in Ref. [2].
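A sketch of this work-load accumulation, assuming each visit is supplied as a tuple of its operation, reaching probability $p_h$, processing time $s_h$, and operator time $o_h$ (the tuple layout is an assumption for illustration).

```python
from collections import defaultdict

def work_loads(visits):
    """Accumulate the machine work load z_i and operator work load y_i.

    `visits` iterates over (operation, p_h, s_h, o_h) tuples covering external
    jobs, internal jobs, and the past routes of internal jobs, as in the text.
    """
    z = defaultdict(float)  # expected machine time per operation
    y = defaultdict(float)  # expected operator time per operation
    for op, p, s, o in visits:
        z[op] += p * s
        y[op] += p * o
    return z, y
```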

The main output of operator assignment is $r_i$, the number of operators assigned to each operation i. Another output of operator assignment is a designation for each operation, indicating either that it is fully staffed, so that the number of machines is the binding capacity constraint, or that it is understaffed, so that the number of operators assigned is the binding capacity constraint. Since Orchard implements only one of these two capacity constraints for each operation, this is an important distinction.

Orchard’s technique for solving the operator assignment problem is that given in Ref. [2]. There, the problem was formulated as a mathematical programming problem, which is essentially a flow problem on a network. The nodes correspond to operations and “teams” of similarly skilled operators, and the arcs connect the teams to the operations for which they are skilled. The objective is to maximize the capacity of the system by balancing the capacity of the operations. The criterion for capacity balance defined in Ref. [2] is more stringent than the usual max-min capacity balance criterion and leads to a non-convex objective function. Because of this objective function, the standard network flow algorithms available at the time did not apply. Instead, a special algorithm was developed, which solves a sequence of maximum flow problems to find an optimal solution to the operator assignment problem in polynomial time.

More recently, a faster approach to solving the operator assignment problem was reported in Ref. [12]. This improved approach applies the parametric network flow techniques given in Ref. [13]. However, this more recent technique was never implemented in the Orchard software, since the Wittrock [2] approach was already implemented and seemed to run fast enough on the problems in question.

7. Capacity constraints

After Orchard has performed the operator assignment, it computes the schedule. This schedule has two components: a loading schedule and an internal schedule. The loading schedule lists the sequence and times at which the jobs are to be loaded into the system. This is Orchard’s primary output. The internal schedule indicates when each job will visit each operation. Its main purpose is to estimate the impact to the system of loading a particular job.

Of course, since the routings are stochastic, the internal schedule cannot be known deterministically, and so Orchard computes a heuristic estimate. In effect, Orchard’s internal schedule is an “average case” schedule, in which each visit on the routing tree occurs and has a duration equal to its processing time, $s_h$, but the amount of the operation’s capacity it consumes is an expected value. Thus, a key component of the internal schedule is $t_h$, the “throughput time” of visit h, which is a measure of the expected value of the amount of capacity of operation $i(h)$ that is consumed by visit h. This quantity depends on whether operation $i(h)$ is fully staffed or understaffed.

To derive an expression for the throughput time in the fully staffed case, let $m_i$ be the number of machines at operation i. The span of operation i is defined by
$$T_i = z_i / m_i .$$
This is the expected total time it would take operation i to complete all its work, assuming that the work load, $z_i$, is distributed evenly among all $m_i$ machines, so that all these machines are processing continuously throughout the time interval $[0, T_i]$. This defines an upper bound on the capacity of operation i and so the operation is considered to be “operating at capacity”, if it achieves this (optimistic) bound.

In the fully staffed case, the throughput time of visit h is given by

$$t_h = p_h s_h / m_{i(h)} , \qquad (1)$$
so that
$$T_i = \sum_{h \in \mathcal{V}_i} t_h . \qquad (2)$$

Visit h is considered to consume $t_h$ time units of the capacity of operation $i(h)$. In effect, processing visit h makes operation $i(h)$ “busy” for $t_h$ time units. The next Section explains how this concept is implemented. Expressions similar to Eq. (1) have been used elsewhere, see e.g., Ref. [14].

When operation i is understaffed, there are more machines available than needed, and so the capacity of the operation is determined by $r_i$, the number of operators assigned to operation i, and by $o_h$, the operator time of each visit h. In this case,

$$T_i = \sum_{h \in \mathcal{V}_i} p_h o_h / r_i = y_i / r_i ,$$
and
$$t_h = p_h o_h / r_{i(h)} ,$$

so that Eq. (2) holds in this case as well. Thus the general expression for throughput time is
$$t_h = \begin{cases} p_h s_h / m_{i(h)} , & \text{if operation } i(h) \text{ is fully staffed,} \\ p_h o_h / r_{i(h)} , & \text{if operation } i(h) \text{ is understaffed.} \end{cases}$$

The concept of operation span can be used to derive several other concepts relating to capacity. The span of the system is defined as $T = \max_i T_i$, where the maximum is taken over all operations. This is a lower bound on the expected total time the system must spend in order to complete all the jobs, and the system is considered to be “operating at capacity” if it achieves this (optimistic) bound. Any operation i for which $T_i = T$ is called a bottleneck, since its capacity determines the capacity of the system. Since the operator assignment module of Orchard maximizes the system capacity by balancing the operation spans, there will tend to be multiple bottlenecks.

Define the “bounding utilization” of operation i as $\mu_i = T_i / T$. This is the fraction of time that operation i would be busy, if the system were operating at capacity. Thus $\mu_i \le 1$ for all operations, and $\mu_i = 1$ for bottlenecks.

Let J be the total number of jobs (both internal and external). Define the system throughput time as $t^* = T/J$. If the system is operating at capacity, it processes an average of one job every $t^*$ time units.
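A sketch of these capacity quantities, assuming the operator assignment has already produced the per-operation staffing designation; the function and argument names are illustrative.

```python
def capacity_model(z, y, machines, operators, fully_staffed, n_jobs):
    """Operation spans T_i, bounding utilizations mu_i, bottlenecks, and the
    system throughput time t*, following Section 7.

    z, y          -- machine and operator work loads per operation
    machines      -- machines m_i per operation
    operators     -- operators r_i assigned per operation
    fully_staffed -- operations whose binding constraint is the machines
    n_jobs        -- total number of jobs J (internal plus external)
    """
    spans = {op: (z[op] / machines[op]) if op in fully_staffed
             else (y[op] / operators[op]) for op in z}
    T = max(spans.values())                            # span of the system
    mu = {op: spans[op] / T for op in spans}           # bounding utilization
    bottlenecks = [op for op in spans if abs(spans[op] - T) < 1e-9]
    t_star = T / n_jobs                                # system throughput time
    return spans, mu, bottlenecks, t_star

def throughput_time(p_h, s_h, o_h, op, machines, operators, fully_staffed):
    """Throughput time t_h of a visit, per the two-case expression above."""
    if op in fully_staffed:
        return p_h * s_h / machines[op]
    return p_h * o_h / operators[op]
```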

8. Scheduling a job

Step 3 of Orchard’s scheduling iteration is to “schedule the job”, i.e., update the internal schedule to include the job. Once a job is scheduled in step 3, its schedule is fixed, i.e., it will not be changed in subsequent iterations. In reality, loading a job can affect the schedule of jobs already loaded, but as a heuristic, Orchard ignores this effect, in order to save computation.

Orchard schedules a job by scanning its routing tree, and scheduling each visit as it is scanned. In principle, the visits could be scanned in any order such that each visit is scanned at a later time than its immediate predecessor. Orchard happens to use a depth-first scan.

To schedule visit h to operation $i = i(h)$, Orchard computes the following heuristic estimates:
- $a_h$ = its estimated arrival time,
- $q_h$ = its estimated queuing time, and
- $c_h$ = its estimated completion time,
and then updates an “available capacity” model for operation i.

The estimated arrival time, $a_h$, is easy to compute. If the visit is the job’s “load” visit, then $a_h$ is the loading time of the job, computed in step 2. Otherwise, let h' be the predecessor visit of h in the routing, and let $r_h$ be the transport time from h' to h. Then

$$a_h = c_{h'} + r_h . \qquad (3)$$


This is how Orchard takes transport time into account.

Given estimates for the arrival time and queuing time, the estimated completion time is easy to compute,

$$c_h = a_h + q_h + s_h . \qquad (4)$$

The main work of scheduling a visit is to estimate the queuing time, which depends heavily on the state of the system. To facilitate this, Orchard uses an average case model of the available capacity of each of the operations. The purpose of this model is to determine whether, at any point in time, an operation has capacity available to process another job. For each operation, Orchard maintains a list of “busy intervals”, the set of non-adjacent intervals of time during which the operation is considered to be busy. The operation is considered to be available at all times not included in any busy interval. For any two points in time, $a \le b$, by scanning the list of busy intervals for operation i, it is easy to compute $A_i(a, b)$, the amount of time in the interval $[a, b]$ during which operation i is considered to be available.

As indicated in the previous section, a visit h effectively makes operation $i(h)$ busy for $t_h$ time units. To model this, when visit h is scheduled, Orchard updates the busy intervals of operation i to include $t_h$ units of additional busy time, beginning at time $a_h$. By scanning the list of busy intervals for operation i, it is easy to compute
$$e_h = \min\{ e > a_h \mid A_i(a_h, e) = t_h \} . \qquad (5)$$
When Orchard schedules visit h, it designates the time interval $[a_h, e_h]$ as “busy” for operation i, and updates the busy intervals accordingly. The effect of this is to consume the first $t_h$ units of available time on the operation after $a_h$, so that
$$A_i(a_h, e_h) = 0 .$$

Fig. 5. Adding a visit to the schedule (before and after scheduling visit h).

The available time in the interval $[a_h, e_h]$ that visit h consumes need not be contiguous. Figure 5 illustrates an example. The shaded intervals indicate the busy intervals of time on an operation before and after visit h has been scheduled. Before the visit is scheduled, the operation has two busy intervals. The visit arrives in the middle of the first busy interval. Since $t_h$ is greater than the available time between the two busy intervals, part of $t_h$ completely consumes this interval of available time and the rest of $t_h$ is allocated to the available time immediately after the second busy interval. The result is one long busy interval indicated at the bottom of the figure.

Orchard estimates the queuing time as
$$q_h = e_h - a_h - t_h , \qquad (6)$$
which is the amount of busy time during $[a_h, e_h]$ (before the busy intervals are updated). In the case where all of the busy time in $[a_h, e_h]$ is at the beginning of the interval, Orchard’s way of allocating $t_h$ and assigning $q_h$ are very natural. The visit queues until the operation is available and then occupies the operation for $t_h$ time units.

A less intuitive case is illustrated in Fig. 5, in which the busy time and available time are interlaced. Here, two separate intervals of busy time contribute to the queuing time, $q_h$. The intention is to reflect the fact that a visit will tend to queue more if it arrives at a time when the operation is heavily utilized, as reflected in the amount of busy time during the interval $[a_h, e_h]$. In effect, Orchard allocates the capacity of an operation through time as a preemptive schedule in Eq. (5), while it schedules the jobs as a non-preemptive schedule in Eq. (4).

It is now possible to define $C_j$, the estimated completion time of job j, which is used in both of Orchard’s objectives. Let $\mathcal{T}_j$ be the set of terminal visits of job j, i.e. those visits that are leaves on the routing tree. This includes all visits at which the job actually leaves the system as well as all the truncation visits generated in the process of creating a finite routing tree. Thus,
$$\sum_{h \in \mathcal{T}_j} p_h = 1 .$$
In Orchard’s model, job j completes at time $c_h$ with probability $p_h$, for each visit $h \in \mathcal{T}_j$, and $C_j$ is defined to be the expected completion time. Thus,
$$C_j = \sum_{h \in \mathcal{T}_j} p_h c_h .$$

Sometimes it is necessary for Orchard to “hypothetically schedule” a job. This means to schedule the job, i.e., compute the $a_h$, $q_h$, and $c_h$ values, but without updating the busy intervals. This allows Orchard to determine what the completion time of a job would be, without updating the state of the system.

The scheduling module of Orchard is initialized by scheduling the internal jobs. Each internal job is scheduled, using a loading time of 0 at its current visit. The main purpose of this is to initialize the busy intervals.
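A sketch of the busy-interval bookkeeping behind Eqs. (4)-(6): the class keeps one operation's sorted, disjoint busy intervals, computes the available time $A_i(a,b)$, finds $e_h$ by walking the intervals, and merges the newly occupied span. The data layout and method names are assumptions for illustration.

```python
import bisect

class BusyIntervals:
    """Available-capacity model for one operation."""

    def __init__(self):
        self.intervals = []  # sorted, non-overlapping (start, end) pairs

    def available(self, a, b):
        """A_i(a, b): available time within [a, b]."""
        busy = sum(max(0.0, min(e, b) - max(s, a)) for s, e in self.intervals)
        return max(0.0, b - a) - busy

    def end_time(self, a_h, t_h):
        """e_h = min{ e > a_h : A_i(a_h, e) = t_h }  (Eq. 5)."""
        remaining, cursor = t_h, a_h
        for s, e in self.intervals:
            if e <= cursor:
                continue                    # busy interval lies before the cursor
            gap = max(0.0, s - cursor)      # available time before this interval
            if gap >= remaining:
                return cursor + remaining
            remaining -= gap
            cursor = e                      # skip over the busy interval
        return cursor + remaining           # the rest is available after the last interval

    def occupy(self, a_h, e_h):
        """Mark [a_h, e_h] busy, merging any intervals it overlaps."""
        new_s, new_e, kept = a_h, e_h, []
        for s, e in self.intervals:
            if e < new_s or s > new_e:
                kept.append((s, e))
            else:
                new_s, new_e = min(new_s, s), max(new_e, e)
        bisect.insort(kept, (new_s, new_e))
        self.intervals = kept

def schedule_visit(op_busy, a_h, t_h, s_h):
    """Schedule one visit: e_h via Eq. (5), q_h via Eq. (6), c_h via Eq. (4)."""
    e_h = op_busy.end_time(a_h, t_h)
    q_h = e_h - a_h - t_h
    op_busy.occupy(a_h, e_h)
    return q_h, a_h + q_h + s_h
```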

9. Choosing the next job to schedule

Step 1 of Orchard’s scheduling iteration chooses the next job to schedule. The method for making this selection is called the sequencing heuristic, because it determines the loading sequence. Orchard’s sequencing heuristic is best understood by considering a much simpler problem with the same objective: scheduling a set of jobs on a single machine in order to minimize WMFT. In this simplified setting, let $s_j$ be the processing time of job j and let $w_j$ be the weight (priority) of job j. Then it is well-known that the optimal sequence is found by sorting the jobs in order of decreasing $w_j/s_j$ (“Smith’s rule” [3]).

Smith’s rule is easy to understand intuitively. If the weight of a job is interpreted as its value, then the WMFT objective seeks to maximize the average rate at which value is produced. Smith’s rule maximizes this rate greedily as each job is added to the sequence, and this, in the one-machine case, produces an optimal schedule. A key aspect of this greedy process is that, as the sequence is being built up, $s_j$ is the amount by which putting job j next in the sequence delays the entire sequence of jobs that will follow it.
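A sketch of the one-machine case, with invented job data; it sorts by decreasing $w_j/s_j$ and reports the resulting WMFT.

```python
def smith_sequence(jobs):
    """Order (weight, processing time) pairs by decreasing w_j / s_j and
    return the sequence together with its weighted mean flow time."""
    order = sorted(jobs, key=lambda job: job[0] / job[1], reverse=True)
    clock, weighted_sum, total_weight = 0.0, 0.0, 0.0
    for w, s in order:
        clock += s                    # completion (flow) time of this job
        weighted_sum += w * clock
        total_weight += w
    return order, weighted_sum / total_weight

# Hypothetical data: the job with the highest weight-to-time ratio goes first.
print(smith_sequence([(1.0, 5.0), (4.0, 2.0), (2.0, 4.0)]))
```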

Orchard’s sequencing heuristic generalizes this idea. It computes, for each job j not yet scheduled, a heuristic estimate, $D_j$, of the amount by which putting job j next in the schedule would delay the remaining sequence of jobs. Then, generalizing Smith’s rule, Orchard selects the job that maximizes $w_j/D_j$, in order to greedily maximize the rate at which value is produced. The ratio $w_j/D_j$ can be viewed as a heuristic benefit-to-cost ratio, conceptually similar to the “rate of return” ratio at the heart of the SCHED-STAR scheduling algorithm [14], although it is computed quite differently.

Thus, the key to Orchard’s sequencing heuristic is its estimated “delay effect”, $D_j$. In the one-machine case, the amount by which job j delays the remaining sequence of jobs is $s_j$, which depends only on job j. In the much more complex situation considered by Orchard, the amount by which job j delays the remaining sequence depends not only on job j, but also on the current state of the system, and, furthermore, each job j' in the remaining sequence may be delayed by a different amount. Thus, in principle, job j has a different delay effect, $D_{j,j'}$, for each job j' in the remaining sequence. In order to have a single, well-defined ratio $w_j/D_j$ to work with, Orchard makes a rather substantial approximation by replacing the many delay effects $D_{j,j'}$ by a single aggregate delay effect, $D_j$.

Conceptually, the computation of $D_j$ involves several approximations. It would not be practical to compute $D_{j,j'}$ for each remaining job j', because this depends on where j' appears in the remaining sequence, which is not known. Consider Orchard at any iteration of its scheduling module. Let $c_{j'}$ be the hypothetical completion time of job j', if it were scheduled next. Let $c_{j,j'}$ be the hypothetical completion time of job j', if jobs j and j' (in order) were the next two jobs scheduled. Then
$$\bar{D}_{j,j'} = c_{j,j'} - c_{j'} \qquad (7)$$
is the immediate delay effect of job j on job j'. Orchard’s first approximation in computing $D_j$ is to replace $D_{j,j'}$ (which depends on where j' appears in the remaining sequence) by $\bar{D}_{j,j'}$ (which assumes that job j' will be sequenced next).

Let $\mathcal{U}$ be the set of unscheduled external jobs. Orchard’s second approximation in computing the delay effect is to take the mean of the immediate delay effects of job j on all other unscheduled jobs:
$$\bar{D}_j = \frac{1}{|\mathcal{U}| - 1} \sum_{j' \in \mathcal{U} - \{j\}} \bar{D}_{j,j'} .$$

Using $\bar{D}_j$ as the estimated delay effect would require scheduling $O(|\mathcal{U}|^2)$ jobs at each iteration. Such an approach would be more computationally intensive than is justified by the imprecise, average case nature of Orchard’s model. For this reason, Orchard makes one further approximation in computing the delay effect. This approximation is motivated by the following observation: If jobs j and j' both visit operation i, then the delay effect of job j on job j' will depend directly on the throughput time of job j on operation i, but will tend not to depend on the throughput time of job j'. Also, job j will tend to have the greatest delay effect on job j', if the two jobs have the same routing and will probably have no delay effect, if they have disjoint routings. Thus, the final approximation in computing $D_j$ is made by assuming that the delay effect of job j on job j' will depend on the routings of both jobs and on the throughput times of job j, but not on the throughput times of job j'.

Recall that although there may be many jobs to schedule, there are only a few possible routing trees. These trees are called “generic”, and each job has its own copy of the corresponding generic routing tree. For each generic routing tree, Orchard constructs a “generic job” that will be used as a surrogate for the unscheduled jobs that use that routing tree. These unscheduled jobs are called the “constituents” of the generic job, and the set of constituents of generic job r is denoted $\mathcal{G}_r$. The nodes on a generic routing tree are called generic visits. Each visit on the routing tree of an unscheduled job is considered to be a constituent of the corresponding generic visit on the generic routing tree. The set of constituent visits of generic visit g is denoted $\mathcal{G}_g$.

The processing time of a generic visit is defined to be the mean of its constituent processing times:
$$s_g = \frac{1}{|\mathcal{G}_g|} \sum_{h \in \mathcal{G}_g} s_h . \qquad (8)$$
The throughput time of a generic visit is defined similarly:
$$t_g = \frac{1}{|\mathcal{G}_g|} \sum_{h \in \mathcal{G}_g} t_h . \qquad (9)$$

Given the above definitions, it is possible to hypothetically schedule a generic job. Thus $\bar{D}_{j,r}$, the immediate delay effect of job j on generic job r, can be computed according to Eq. (7). Orchard uses $\bar{D}_{j,r}$ as a surrogate for $\bar{D}_{j,j'}$, for $j' \in \mathcal{G}_r - \{j\}$. Note that $j' \neq j$, because the intent is to measure the delay effect of one job on other jobs. Thus, it is necessary to work with a modified set of constituent jobs:
$$\mathcal{G}_{j,r} = \mathcal{G}_r - \{j\} ,$$
with $\mathcal{G}_{j,g}$ defined similarly. When computing $\bar{D}_{j,r}$, Orchard hypothetically schedules a generic job, r, whose processing and throughput times are calculated according to Eqs. (8) and (9), but using these modified constituent sets, $\mathcal{G}_{j,g}$.

Orchard uses the delay effect on generic job r as an approximation to the mean of the delay effects on all its constituents:
$$\bar{D}_{j,r} \approx \frac{1}{|\mathcal{G}_{j,r}|} \sum_{j' \in \mathcal{G}_{j,r}} \bar{D}_{j,j'} . \qquad (10)$$
This approximation is based on the assumption that the delay effect of job j on job j' will depend on the routings of both jobs and on the throughput times of job j, but not on the throughput times of job j'. Approximation (10) is used to compute the estimated delay effect as follows:
$$D_j = \frac{1}{|\mathcal{U}| - 1} \sum_{r \in \mathcal{R}} |\mathcal{G}_{j,r}| \, \bar{D}_{j,r} , \qquad (11)$$
where $\mathcal{R}$ is the set of all generic jobs. With Eq. (10),
$$D_j \approx \frac{1}{|\mathcal{U}| - 1} \sum_{r \in \mathcal{R}} \sum_{j' \in \mathcal{G}_{j,r}} \bar{D}_{j,j'} = \bar{D}_j .$$

Conceptually, $D_j$ is an approximation to $\bar{D}_j$, which is an approximation to $\bar{D}_{j,j'}$, $\forall j' \in \mathcal{U} - \{j\}$, which is an approximation to $D_{j,j'}$, $\forall j' \in \mathcal{U} - \{j\}$. Computationally, Orchard simply calculates $D_j$ by Eq. (11). To perform this calculation, Orchard must temporarily schedule job j. In this case, it is also necessary to update the busy intervals temporarily so that the system will be in the appropriate state to hypothetically schedule each generic job, r, to compute $c_{j,r}$. However, after $D_j$ has been computed, the busy intervals must be restored to their previous state, in order to compute $D_j$ for the next job. Thus, at the beginning of step 1, Orchard saves a copy of the busy intervals. Then, for each job, j, after computing $D_j$, Orchard restores the busy intervals to this saved state. Also, the loading times for these scheduling operations are determined by a method given in Section 11.
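The per-candidate loop just described (save the busy intervals, tentatively schedule job j, measure its delay effect on the generic jobs, restore, and finally pick the best ratio) can be sketched as follows; the formal outline that follows summarizes the same procedure. All callables and the dictionary of constituent sets are illustrative stand-ins, not the paper's implementation.

```python
def choose_next_job(unscheduled, generic_constituents, weights,
                    schedule_job, completion_of_generic,
                    save_busy_intervals, restore_busy_intervals):
    """Greedy selection of the next job to load (Section 9): estimate each
    candidate's delay effect D_j via the generic jobs, then pick the job
    maximizing w_j / D_j."""
    snapshot = save_busy_intervals()
    best_job, best_ratio = None, float("-inf")
    n_other = max(len(unscheduled) - 1, 1)
    for j in unscheduled:
        # Completion of each generic job r if it were scheduled next, before j
        # (its constituents exclude j, per the modified sets above).
        baseline = {r: completion_of_generic(r, exclude=j)
                    for r in generic_constituents}
        schedule_job(j)                                  # temporarily schedule j
        total_delay = 0.0
        for r, constituents in generic_constituents.items():
            delay_on_r = completion_of_generic(r, exclude=j) - baseline[r]
            total_delay += len(constituents - {j}) * delay_on_r
        D_j = max(total_delay / n_other, 1e-9)           # estimated delay effect
        restore_busy_intervals(snapshot)                 # undo the tentative schedule
        if weights[j] / D_j > best_ratio:
            best_job, best_ratio = j, weights[j] / D_j
    return best_job
```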

Orchard's sequencing heuristic proceeds as follows:
- Save busy intervals.
- For each job $j \in \mathscr{U}$:
  - For each generic job $r \in \mathscr{G}$:
    - Calculate processing and throughput times for generic job $r$.
    - Hypothetically schedule generic job $r$.
    - Compute $C_r$.
  - Schedule job $j$.
  - For each generic job $r \in \mathscr{G}$:
    - Hypothetically schedule generic job $r$.
    - Compute $C_{j,r}$ and $D_{j,r}$.
  - Compute $D_j$.
  - Restore busy intervals.
- Choose $\operatorname*{Argmax}_{j\in\mathscr{U}}\; w_j/D_j$.

To determine the worst-case complexity of the sequencing heuristic, let
- $X$ = the number of external jobs,
- $H$ = the maximum number of visits to any operation,
- $G$ = the number of generic visits.

The dominating term in the complexity corresponds to the two inner loops described above, "Hypothetically schedule generic job $r$". To schedule one visit may require a scan of all the busy intervals for the operation, and there may be $H$ of these. Since each generic job must be scheduled, each generic visit must be scheduled, which takes $O(GH)$ time. Since the outer loop iterates $|\mathscr{U}| \le X$ times, the sequencing heuristic runs in $O(XGH)$ time. The sequencing heuristic will be executed $X$ times in the global loop of the scheduling module (described in Section 4), so the contribution of the sequencing heuristic to the run time of Orchard is $O(X^2GH)$.
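For readers who prefer code, the following Python sketch mirrors the loop structure above. It is illustrative only: the `system` object, its busy intervals, and helpers such as `hypothetical_completion`, `schedule`, and `compute_times` are assumed stand-ins for machinery described elsewhere in the paper, not an implementation the paper supplies.

```python
import copy

def sequencing_heuristic(unscheduled_jobs, generic_jobs, weights, system):
    """Illustrative sketch: estimate the delay effect D_j of scheduling each
    candidate job j next, then pick argmax w_j / D_j (generalized Smith's rule)."""
    saved = copy.deepcopy(system.busy_intervals)          # "Save busy intervals"
    best_job, best_ratio = None, float("-inf")

    for j in unscheduled_jobs:
        # Completion time C_r of each generic job, using constituent sets that
        # exclude job j (Eqs. (8)-(9) with the modified sets).
        c_base = {}
        for r in generic_jobs:
            r.compute_times(excluding=j)
            c_base[r] = system.hypothetical_completion(r)

        system.schedule(j)                                # tentatively load job j

        # Delay effect D_{j,r} on each generic job, aggregated to D_j (Eq. (11)).
        total = 0.0
        for r in generic_jobs:
            c_jr = system.hypothetical_completion(r)
            total += r.constituent_count(excluding=j) * (c_jr - c_base[r])
        d_j = total / max(len(unscheduled_jobs) - 1, 1)

        system.busy_intervals = copy.deepcopy(saved)      # "Restore busy intervals"

        ratio = weights[j] / d_j if d_j > 0 else float("inf")
        if ratio > best_ratio:
            best_job, best_ratio = j, ratio

    return best_job
```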

10. Queuing objective

As has been noted, the quantities associated with a scheduled visit, $a_h$, $q_h$, and $c_h$, are only estimates. The actual arrival, queuing and completion times of a visit are stochastic quantities. In particular, the actual arrival of a visit $h$ may be delayed sufficiently beyond $a_h$ that there is no actual queuing time and the operation starves, i.e., some of its machines or operators incur idle time. On bottleneck operations, this constitutes a real loss of capacity. Thus, a certain level of planned queuing is desirable. Conversely, excessive queuing, which is equivalent to excessive work-in-process (WIP), is undesirable for many well-known reasons, including decreased responsiveness to changes in demand, the time value of the money which has been invested in the WIP, and (particularly in the board line case) increased congestion and the possibility of blocking. Thus, a small amount of queuing at each operation is desired.

The need to seek a desired level of queuing is incorporated into Orchard as a queuing objective. For each operation, $i$, Orchard computes a target queuing time, $q_i^*$, as a preprocessing step, before scheduling. (How Orchard computes $q_i^*$ is explained below.) Given this, the queuing objective is formulated as the following penalty function, which penalizes the expected absolute deviation from the target queuing time of each operation:

$$\sum_{h\in\mathscr{V}} E\,\bigl|\,q_h - q^*_{i(h)}\,\bigr|,$$

where $\mathscr{V}$ is the set of all visits of external jobs. This objective seeks a schedule such that $q_h = q^*_{i(h)}$ as nearly as possible for all visits $h$. As mentioned in Section 3, the queuing objective is subordinate to the WMFT objective.

To aid in determining target queuing times, Orchard requires the user to specify $B^*$, the "bottleneck target queue length". This is the desired number of jobs that would queue in front of a hypothetical bottleneck operation that processed each job once. Orchard uses this parameter and $t^*$, the system throughput time, to compute $Q^*$, the bottleneck target queuing time. These quantities are related by Little's law,

$$B^* = Q^*\,t^{*-1},$$

where $t^{*-1}$ is interpreted as the arrival rate of jobs at the bottleneck. Thus, Orchard computes the


target queuing time of any bottleneck operation, $i$, as

$$q_i^* = Q^* = B^*\,t^*. \qquad (12)$$

In addition to the bottleneck operations, some planned queuing is also necessary for near-bottleneck operations, i.e., those operations whose bounding utilization is high but less than 1. To define which operations are near-bottleneck, Orchard uses a parameter, $\rho^*$. An operation $i$ is considered to be near-bottleneck if its bounding utilization, $\rho_i$, satisfies $\rho^* < \rho_i < 1$. For operations $i$ with $\rho_i < \rho^*$, no planned queuing is considered necessary, so $q_i^* = 0$. For the near-bottleneck operations, $q_i^*$ drops off quadratically from its maximum of $Q^*$ at $\rho_i = 1$ to 0 at $\rho_i = \rho^*$. The general expression for the target queuing time is

$$q_i^* = Q^*\left[\frac{(\rho_i - \rho^*)^+}{1-\rho^*}\right]^2, \qquad (13)$$

where $x^+ = \operatorname{Max}(x,0)$. This expression has no particular theoretical foundation. It merely provides a means of allowing the target queuing time to drop off smoothly from $Q^*$ to 0.
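As a concrete illustration of Eqs. (12) and (13), the short function below computes a target queuing time from the bottleneck target queue length, the system throughput time, and the bounding utilization. The parameter names and numerical values are illustrative, not the paper's.

```python
def target_queuing_time(rho_i, rho_star, B_star, t_star):
    """Target queuing time q_i* for an operation with bounding utilization rho_i.
    B_star: bottleneck target queue length (jobs); t_star: system throughput time.
    Q* = B* t* by Little's law (Eq. 12); near-bottlenecks taper quadratically (Eq. 13)."""
    Q_star = B_star * t_star
    x = max(rho_i - rho_star, 0.0) / (1.0 - rho_star)
    return Q_star * x ** 2

# Example: with B* = 3 jobs, t* = 40 min and rho* = 0.8, a bottleneck (rho_i = 1.0)
# gets q* = 120 min, while a near-bottleneck with rho_i = 0.9 gets 30 min.
```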

11. Timing

Step 2 of Orchard's scheduling iteration determines a loading time, $L_j$, for the job chosen to be scheduled next. The loading time is chosen by a heuristic that seeks to achieve (as nearly as possible) the target level of queuing at each visit.

To facilitate computing loading times, Orchard maintains a loading time, $L_r$, for each generic job $r$. Initially, $L_r = 0$ for all generic jobs. Then, at the end of each scheduling iteration, each "generic loading time", $L_r$, is recomputed according to the timing heuristic described in this Section.

In general, Orchard's timing heuristic consists of two steps: (1) hypothetically schedule the job, $j$, at some time, $\hat L_j$; (2) adjust the loading time by some amount, $\Delta L_j^+$, in order better to achieve the queuing targets.

The resulting loading time is

$$L_j = \bigl[\hat L_j + \Delta L_j^+\bigr]^+.$$

Since job $j$ will be hypothetically scheduled at time $\hat L_j$, this should be chosen to be a reasonable loading time for job $j$. The ideal loading time is expected to depend mostly on the routing of the job, and relatively less on its processing times. Thus, Orchard takes

$$\hat L_j = L_{r(j)},$$

where $r(j)$ is the generic job corresponding to job $j$. (For a generic job $r$, let $r(r) = r$. This is relevant when the timing heuristic is being applied to a generic job.)

Step 2 of the timing heuristic computes an adjustment, $\Delta L_j^+$, for the loading time of job $j$. In reality, the ideal adjustment would depend on which branch the job actually takes in its routing tree. For any visit $h$ of job $j$, let $\mathscr{P}_h$ be the path to visit $h$, the set of visits on the routing tree of job $j$ leading up to and including visit $h$. Let

$$\Delta\bar L_h = \sum_{h'\in\mathscr{P}_h} \bigl(q_{h'} - q^*_{i(h')}\bigr). \qquad (14)$$

Orchard takes this as the ideal adjustment given that the path $\mathscr{P}_h$ is taken. The intention is to compensate for the deviations from the queuing targets.

Since it is not known which path will actually be taken, Orchard chooses a compromise of the $\Delta\bar L_h$ values. An adjustment greater than $\Delta\bar L_h$ would tend to reduce queuing below the target level on $\mathscr{P}_h$, which would run the risk of starving bottleneck operations. Orchard requires that this occur with low probability. For a given adjustment, $\Delta L_j$, let

$$\mathscr{F}_j(\Delta L_j) = \bigl\{\, h \in \mathscr{T}_j \;\big|\; \Delta L_j > \Delta\bar L_h \,\bigr\},$$

be the set of terminal visits for which the adjustment is too high (here $\mathscr{T}_j$ denotes the set of terminal visits of job $j$). Given a probability threshold, $\alpha$, Orchard chooses the maximum adjustment, $\Delta L_j$, such that the probability is no larger than $\alpha$ that a path is taken leading to a terminal visit in $\mathscr{F}_j(\Delta L_j)$. (In experiments, $\alpha$ was set at 0.2.) Thus,

$$\Delta L_j^+ = \max\Bigl\{\, \Delta L_j \;\Bigm|\; \sum_{h\in\mathscr{F}_j(\Delta L_j)} p_h \le \alpha \,\Bigr\}. \qquad (15)$$

Algorithmically, $\Delta L_j^+$ is computed as follows: Initialize $\mathscr{F}_j = \emptyset$. Add visits from $\mathscr{T}_j$ to $\mathscr{F}_j$ in increasing order of $\Delta\bar L_h$, until

$$\sum_{h\in\mathscr{F}_j} p_h > \alpha.$$


Let $h^*$ be the last visit added to $\mathscr{F}_j$. Set

$$\Delta L_j^+ = \Delta\bar L_{h^*}.$$

Clearly, this satisfies Eq. (15).

A loading time is required any time a job is scheduled (hypothetically or otherwise), and this occurs in several places in the Orchard algorithm. However, the timing heuristic is not used for all of these cases. In fact, it is only used $R+1$ times per scheduling iteration, where $R$ is the number of routings. The timing heuristic is called once in step 2, to compute the loading time of the selected job, and once for each generic job, $r \in \mathscr{G}$, at the end of the scheduling iteration, in order to maintain the generic loading times. The only other time Orchard requires a loading time is at three steps of the sequencing heuristic: "Hypothetically schedule generic job $r$" (two steps) and "Schedule job $j$" (one step). For this purpose, it is not necessary to compute a precise loading time, and so Orchard uses the generic loading times. Thus, $L_{r(j)}$ is used as the loading time for job $j$ in the sequencing heuristic.

To determine the worst-case complexity of the timing heuristic, let $A$ be the maximum number of visits in any routing. The dominating term in the complexity is $O(A \log A)$, corresponding to the work of sorting the terminal visits in order to compute $\Delta L_j^+$. The $R+1$ calls to the timing heuristic contribute $O(RA \log A)$ to the complexity of the scheduling iteration, and so the contribution of the timing heuristic to the complexity of Orchard is $O(XRA \log A)$.
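A minimal sketch of the adjustment computation follows, assuming the terminal visits of the job are available as (ideal adjustment, path probability) pairs; the sort reflects the $O(A \log A)$ term noted above. The names are illustrative, not the paper's.

```python
def loading_time_adjustment(terminal_visits, alpha=0.2):
    """Sketch of the timing-heuristic adjustment (Eqs. (14)-(15)).
    terminal_visits: assumed list of (delta_L_bar, p_h) pairs, one per terminal
    visit h, giving the ideal adjustment along its path and its path probability."""
    ordered = sorted(terminal_visits, key=lambda v: v[0])
    cumulative, chosen = 0.0, 0.0
    for delta_L_bar, p_h in ordered:
        cumulative += p_h
        chosen = delta_L_bar
        if cumulative > alpha:     # stop once the "too high" probability exceeds alpha
            break
    return chosen                   # Delta L_j^+ = Delta L_bar at the last visit added

def loading_time(L_hat, terminal_visits, alpha=0.2):
    """L_j = [L_hat + Delta L_j^+]^+, with L_hat taken as the generic loading time."""
    return max(L_hat + loading_time_adjustment(terminal_visits, alpha), 0.0)
```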

With the timing heuristic defined, the entire Orchard algorithm with the WMFT objective is now defined, except operator assignment. In Ref. [2], it is shown that the complexity of operator assignment is $O((KI)^3)$, where $K$ is the number of "teams" of operators and $I$ is the number of operations that require some human intervention. The overall complexity of Orchard is comprised of three terms, corresponding to operator assignment, the sequencing heuristic, and the timing heuristic:

$$O\bigl((KI)^3\bigr) + O\bigl(X^2GH\bigr) + O\bigl(XRA\log A\bigr).$$

At the end of an iteration, let $\mathscr{S}$ be the set of external jobs scheduled so far. Let

$$L = \operatorname*{Max}_{j\in\mathscr{S}}\; L_j.$$

Orchard takes $L$ as a heuristic estimate of the earliest possible loading time of all future jobs. This is useful in the case when there are so many external jobs that it is only necessary to schedule a subset of them. In this case, the user specifies a "planning horizon", $L^*$, and Orchard terminates scheduling as soon as $L \ge L^*$. Since it is expected that all future jobs would be loaded after time $L$, the portion of the schedule up to time $L$ is expected to be the same as would have resulted if all jobs were scheduled. If the user intends to re-run Orchard by time $L^*$, then this truncated schedule will suffice. In some cases, scheduling a subset can greatly reduce the run time. Let $S$ be the number of external jobs scheduled. Then the complexity of Orchard becomes

$$O\bigl((KI)^3\bigr) + O\bigl(SXGH\bigr) + O\bigl(SRA\log A\bigr).$$

Additionally, the final value of $L$ is useful for defining statistics. Such quantities as capacity utilization can be computed as an average over the interval $[0, L]$.

12. The targeting objective

In addition to the WMFT objective defined in Section 2, an alternative objective for Orchard was formulated to accommodate specific concerns at the board line. This line was being managed according to practices known as continuous flow manufacturing (CFM). One goal resulting from the CFM approach was to achieve, at each sector, a nearly constant inter-departure time (the time between successive boards leaving the sector), specified in advance by management.

Constant inter-departure time was an appropriate goal for this line. In most of the sectors, the processing times were largely independent of the board being processed. In these sectors, if the input rate could be kept constant, one could expect a constant output rate, which would, in turn, facilitate a constant input rate at the next sector. If this could be achieved at all sectors, a stable, low-inventory product flow would result. However, this was much more difficult to achieve in sectors where the processing times varied greatly from board to board. For these sectors, achieving the inter-departure time goal required a more sophisticated scheduling algorithm, such as Orchard.


Thus, it was important for Orchard's objective to reflect the need for constant inter-departure time.

Orchard's alternative objective is a goal-programming type of objective, called the "targeting" objective, which consists of the following two aspects:
(1) An inter-departure time objective.
(2) The queuing objective (defined in Section 10).

For the purpose of defining the inter-departure time objective, it is assumed that there is an output buffer of jobs already completed and waiting to go to the next sector. It is also assumed that jobs are removed from this output buffer (to go to the next sector) at a linear rate, i.e., one job every $\bar t$ minutes, where $\bar t$ is the desired inter-departure time. The inter-departure time part of the objective seeks to maintain the output buffer at a fixed level (a user-specified target).

Let $b(t,k)$ be Orchard's estimate of the number of jobs in the output buffer at time $t$, if $k$ jobs have completed by that time. Let $b_0 = b(0,0)$ be the initial output buffer level. Then Orchard's linear depletion rate model sets

$$b(t,k) = b_0 + k - \lfloor t/\bar t\,\rfloor,$$

where $\lfloor x \rfloor$ denotes the greatest integer $\le x$. For the $j$th job loaded, Orchard specifies a

target completion time, $C_j^*$, that would maintain the output buffer at the target level. In defining $C_j^*$, it is assumed that the $j$th job loaded will be the $(Y+j)$th job to complete, where there are $Y$ jobs initially inside the system. Let $b^*$ be the target output buffer level. Then $C_j^*$ must satisfy

$$b(C_j^*,\, Y+j) = b^*.$$

Solving for $C_j^*$,

$$C_j^* = (b_0 + Y + j - b^*)\,\bar t.$$
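A small worked sketch of these target completion times under the linear depletion model; the function and parameter names are illustrative, not the paper's notation.

```python
def target_completion_times(num_jobs, b0, Y, b_star, t_bar):
    """Target completion time C_j* for the j-th job loaded (j = 1..num_jobs),
    under the linear depletion model b(t, k) = b0 + k - floor(t / t_bar)."""
    return [(b0 + Y + j - b_star) * t_bar for j in range(1, num_jobs + 1)]

# Example: b0 = 2 jobs initially in the output buffer, Y = 4 jobs inside the
# system, target buffer b* = 3, desired inter-departure time t_bar = 30 min:
# target_completion_times(3, 2, 4, 3, 30) -> [120, 150, 180]
```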

For the $j$th job loaded, the inter-departure time part of the objective penalizes the absolute deviation between its actual completion time, $C_j$, and its target completion time, $C_j^*$. Thus, the inter-departure time part of the objective is

$$\sum_{j\in\mathscr{X}} E\,\bigl|\,C_j - C_j^*\,\bigr|,$$

where $\mathscr{X}$ is the set of external jobs.

The targeting objective is just a weighted average of the inter-departure time objective and the queuing objective,

Minimize:
$$W \sum_{j\in\mathscr{X}} E\,\bigl|\,C_j - C_j^*\,\bigr| \;+\; (1-W) \sum_{h\in\mathscr{V}} E\,\bigl|\,q_h - q^*_{i(h)}\,\bigr|,$$

where $W$ is a user-specified weight, with $0 < W < 1$. Assuming $W < 1$, the queuing objective is being given more importance here than in the WMFT objective, where it was used only to break ties. The targeting objective was formulated in this way because, if the inter-departure time objective were allowed to dominate, then in the case of a slow inter-departure time ($\bar t$), the schedule might be forced to starve bottleneck machines in order to achieve an inter-departure time approaching $\bar t$, and this was considered undesirable for the board line. The weighted objective allows these two concerns to be treated on a commensurate basis.
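To make the weighted combination concrete, the following sketch evaluates the targeting objective for a tentative schedule, assuming the weighted-average form given above; the data layout and names are ours.

```python
def targeting_objective(completion_devs, queuing_devs, W):
    """Weighted targeting objective: W times the inter-departure (completion-time)
    deviations plus (1 - W) times the queuing-target deviations, in absolute value.
    completion_devs: C_j - C_j* per external job;
    queuing_devs: q_h - q*_{i(h)} per visit of an external job."""
    itd = sum(abs(d) for d in completion_devs)
    queue = sum(abs(d) for d in queuing_devs)
    return W * itd + (1.0 - W) * queue
```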

13. Scheduling with the targeting objective

The only part of Orchard that needs to be altered to reflect the targeting objective is the sequencing heuristic. Here, a technique similar to the WMFT case is used, i.e., the effect of a job on all other unscheduled jobs is approximated by the effect of that job on all generic jobs. However, instead of measuring the delay effect on other jobs, Orchard measures the effect on the other jobs' contribution to the objective.

Let $\mathscr{V}_j$ be the set of visits of job $j$. The deviation of job $j$ is defined as

$$Z_j = W\,E\,\bigl|\,C_j - C_j^*\,\bigr| + (1-W)\sum_{h\in\mathscr{V}_j} E\,\bigl|\,q_h - q^*_{i(h)}\,\bigr|,$$

which is the contribution of job j to the targeting objective.

Consider Orchard at any iteration of its scheduling module. In the spirit of Section 9, let $Z_j$ be the hypothetical deviation of job $j$, if it were scheduled next, and let $Z_{j,j'}$ be the hypothetical deviation of job $j'$, if jobs $j$ and $j'$ were the next two jobs scheduled. Further, let $Z^*_{j,j'}$ be the hypothetical deviation of job $j'$, if job $j$ were scheduled


next and job $j'$ and all other unscheduled jobs were scheduled in the optimal remaining sequence. Then the optimal job to schedule next would be the one that minimizes

$$E_j^* = Z_j + \sum_{j'\in\mathscr{U}-\{j\}} Z^*_{j,j'}.$$

It is easy to see how $E_j^*$ could be optimized by a dynamic programming approach whose state space is exponential in $|\mathscr{U}|$. Of course, such an approach would be impractical. A practical heuristic would be to make the approximation $Z^*_{j,j'} \approx Z_{j,j'}$, and choose the job that minimizes

$$E_j = Z_j + \sum_{j'\in\mathscr{U}-\{j\}} Z_{j,j'}.$$

This would be an approximate dynamic programming approach similar to that used in Ref. [15]. One way to interpret $E_j$ is to observe that the second term in the expression is proportional to the mean hypothetical deviation of the unsequenced boards if they were scheduled after job $j$. Thus, the job that minimizes $E_j$ is the one that would put the system in its best state if it were scheduled next.

Calculating $E_j$ would require the same amount of work as calculating $\bar D_j$ (in Section 9). Since this is judged to be more computation than is justified, Orchard uses an approximation similar to that used in Section 9. For each generic routing tree, Orchard constructs a generic job, $r$, and uses its hypothetical deviation as an approximation to the mean of the hypothetical deviations of all its constituents:

$$Z_{j,r} \approx \frac{1}{|\hat{\mathscr{C}}_{j,r}|} \sum_{j'\in\hat{\mathscr{C}}_{j,r}} Z_{j,j'}.$$

This leads to the following approximation:

$$E_j \approx Z_j + \sum_{r\in\mathscr{G}} |\hat{\mathscr{C}}_{j,r}|\, Z_{j,r}.$$

Finally, Orchard chooses

$$\operatorname*{Argmin}_{j\in\mathscr{U}}\; E_j$$

as the next job to schedule.

14. Off-line tests

After the Orchard algorithm was designed and a prototype program for it was written, two main

tasks remained in order to implement Orchard as a sector scheduler for the board line: testing the algorithm and creating a user interface. The main purpose of the user interface would be to collect the considerable data required by Orchard, either manually or by direct access to on-line data bases.

Thus, the next task was to perform computational tests on Orchard. Two kinds of tests were performed: "off-line" tests followed by "on-line" tests. The purpose of the off-line tests was to measure the quality of Orchard's heuristically generated schedules. Ideally, Orchard's schedule should be compared to an optimal schedule, but of course, this was too difficult to obtain. Instead, Orchard's schedules were compared to schedules generated by a method representing current practice on the board line.

The initial tests were quite simple. Orchard was compared to an algorithm that was nearly identical to Orchard, differing from Orchard only in the most important aspect: the sequencing heuristic. At each iteration of the scheduling module, instead of choosing the next job to schedule by Orchard's method, the prioritized-first-in-first-out (PFIFO) rule was used. This rule chooses the job of highest priority, breaking ties in favor of the earliest job to arrive. This rule seemed to be a reasonable approximation to actual practice on the line.
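For concreteness, a minimal sketch of the PFIFO comparison rule as described (highest priority first, ties broken by earliest arrival); the tuple layout is an assumption of ours.

```python
def pfifo_next_job(jobs):
    """PFIFO baseline: pick the highest-priority job, breaking ties in favor of the
    earliest arrival. Each job is an assumed (priority, arrival_time, job_id) tuple."""
    return max(jobs, key=lambda job: (job[0], -job[1]))

# Example: pfifo_next_job([(2, 10.0, "A"), (2, 5.0, "B"), (1, 0.0, "C")]) -> (2, 5.0, "B")
```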

Real data was used for one day's production at one sector. The data formed five test cases. Orchard (with the WMFT objective) and the PFIFO rule were run for each case, and the results are shown in Table 1. The WMFT values are compared for Orchard and the PFIFO rule. In all cases, Orchard gave better (lower) WMFT values, with improvements of 17%-33%.

Table 1
Weighted mean flow times

Case   Number of boards   PFIFO   Orchard   Improvement (%)   CPU time (s)
1      81                 2505    1685      33                175
2      18                  850     646      24                 15
3      17                 1023     828      19                 14
4      25                  775     640      17                 22
5      21                  623     500      20                 16


Table 1 also shows CPU times on an IBM PS/2 model 80 at 16 MHz running DOS. Case 1 ran considerably longer than the rest, due to an unusually large number of boards. This would have been an excellent case for scheduling a subset of the boards. Assuming this is done, the CPU times shown in Table 1 indicate that Orchard runs fast enough to be used reactively, i.e., it would be quite feasible to run Orchard several times during a shift, in response to disruptions.

15. On-line tests

The off-line tests described above and similar tests were sufficiently encouraging to warrant proceeding to the next step: on-line tests. The on-line tests consisted of loading boards into a sector of the board line according to a schedule generated by Orchard. The purpose of these tests was to determine whether or not Orchard performed well enough in the real environment to merit the effort of implementing it.

The following approach was used: The test was performed at the beginning of the second shift of the day. Orchard was installed on an IBM PC/XT on the floor at the sector. Most of the data was collected in advance. At the last minute, just as the second shift was beginning, the dynamic data was collected and entered. This included the exact list of boards to be scheduled as well as the locations of the boards inside the sector, and the operator data. Orchard was run with this data, using the targeting objective. The sector operator was then asked to load boards into the sector according to the schedule generated by Orchard. The resulting behavior of the sector was observed and measured.

In this case, no comparison with an alternative could be made, since the schedule was being executed under unique conditions that could never be repeated. Instead, these tests were to provide qualitative insights. As it turned out, the on-line tests revealed substantial obstacles to the productive use of Orchard at the board line.

Two on-line tests were performed. The first test revealed that the program was too inconvenient to use on-line, due to input and output considerations. Fortunately, this problem was easy to remedy by changing the program.

The second test revealed a much deeper problem. A key portion of the data for Orchard is a list of the processing times of each job at each operation. For the board line, the processing times could only be estimated, based on a few key parameters, such as the number of wires to be bonded. Some deviation from the estimates was to be expected, due to such factors as differing operator speeds. However, the second on-line test revealed that the deviations between estimated and actual processing times were much greater than was previously believed. For 75% of the processing times measured, the actual time deviated from the predicted time by a factor of at least 2 in either direction, i.e., the error was at least +100% or -50%. While Orchard was designed to be tolerant of small deviations from estimated processing times, i.e., 10-20%, the observed level of deviation would render Orchard's schedule meaningless. Indeed, it would be difficult to devise any scheduling algorithm to address the issues described in Section 2 in the presence of wildly unpredictable processing times. In any event, it was clear that Orchard could not be adapted to do so. Based on this critical discovery, the author decided to terminate the project of implementing Orchard as a sector scheduler for the board line.

16. Directions for further research

Further research that might usefully be conducted on Orchard would fall into two main categories: testing the algorithm and making improvements to various aspects of it. The tests described in the preceding two Sections were quite limited. The off-line tests only dealt with the sequencing heuristic and compared it with one very simple policy. The on-line tests ran into problems so quickly that there was no opportunity to explore the various aspects of the algorithm. (Indeed, there is no way to know whether or not there were any other aspects of Orchard and/or the board line that would have prevented its use there.) Thus, most of the approximations and heuristics used in Orchard remain untested.

An excellent way to test Orchard would be by discrete event simulation. One would build a simulation model of some manufacturing system, e.g., a sector of the board line.


The simulation model would randomly generate the various stochastic events, such as stochastic branching, variations in processing time, etc., and schedule the loading of jobs by calling Orchard for an initial schedule and then repeatedly calling Orchard to reschedule in reaction to each major disruption, such as a machine failure, or the arrival of new jobs. In this mode, one could investigate under what circumstances Orchard's various assumptions and approximations are valid. One could also use the simulation approach to perform comparison tests. In this case, two simulation runs would be executed, one using Orchard as the scheduling algorithm and one using an alternative method, with both runs using the same sequence of stochastic events. Thus, each job would follow the same path in its routing tree in both simulation runs and each visit of each job would have the same processing time in both runs, etc. This would provide a very fair comparison. Using this paradigm, one could compare the various components of Orchard to alternatives, or one could compare Orchard to a completely different method. To date, discrete event simulation tests of Orchard have not been carried out, due to the substantial programming effort involved.

There are many aspects of Orchard for which further testing would be warranted. It would be important to investigate the effect of unmodeled variability on Orchard's performance as a heuristic. The on-line tests revealed variability in processing time at the board line that was clearly too much for Orchard to be useful, but the question remains, "How much variability in processing time can Orchard tolerate before its performance degrades significantly?". The simulation approach would be excellent for this. A similar question to investigate by this means would be, "How much branching in the routing trees can Orchard tolerate?".

Orchard's queuing objective was formulated in order that the resulting schedules would tend to avoid two problems that Orchard does not explicitly model: blocking, if a buffer becomes full, and unpredicted bottleneck starvation, if jobs arrive at a bottleneck significantly later than modeled or if they complete significantly sooner than expected. One could use simulation to determine

the extent to which the schedules generated by Orchard avoid these problems.

It would be useful to compare the sequencing heuristic against more intelligent heuristics than the PFIFO rule that was considered in Section 14. One kind of heuristic to compare would be one that follows Orchard's scheme of generalizing Smith's rule, by maximizing $w_j/D_j$, while using a different (perhaps simpler or faster) approach for computing the estimated delay effect, $D_j$. However, heuristics that do not use this approach should also be compared.

In addition to testing Orchard in its present form, one could make various improvements to Orchard on theoretical grounds. Those parts of Orchard whose purpose is to manage the internal queues could benefit from the use of concepts and approximations from queueing theory. Especially, this would apply to the formulation of the queuing objective and the timing heuristic. See also Appendix B for an alternate approach to queue management.

In a real manufacturing system, there will inevitably be a certain amount of unscheduled idle time on the bottlenecks and near-bottlenecks (i.e., other than the idle time represented by the gaps between the busy intervals of these operations). Some of this can be avoided by specifying a high bottleneck target queue length,


making essentially deterministic, mean-value assumptions about a problem which is actually stochastic in nature. It is well known in queuing theory that queuing times usually increase with the variability of the problem. Thus, by ignoring variability, Orchard is probably underestimating queuing time. This might be amenable to another scaling adjustment, i.e., defining a queuing time expansion factor, $\phi \ge 1$, and replacing Eq. (6) with

$$q_h = (c_h - a_h - t_h)\,\phi, \qquad (16)$$

where appropriate values for $\phi$ might be determined by queueing-theoretic considerations or by simulation. In any event, simulation should probably be used to investigate the effectiveness of Orchard's preemptive approach to estimating queuing times, given in Eqs. (6) or (16).
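A minimal illustration of Eq. (16), assuming (as the form of the equation suggests) that the planned queuing time of a visit is its scheduled completion time minus its arrival and processing times; the names are ours.

```python
def planned_queuing_time(c_h, a_h, t_h, phi=1.0):
    """Planned queuing time of a visit: completion minus arrival minus processing,
    optionally inflated by an expansion factor phi >= 1 (phi = 1 recovers Eq. (6))."""
    return (c_h - a_h - t_h) * phi
```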

It would be useful to investigate how to set the various input parameters to Orchard. In general, the simulation approach could be used for these, but in some cases, an analytical model would be appropriate. Appropriate values for the tree truncation threshold, $\epsilon$, and the near-bottleneck utilization, $\rho^*$, should probably be determined by simulation. A queuing-theoretic model could probably be developed for setting $B^*$, the bottleneck target queue length, and for $b^*$, the target output buffer level.

It would definitely be useful to investigate how to set the weights, $w_j$, in the WMFT objective. Indeed, the chief drawback of the WMFT objective is that, while management usually knows how to prioritize jobs in the sense of a rank ordering, there is no well-defined method for translating this information into numerical weights, and unfortunately, the schedules tend to be sensitive to the actual weights. Some model needs to be developed, perhaps an economic one.

17. Conclusion

Orchard is a generic scheduling algorithm that was designed to address the various issues that arose at a particular printed circuit board line. Unfortunately, it turned out not to be usable at this board line, due to a critical issue that was not foreseen: dramatically unpredictable processing times. However, since many

manufacturing systems have relatively predictable processing times, Orchard has the potential to be usefully applied at some other manufacturing line, probably after further tests and modifications. In fact, as of this writing, Orchard is being adapted, tested, and deployed for use as a sector scheduler at a semiconductor line. In any event, some of the concepts of Orchard may be of general use.

Appendix A: Input data

For a coherent view of Orchard, it may be helpful to refer to the following statement of the input data required to run Orchard. There are three types of input data:
(1) Static problem data, which defines the line and tends to stay the same from one run to the next.

(2) Dynamic problem data, which specifies those aspects of the problem that tend to change every time Orchard is run.

(3) Tuning parameters, which affect how Orchard solves the problem.

The distinction between static and dynamic data is not precise. In fact, from the point of view of the Orchard algorithm, there is no real need to make such a distinction. However, this distinction is useful from the point of view of applying the algorithm.

All time data is expressed in whole numbers. In the list below, each portion of data is accompanied by a bracketed reference to the text; a schematic sketch of one possible way to organize these data in code follows the list.

Static problem data:
- The set of operations (Section 6).
- The set of teams [2].
- The set of routings (Section 5).
- For each team k:
  - The set of operations on which the team is skilled [2].
- For each routing (Section 5):
  - A directed graph of visits, defining the routing.
  - A designated load visit in the graph.
  - For each visit h:
    - The operation $i(h)$ being visited.
  - For each arc in the graph:


    - The conditional probability of traversing this arc, given that the job has already reached the visit from which the arc is incident.

- For each operation i:
  - For each operation j that is a successor to operation i on some routing:
    - The transport time from operation i to operation j (Section 8).

Dynamic problem data:
- For each team k:
  - The number of operators, $g_k$, in team k [2].
- For each operation i (Section 7 and [2]):
  - The number of machines, $m_i$, that perform operation i. (Assuming the machines are subject to failure (and repair), $m_i$ is dynamic data.)

- For each job j:
  - Its routing (Section 5).
  - For each visit h in the routing:
    - Its processing time, $t_h$ (Sections 6, 7 and 8).
    - Its operator time, $o_h$ (Sections 6 and 7).
  - If it is an internal job:
    - The current visit of the job (Section 5). (The visit at which it is currently in process or queuing.)

- If the WMFT objective is used (Sections 3 and 9):
  - For each external job j:
    - Its weight, $w_j$, in the WMFT objective.
- If the targeting objective is used, the following scalar values are required (Section 12):
  - Initial output buffer level, $b_0$.
  - Target output buffer level, $b^*$.
  - Output buffer depletion time, $\bar t$.
  - Inter-departure time objective weight, $W$.

Tuning parameters:
- The routing tree truncation threshold, $\epsilon$ (Section 5).
- Bottleneck target queue length, $B^*$ (Section 10).
- Near-bottleneck utilization, $\rho^*$ (Section 10).
- The planning horizon, $L^*$ (optional) (Section 11).
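The listing above is data, not an algorithm; purely as an illustration, the sketch below shows one possible way to organize it in code. All class and field names are ours, not Orchard's, and several details (e.g., how routings are stored) are simplified assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class Visit:
    operation: str                                  # the operation i(h) being visited
    successors: List[Tuple[int, float]] = field(default_factory=list)  # (visit index, branch probability)

@dataclass
class Routing:
    visits: List[Visit]
    load_visit: int                                 # index of the designated load visit

@dataclass
class StaticData:
    operations: List[str]
    teams: Dict[str, List[str]]                     # team -> operations the team is skilled on
    routings: Dict[str, Routing]
    transport_times: Dict[Tuple[str, str], float]   # (from operation, to operation) -> time

@dataclass
class JobData:
    routing: str
    processing_times: Dict[int, float]              # visit index -> processing time
    operator_times: Dict[int, float]                # visit index -> operator time
    current_visit: Optional[int] = None             # set for internal jobs only
    weight: float = 1.0                             # w_j, used with the WMFT objective

@dataclass
class TuningParameters:
    epsilon: float                                  # routing tree truncation threshold
    B_star: float                                   # bottleneck target queue length
    rho_star: float                                 # near-bottleneck utilization
    L_star: Optional[float] = None                  # planning horizon (optional)
```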

Appendix B: An alternate approach to queue management

Orchard's means of managing the internal queues may seem rather weak from a conceptual point of view. As mentioned in Section 16, the most promising way to deal with this aspect of the problem would probably be to develop an approach based on queuing theory. However, the following alternative approach (which does not employ queuing theory) is at least simpler and perhaps more conceptually satisfying than the approach given in the main text.

Two aspects of Orchard are oriented towards queue management: the queuing objective and the timing heuristic. The main conceptual drawback of the queuing objective given in Section 10 is the quadratic expression (13) for the target queuing time of near-bottleneck operations, which is probably more complicated than can be justified on a conceptual basis. A simpler approach would be to dispense with the concept of near-bottlenecks and define bottlenecks discretely. Thus, any operation $i$ such that $\rho_i \ge \rho^*$ is designated a bottleneck and all other operations are designated non-bottlenecks. (A higher value of $\rho^*$ should probably be used in this context.)

Since (by definition) the utilization of the non-bottlenecks is low, one can expect that the queuing at these operations will also tend to be low, regardless of the loading schedule. Thus, there is no need to manage these queues, and they are left out of the queuing objective. The target queuing time for all bottlenecks is given by Eq. (12), where (lacking a theoretical basis for anything more complicated) this is now applied uniformly to all operations $i$ such that $\rho_i \ge \rho^*$. Let $\mathscr{V}^{b}$ be the set of all visits of external jobs to bottleneck operations. The queuing time objective becomes

$$\sum_{h\in\mathscr{V}^{b}} E\,\bigl|\,q_h - Q^*\,\bigr|.$$

Also, a corresponding revision is made in the deviation term, $Z_j$, used in the sequencing heuristic for the targeting objective in Section 13.

The main conceptual drawback of the timing heuristic is that it seeks to achieve the queuing targets of all of the job's visits simultaneously, as reflected in Eq. (14).


This is probably too ambitious a goal for a loading time heuristic. Realistically, the only queuing time that can be controlled by adjusting the loading time of a job is the queuing time at the first bottleneck in its routing. The queuing times of visits before the first bottleneck will tend to be low regardless of the loading time, while the queuing times of visits after the first bottleneck will usually not be affected by the loading time, because the first bottleneck will tend to absorb any loading time adjustments into its own queuing time. Thus, it would be more realistic to choose a loading time that attempts to manage the queue at the first bottleneck and ignores the other queues.

Of course, since the routings are trees, a job may have several "first bottlenecks", depending on which path it follows. Let $\mathscr{B}_j$ be the set of first bottleneck visits of job $j$, defined by

$$\mathscr{B}_j = \bigl\{\, h \in \mathscr{V}_j \;\big|\; \rho_{i(h)} \ge \rho^* \ \text{and}\ \rho_{i(h')} < \rho^*\ \ \forall h' \in \mathscr{P}_h - \{h\} \,\bigr\}.$$

Thus, visit $h$ is a first bottleneck if, and only if, the only bottleneck on the path up to and including visit $h$ is visit $h$ itself. Note that

$$\sum_{h\in\mathscr{B}_j} p_h \le 1,$$

which is the probability that job $j$ will visit at least one bottleneck.

The alternative timing heuristic replaces the loading time adjustment, $\Delta L_j^+$, given in Section 11 by

$$\Delta L_j^+ = \sum_{h\in\mathscr{B}_j} p_h\,\bigl(q_h - Q^*\bigr).$$

This is a weighted average compromise of the loading time adjustments necessary to cancel out the deviation from the queuing target at each first bottleneck. Implicitly, this expression assumes that the queuing on the path leading up to each first bottleneck is sufficiently low that the effect of the adjustment passes through these visits and has full impact at the first bottleneck. Given this new expression for $\Delta L_j^+$, the loading time is computed as in Section 11.
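A one-line sketch of this alternative adjustment, assuming the first-bottleneck visits are available with their path probabilities and currently planned queuing times; the names are ours.

```python
def alternative_adjustment(first_bottlenecks, Q_star):
    """Alternative loading-time adjustment: a probability-weighted compromise of the
    corrections needed at each first-bottleneck visit.
    first_bottlenecks: assumed list of (p_h, q_h) pairs for the first-bottleneck visits."""
    return sum(p_h * (q_h - Q_star) for p_h, q_h in first_bottlenecks)
```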

The dominating term in the complexity of this timing heuristic is simply the time it takes to schedule the job. This must be done $O(R)$ times per iteration, which is less than the number of job scheduling operations that must be performed for sequencing. Thus, the alternative timing heuristic would be a low order term in the overall complexity of Orchard.

Acknowledgements

The author is grateful to Robert Juba for his many vital contributions to the Orchard project. Additional contributions were made by many other people in the TCM Board Manufacturing and Advanced TCM Board Process Manufacturing Engineering Departments of IBM Poughkeepsie. An earlier version of this paper was given a very careful and insightful reading by an anonymous editor of the Journal of Manufacturing and Operations Management. Various improvements to the paper, and especially many of the ideas in Section 16, are due to this editor.

References

[1] Dietrich, B.L. and Snowdon, J.L., 1986. Private communication.
[2] Wittrock, R.J., 1989. Solving an operator assignment problem using network flows. Research Report RC 15276, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY.
[3] Smith, W.E., 1956. Various optimizers for single-stage production. Naval Research Logistics Quarterly, 3: 59-66.
[4] Garey, M.R., Johnson, D.S. and Sethi, R., 1976. The complexity of flowshop and jobshop scheduling. Math. Oper. Res., 1: 117-129.
[5] Bean, J.C. and Birge, J.R., 1985. Match-up real-time scheduling. Technical Report 85-22, Dept. of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI.
[6] Morton, T.E., Lawrence, S.R., Rajagopolan, S. and Kekre, S., 1986. MRP-STAR: PATRIARCH's planning module. Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
[7] Lageweg, B.J., Lenstra, J.K. and Rinnooy Kan, A.H.G., 1977. Job-shop scheduling by implicit enumeration. Manage. Sci., 24: 441-450.
[8] Kimemia, J.G. and Gershwin, S.B., 1983. An algorithm for the computer control of production in flexible manufacturing systems. IIE Trans., 15: 353-362.
[9] Gershwin, S.B., Akella, R. and Choong, Y.F., 1985. Short-term production scheduling of an automated manufacturing facility. IBM J. Res. Develop., 29: 392-400.
[10] Wein, L.M., 1988. Optimal control of a two-station brownian network. Math. Oper. Res., 15: 215-242.
[11] Wein, L.M., 1988. Scheduling networks of queues: Heavy traffic analysis of a two-station network with controllable units. Oper. Res., 38: 1065-1078.


[12] Wittrock, R.J., 1990. Operator assignment and the parametric preflow algorithm. Manage. Sci., 38: 1354-1359.

[13] Gallo, G., Grigoriadis, M.D. and Tarjan, R.E., 1989. A fast parametric maximum flow algorithm and applications. SIAM J. Comput., 18: 30-55.

[14] Morton, T.E., Lawrence, S.R., Rajagopolan, S. and Kekre, S., 1988. SCHED-STAR: A price-based shop scheduling module. J. Manufact. Oper. Manage., 1: 131-181.

[15] Wittrock, R.J., 1988. An adaptable scheduling algorithm for flexible flow lines. Oper. Res., 36: 445-453.