A HEURISTIC ALGORITHM FOR FLOW SHOP SEQUENCING PROBLEMS

by

RAJASEKHAR AALLA, B.E.

A THESIS IN INDUSTRIAL ENGINEERING

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of

MASTER OF SCIENCE IN INDUSTRIAL ENGINEERING

Approved

Accepted

December, 1992


ACKNOWLEDGEMENTS
I wish to express my deepest appreciation to Dr. Surya D. Liman for his invaluable guidance and assistance throughout the preparation of this thesis; without his help this thesis would not have materialized. I would also like to thank Drs. Milton L. Smith and William J. Kolarik for their invaluable support and suggestions.
I also thank all my friends for their ideas, assistance and encouragement.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS

CHAPTER
I. INTRODUCTION
II. LITERATURE REVIEW
III. THE PROPOSED ALGORITHM
    Development of the Proposed Algorithm
    The Proposed Algorithm
    A Numerical Illustration
IV. COMPUTATIONAL EXPERIENCE
    Test Problems
    Makespan Accuracy
    Sensitivity Analysis
    Computational Efficiency
    Computational Effectiveness
    Analysis of Results
V. SUMMARY AND RECOMMENDATIONS

REFERENCES
LIST OF TABLES

1. General description of the heuristics mentioned in research
2. Makespan averages calculated for 200 replications for each method
3. Results of Dunnett's parametric test for makespan performance
4. Friedman's nonparametric multiple comparison test results
5. Empirical comparison of proposed algorithm with NEH algorithm
6. Empirical comparison of proposed algorithm with CDS algorithm
7. Empirical comparison of proposed algorithm with Palmer's algorithm
8. Percentage relative difference from the NEH solution for the new heuristic by number of jobs and machines
9. CPU time chart showing computational time taken by each method for each size of problem
LIST OF FIGURES
1. Slope order index principle used in Palmer's algorithm showing fit of simple schedule in slope order and reverse
2. Types of delay in flow shop schedule
3. Solution obtained by proposed algorithm for the numerical illustration
4. Solution obtained by both Palmer's and King and Spachis' algorithms for the numerical illustration
9. CPU time in seconds for 4 machine problems
10. CPU time in seconds for 6 machine problems
11. CPU time in seconds for 10 machine problems
12. CPU time in seconds for 20 machine problems
13. Average computational effectiveness for 10 machine 6 job problems
14. Average computational effectiveness for 10 machine 10 job problems
15. Average computational effectiveness for 10 machine 20 job problems
16. Average computational effectiveness for 10 machine 30 job problems
17. Average computational effectiveness for 10 machine 50 job problems
18. Average computational effectiveness for 10 machine 75 job problems
19. Average computational effectiveness for 10 machine 100 job problems
LIST OF SYMBOLS

a         Last job scheduled in the partial sequence
b         Job being considered for scheduling immediately after job a
Caj       Completion time of job a on machine j in the partial sequence
Cbj       Completion time of job b on machine j in the partial sequence
CPU time  Computer processing time on an IBM 386 machine
ei        Slope index sign factor of job i
i         Number of the job under consideration
iabj      Idle time between jobs a and b on machine j in a partial sequence
Iab       Total weighted idle time between jobs a and b when job b is processed immediately after job a
k         Temporary variable
m         Total number of machines (stages) in the flow shop
MS        Makespan obtained by an algorithm for an m-machine, n-job problem
n         Total number of jobs to be sequenced
N         Iteration counter with initial value equal to the total number of jobs, i.e., n
pij       Processing time of job i on machine j
Pab       Ratio for deciding priority for job b for processing after job a in the partial sequence
Si        Slope index of job i
Wb        Job weight or slope index factor of job b
CHAPTER I
INTRODUCTION
Defined as "the allocation of resources over time to perform a collection of tasks,"
scheduling has evolved from simple routines to highly complex heuristics capable of
controlling high-tech environments (Baker, 1974). The importance of scheduling will only
increase as technology moves into the next century and as the complexity of manufacturing
systems increases.
Baker's systems approach for solving scheduling problems is similar to the scientific method; its fundamental steps are problem formulation, analysis, synthesis, and evaluation. The identification of the problem and the development of the criteria for the decision-making process take place during the formulation stage. The
analysis encompasses a detailed process of examination. The synthesis builds alternative
solutions to the problem. Finally, the evaluation compares all feasible solutions and selects
the desirable course of action. In many instances, it is impossible or infeasible to build a
model that truly represents the system under study. However, this does not prevent the
model from providing useful information for the decision maker. Because mathematical
models cannot always capture the essence of the system and define it in terms of an
objective function and a series of constraints, other methodologies have been developed.
Heuristic methods, network analysis, and simulation modeling are three examples of
methodologies for solving scheduling problems. These methods are powerful tools with
specific applications that eliminate the need to build intricate mathematical models for
systems with a high degree of complexity (Baker, 1974).
The basic elements of a system under study are resources and tasks. The resources
(machines) are required to perform a service on the tasks (jobs) and are classified as being
of one or several types. Resources of one type are usually referred to as single-stage
resources, while resources of several types are referred to as multiple stage or multistage
resources. Both of these types of resources can be available for parallel or serial processing
of the tasks. When more than one resource is available for performing the same set of
tasks, then such resources are called parallel processors. On the other hand, if a task itself
is multistage in nature and requires more than one processor for successive operations, then
such resources are called serial processors.
Bedworth (1987) classified all systems as static or dynamic. If the set of tasks
available for scheduling does not change over time, the system is considered static.
However, if the set of available tasks does vary with time, the system is considered dynamic.
Furthermore, if the parameters of the problem (e.g., processing times, arrival times, due
dates, etc.) are known in advance, the problem is called a deterministic problem. Otherwise,
it is called a stochastic problem. Whichever type of system is under study, there are
common measures of performance. These measures of performance usually involve
information about all jobs and are generally one-dimensional. Some examples of
measures of performance are machine utilization, maximum job completion time (makespan), and waiting time (as a function of tardiness, due dates, setup times, etc.).
Systems can also fall into one of four general categories: flow shop, job shop,
dependent shop and general or unordered shop. A flow shop is characterized by all jobs
having the same properties and following the same path or sequence of machines. The job
shop is one where there are different job types, each having its own characteristics and
following a particular path through a group of machines. In a dependent shop, the order of
operations depends on other jobs. A general shop is one in which all jobs can be processed
in any order.
Over the last three decades, tremendous research effort has been expended for the
solution of static flow shop problems with the minimization of makespan as the
performance measure. Such problems are labeled n/m/F/Cmax (n-job/m-machine/flow
shop/with maximum completion time as performance measure) by Conway et al. (1967).
Makespan is defined as the time that elapses between the start of the first job in the
sequence on the first machine and the finish time of the last job in the sequence on the last
machine.
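For concreteness, the makespan of a given permutation can be computed with the standard completion-time recursion. The following is a minimal C sketch of that computation; the array bounds, function names, and the small data set (the four-job, three-machine illustration used later in Chapter III) are illustrative assumptions and not part of the original Turbo C programs.

```c
#include <stdio.h>

#define MAXJOBS 100
#define MAXMACH 20

/* Completion-time recursion for a permutation flow shop:
   C[i][j] = max(C[i-1][j], C[i][j-1]) + p[seq[i]][j].
   The makespan is the completion time of the last job on the last machine. */
long makespan(int n, int m, long p[][MAXMACH], const int seq[])
{
    long C[MAXJOBS][MAXMACH];
    int i, j;

    for (i = 0; i < n; i++) {
        for (j = 0; j < m; j++) {
            long ready_machine = (i > 0) ? C[i - 1][j] : 0;  /* previous job on this machine */
            long ready_job     = (j > 0) ? C[i][j - 1] : 0;  /* this job on previous machine */
            C[i][j] = (ready_machine > ready_job ? ready_machine : ready_job)
                      + p[seq[i]][j];
        }
    }
    return C[n - 1][m - 1];
}

int main(void)
{
    /* Processing times of the four-job, three-machine illustration of Chapter III
       (rows are jobs, columns are machines). */
    long p[MAXJOBS][MAXMACH] = {
        {99, 46, 65}, {43, 89, 59}, {20, 99, 64}, {49, 2, 16}
    };
    int seq[] = {2, 0, 1, 3};   /* job sequence 3-1-2-4 in zero-based indexing */

    printf("makespan = %ld\n", makespan(4, 3, p, seq));  /* prints 329 */
    return 0;
}
```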
Whatever measure of performance is considered as the optimizing criterion, flow shop research is limited within the framework of certain standard assumptions. Some of the
important assumptions reported in the literature are:
(1) All the n jobs are simultaneously available at the beginning of the planning
period (a case of static problem).
(2) A single job cannot be processed simultaneously by more than one machine.
(3) The processing time of each job on each machine is known and is
deterministic.
(4) Set-up and transportation times are independent of the sequence and are
included in the processing times of the jobs.
(5) All m machines are available at the beginning of the planning period and
are ready to take up any of the n jobs.
(6) No machine can process more than one job at a time.
(7) There is only one machine of each type in the shop.
(8) Pre-emption is not permitted, i.e., once a job has started its processing on
any machine, it must be carried through to completion on that machine.
There is a special class of flow shop problem called permutation flow shop where
there is no job passing. That is, once a job is started on the first machine, it continues its
processing on each of the subsequent machines with the same sequence priority relative to
the other jobs. This is typically the situation in many manufacturing plants where jobs are
moved from machine to machine by conveyor. This type of problem is labeled as
n/m/PF/Cmax problem (n-job/m-machine/permutation flow shop/with maximum completion
time as the measure of performance).
Flow shops emulate the scheduling flexibility of job shops to some extent while retaining the efficiency of dedicated production lines. Such systems should
increase productivity by reducing inventory and throughput time. A flow shop's
productivity performance is affected by operational flexibility which is impacted by the
sequencing procedure that is used. Therefore, efficient and effective flow shop sequencing
procedures contribute significantly to the efficient utilization of the plant.
This research deals with the static permutation flow shop problem with makespan as
the measure of performance. The next section explains the research objectives and the
methodology adopted.
Research Objectives
With the current growing interest in group technology (GT), just-in-time
manufacturing (JIT), computer-integrated manufacturing (CIM), and others, manufacturing
systems design is moving away from the job shop orientation, and is moving towards a flow
shop or hybrid job shop orientation. Thus, the importance of flow shop-oriented research
promises to increase in the future (Stafford, 1988). As stated earlier in the previous section,
productivity of the flow shop is affected directly by the sequencing decisions made.
The objectives of this investigation are to examine the n-job, m-machine static
permutation flow shop sequencing problem with the objective of minimizing makespan, to
review the literature, and to determine if an effective, approximate algorithm can be
developed. A survey of the literature indicates that the existing algorithms either solve the problem optimally or near-optimally with a considerable amount of computational effort, or compute very efficiently in terms of CPU time but give solutions that are far from optimal, especially for large problems.
There are three main areas of concentration in this investigation.
1. Investigation of the previous approaches to understand the underlying
assumptions and to study the effectiveness of those assumptions in solving the problem.
2. Analysis and development of an effective algorithm which is both efficient in
terms of CPU time and accurate in giving an optimal or near-optimal solution for the static case of
the permutation flow shop problem.
3. Evaluation of the algorithm developed by comparing its performance with some
well known algorithms.
The next section describes the methodology adopted to cover each one of the three
main areas of investigation.
Methodology of Research
Scientific methodology of analysis, synthesis and evaluation is used during this
investigation. During the analysis stage of this investigation, existing heuristic solutions for
the permutation flow shop problems are studied thoroughly. Although the heuristics
chosen for analysis are wide-ranging in their approaches to the problem, they are
representative, rather than an exhaustive collection of the sequencing heuristics. The
analysis performed in this study revealed patterns and characteristics of some efficient
heuristics. Some cases where these heuristics tend to perform less accurately are identified.
Based on these observations, during the synthesis stage, a set of rules for effective
scheduling of n jobs on an m-machine static and deterministic flow shop is developed. These
rules are later used to develop a new algorithm for sequencing jobs in an n-job, m-machine flow shop. In the evaluation stage of this investigation, this new heuristic is tested for accuracy in terms of makespan, efficiency in terms of computational time, and effectiveness in terms of accuracy versus computational effort. Comparisons are made with three well-known heuristics: the Palmer, CDS, and NEH heuristics. Various graphical, empirical, and statistical methods are used for comparison. Dunnett's parametric test and Friedman's nonparametric test are employed to evaluate the algorithm statistically. A sensitivity analysis of the performance of the new algorithm with respect to the performance of the NEH heuristic is performed to analyze the effect of problem size on the accuracy of the solution. Finally, based on the results, some appropriate suggestions and conclusions are
mentioned.
The next section provides the literature review necessary for an investigation of this
type. It presents the cornerstones of flow shop scheduling theory and the latest
developments.
CHAPTER II
LITERATURE REVIEW
The permutation flow shop sequencing problem is a combinatorial search problem
with n! possible sequences. If one could enumerate all n! sequences, the sequence with the minimum total completion time could be identified, but this procedure is quite expensive and
impractical.
This interesting problem has inspired a considerable amount of research for the past
three decades. The first reported study was that of Johnson (1954). He formulated and
solved the two-machine, n-job problem and showed that the same permutation of jobs can
be used on both machines. He also showed that if there are m machines, an optimal
schedule will consist of an identical job sequence on the first two machines and on the last two
machines. Since then, the research on the static flow shop sequencing has been on the
development of solution techniques. Elmaghraby (1968) classified these techniques as
optimizing, statistical, and heuristic approaches. Each of these is discussed in the following
paragraphs.
Optimizing Algorithms
Four distinct optimizing solution approaches have been used for the static flow shop problem. Gupta et al. (1968) listed these as programming approaches, combinatorial
search techniques, branch and bound techniques, and graphical approaches.
Programming Approaches
Proponents of the programming approach regard sequencing as a conventional
mathematical programming problem, solvable by the application of well-established
techniques. These approaches include linear, dynamic, integer, and mixed integer
programming. The literature is quite replete on these approaches. Dantzig (1959),
Wagner (1959), Stafford (1975), Selen and Hott (1986), Stafford (1988), and Wilson
(1989) are some of the contributors.
Combinatorial Search Techniques
This approach relies on changing one sequence to another through the switching of
jobs that satisfy some specified criteria (Gupta et al., 1968). Dudek and Teuton (1964), and
Smith and Dudek (1967) have, among others, proposed combinatorial search techniques.
Branch and Bound Techniques
Essentially, the branch and bound method is a curtailed enumeration in which lower
bounds are computed for all branches of the solution tree. The branch with the best bound
is selected as part of the sequence. Lomnicki (1965), Ignall and Schrage (1965), Brown and Lomnicki (1966), McMahon and Burton (1967), Panwalkar and Khan (1975), Brooks and White (1975), and Bestwick and Hastings (1976) are among notable researchers using this
approach.
Graphical Approach
Hargrave and Nemhauser (1963) suggested a graphical solution approach for the
three machine case. The difficulty here is the visualization of the topological consideration
for a greater number of machines. The literature is sparse on reports of graphical
approaches.
Computational Complexity of Optimizing Techniques
Karp (1974) stated that most of the computationally complex combinatorial
problems are NP-complete in nature. Flow shop problems as well as job shop problems,
with one or two special case exceptions, are NP-complete. NP-complete problems form an
extensive equivalence class of combinatorial problems for which no nonenumerative algorithms are known. Therefore, the time complexity of an NP-complete problem is an exponential function of the size of the problem. Karp also added that the branch and bound
and programming techniques, discussed in the previous sections, are effective for NP-
complete problems but limited by the exponential growth of computational time as a
function of problem size.
Using the theory of computational complexity, Garey et al. (1976) proved that the n-job, m-machine flow shop problem (the subject of this research) is NP-complete for m ≥ 3. For NP-complete problems, Karp (1974) suggested that heuristics have proved more successful. Elmaghraby (1968), Gupta (1973), Kohler and Steiglitz (1975), and Garey et al. (1976) also suggest that for larger, more complicated inputs, the best
approach may well be to seek statistical or heuristic solutions which will result in near-
optimal results.
Statistical Solutions
The most common statistical method relative to the static flow shop problem has
been the Monte Carlo sampling method (Ashour, 1970, for example). In this method,
sequences are randomly generated and the sequence with the best makespan is selected.
The accuracy of the solution generally depends on the number of random sequences generated. Hence, accuracy can be increased at the cost of computational time.
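As an illustrative sketch of this sampling idea (not any specific published procedure), the following C fragment evaluates a fixed number of random permutations and keeps the one with the smallest makespan; the sample count, data, and helper names are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAXJ 100
#define MAXM 20

static long makespan(int n, int m, long p[][MAXM], const int seq[])
{
    long C[MAXJ][MAXM];
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++) {
            long a = (i > 0) ? C[i - 1][j] : 0;
            long b = (j > 0) ? C[i][j - 1] : 0;
            C[i][j] = (a > b ? a : b) + p[seq[i]][j];
        }
    return C[n - 1][m - 1];
}

/* Monte Carlo sampling: evaluate 'samples' random sequences, keep the best. */
static long monte_carlo(int n, int m, long p[][MAXM], int samples, int best_seq[])
{
    int seq[MAXJ], i, s, r, tmp;
    long best = -1, cur;

    for (i = 0; i < n; i++) seq[i] = i;
    for (s = 0; s < samples; s++) {
        for (i = n - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
            r = rand() % (i + 1);
            tmp = seq[i]; seq[i] = seq[r]; seq[r] = tmp;
        }
        cur = makespan(n, m, p, seq);
        if (best < 0 || cur < best) {
            best = cur;
            for (i = 0; i < n; i++) best_seq[i] = seq[i];
        }
    }
    return best;
}

int main(void)
{
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int best_seq[MAXJ], i;

    srand((unsigned) time(NULL));
    printf("best makespan over 1000 samples: %ld\nsequence:",
           monte_carlo(4, 3, p, 1000, best_seq));
    for (i = 0; i < 4; i++) printf(" %d", best_seq[i] + 1);
    printf("\n");
    return 0;
}
```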
Heuristic Solutions
Various heuristic approaches have been tried in the past. Page (1961), for example,
used computer sorting methods involving individual job exchanging, job group exchanging,
pairing and merging of jobs. In the exchanging case, starting with a specific job sequence,
each successive pair of adjacent jobs is tested to see whether such a change will lower the
makespan of the schedule. If an improvement is achieved, then the process is repeated. The
same process is applied to exchanging strings of jobs instead of single jobs. The pairing
and merging of strings is based on regrouping each successive pair of job strings into a new
ordered string of jobs and testing for an improvement in the makespan. In essence, the
method involves a form of restricted enumeration.
Page also introduced the idea of a job priority function based on job processing
times which could be used to compute a priority number for determining the job processing
sequence in flow shop scheduling problems. This approach was taken up by Palmer
(1965), in the specification of a job priority function which he called the "slope index" for
the job. The form of priority function was deliberately chosen to give priority to jobs that
have a tendency to progress from short to long processing times in their passage through
the machines. Figure 1 illustrates Palmer's geometrical argument. The term "slope index"
is appropriately chosen since it reflects the geometry of the Gantt chart representation of the
schedule, whereby, in effect, the jobs are arranged by this priority process in accordance
with the "slopes of the job time-block profiles."
Palmer defined his slope order index for job i, S_i, as

$S_i = \sum_{j=1}^{m} \left[\, m - (2j - 1) \,\right] p_{ij}.$
The sequence is then determined by ranking the jobs on the basis of the slope order index, ordering the jobs in non-decreasing order of S_i, so that jobs whose processing times tend to increase from the first machine to the last are placed first.
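As a sketch, Palmer's ordering can be implemented by computing the slope index of each job and then sorting the jobs so that those whose processing times tend to increase toward the later machines come first. The array bounds, names, and data below are illustrative assumptions; for the four-job, three-machine illustration of Chapter III this sketch reproduces the Palmer sequence 3-2-4-1 reported there.

```c
#include <stdio.h>
#include <stdlib.h>

#define MAXJ 100
#define MAXM 20

struct job_slope { int job; long slope; };

/* ascending order of slope index (jobs with increasing time profiles first) */
static int by_slope_asc(const void *x, const void *y)
{
    const struct job_slope *a = x, *b = y;
    return (a->slope > b->slope) - (a->slope < b->slope);
}

/* Slope order index: S_i = sum_{j=1..m} [m - (2j - 1)] * p_ij */
void palmer_sequence(int n, int m, long p[][MAXM], int seq[])
{
    struct job_slope s[MAXJ];
    int i, j;

    for (i = 0; i < n; i++) {
        s[i].job = i;
        s[i].slope = 0;
        for (j = 0; j < m; j++)                  /* j is zero-based: 2j-1 -> 2(j+1)-1 */
            s[i].slope += (long)(m - (2 * (j + 1) - 1)) * p[i][j];
    }
    qsort(s, n, sizeof s[0], by_slope_asc);
    for (i = 0; i < n; i++) seq[i] = s[i].job;
}

int main(void)
{
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int seq[MAXJ], i;

    palmer_sequence(4, 3, p, seq);
    for (i = 0; i < 4; i++) printf("%d ", seq[i] + 1);   /* prints 3 2 4 1 */
    printf("\n");
    return 0;
}
```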
Gupta (1971) suggested another algorithm which is similar to Palmer's except that
he defined the slope index in a slightly different manner. He noted that when Johnson's rule is optimal in the three-machine case, it is of the form $S_1 > S_2 > \dots > S_n$, where

$S_i = \dfrac{e_i}{\min\{\, p_{i1} + p_{i2},\; p_{i2} + p_{i3} \,\}}$, with $e_i = 1$ if $p_{i1} < p_{i2}$ and $e_i = -1$ otherwise.

Generalizing from this structure, Gupta proposed that for m > 2 the job index be calculated as

$S_i = \dfrac{e_i}{\min_{1 \le j \le m-1}\{\, p_{ij} + p_{i,j+1} \,\}}$, with $e_i$ defined as above.
Figure 1. Slope order index principle used in Palmer's algorithm showing fit of simple schedule in slope order and reverse.
Hundal and Rajagopal (1988) recognized that Palmer's schedule ignores machine (m+1)/2 when m is odd, and this may affect the quality of the schedule, especially when n is large. They extended Palmer's heuristic by computing two other sets of slope indices which account for machine (m+1)/2 when m is odd. Two more schedules are produced and the best of the three is selected. The two other sets of slope indices are

$s_i = \sum_{j=1}^{m} (m - 2j)\, p_{ij}$   and   $s_i = \sum_{j=1}^{m} (m - 2j + 2)\, p_{ij}.$
A variation on this theme has been proposed by Bonney and Grundy (1976) which
involves two slope indices for each job, a "start-slope" and an "end-slope." They devised a
slope matching method (BONNE1) using geometric relationships between the cumulative
processing times. The shape of the job profiles is approximated by fitting linear regression
lines to the start and end time of each operation, and then a job sequence is chosen by
matching the start slope of one job with the end slope of the previous job. In an extension
to their method, they also employed Johnson's two machine algorithm to match jobs in their
slope matching method (BONNE2). Subject to the assumption that start and end slopes are
a reasonable representation of the job profile, these slopes are used to derive an equivalent
two-machine problem.
Petrov (1966) proposed a heuristic to generate a pair of not necessarily different
schedules, choosing the best schedule with regard to makespan. This heuristic is actually an
extension of Johnson's algorithm for minimizing makespan in the two-machine flow shop.
An altogether different heuristic approach has been suggested by Campbell, Dudek and Smith (1970), which consists essentially of splitting the m machines into two groups, the first with machines 1, 2, ..., k and the second with machines m-k+1, m-k+2, ..., m, where k = 1, 2, ..., m-1. If p_ij is the processing time for job i on machine j, then the combined times for any job i within each of the two machine groups may be formed respectively as

$\sum_{j=1}^{k} p_{ij}$   and   $\sum_{j=m-k+1}^{m} p_{ij}.$
If we now consider the two groups as constituting an artificially created two-machine
problem, with job times as given by the combined times above, then it is possible to solve
this problem using Johnson's algorithm. The optimal sequence for this artificially created
problem thus forms a permutation sequence for the original problem. This process is
repeated for each of the m-1 artificial two-machine problems that are possible. The heuristic
solution which gives the lowest makespan time is selected as the best solution to the original
problem.
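A compact sketch of this grouping idea, with Johnson's two-machine rule as a helper, is given below. The names, bounds, and data are illustrative assumptions, and ties are broken arbitrarily; for the four-job, three-machine illustration of Chapter III this sketch reproduces the CDS sequence 3-2-1-4 with makespan 348 reported there.

```c
#include <stdio.h>
#include <string.h>

#define MAXJ 100
#define MAXM 20

/* completion-time recursion; returns the makespan of 'seq' */
static long makespan(int n, int m, long p[][MAXM], const int seq[])
{
    long C[MAXJ][MAXM];
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++) {
            long u = (i > 0) ? C[i - 1][j] : 0;
            long v = (j > 0) ? C[i][j - 1] : 0;
            C[i][j] = (u > v ? u : v) + p[seq[i]][j];
        }
    return C[n - 1][m - 1];
}

/* Johnson's rule for an n-job two-machine problem with times a[] and b[]. */
static void johnson(int n, const long a[], const long b[], int seq[])
{
    int head = 0, tail = n - 1, used[MAXJ] = {0}, i, k;

    for (k = 0; k < n; k++) {
        long best = 0; int pick = -1, front = 0;
        for (i = 0; i < n; i++) {
            long key = (a[i] < b[i]) ? a[i] : b[i];   /* smallest remaining time */
            if (!used[i] && (pick < 0 || key < best)) {
                best = key; pick = i; front = (a[i] < b[i]);
            }
        }
        used[pick] = 1;
        if (front) seq[head++] = pick;   /* short first-machine time: schedule early */
        else       seq[tail--] = pick;   /* short second-machine time: schedule late  */
    }
}

/* CDS: solve m-1 artificial two-machine problems and keep the best sequence. */
long cds(int n, int m, long p[][MAXM], int best_seq[])
{
    long a[MAXJ], b[MAXJ], best = -1, ms;
    int seq[MAXJ], i, j, k;

    for (k = 1; k <= m - 1; k++) {
        for (i = 0; i < n; i++) {
            a[i] = b[i] = 0;
            for (j = 0; j < k; j++)      a[i] += p[i][j];   /* machines 1..k     */
            for (j = m - k; j < m; j++)  b[i] += p[i][j];   /* machines m-k+1..m */
        }
        johnson(n, a, b, seq);
        ms = makespan(n, m, p, seq);
        if (best < 0 || ms < best) {
            best = ms;
            memcpy(best_seq, seq, n * sizeof seq[0]);
        }
    }
    return best;
}

int main(void)
{
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int seq[MAXJ], i;

    /* prints: CDS makespan: 348, sequence: 3 2 1 4 */
    printf("CDS makespan: %ld, sequence:", cds(4, 3, p, seq));
    for (i = 0; i < 4; i++) printf(" %d", seq[i] + 1);
    printf("\n");
    return 0;
}
```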
Dannenbring (1977) suggested a variation of the CDS heuristic. The method is
called the rapid access procedure (RA) and its purpose is to provide a good solution as quickly
and easily as possible. Instead of solving m-1 artificial two-machine problems, it solves
only one problem in which the processing times are determined from a weighting scheme.
As in many cases a single transposition of adjacent jobs in the RA solution sequence yielded a better solution, Dannenbring proposed two improvement processes: the rapid access
with close order search (RACS) and the rapid access with extensive search (RAES). He
defined a neighbor as a new sequence that can be formed by the transposition of a single
pair of adjacent jobs. With the RACS heuristic, n-1 new sequences are examined in order
to find a better objective function value. The last method proposed by Dannenbring is an
improvement of the RACS heuristic. Instead of terminating the search after one set of
interchanges, RAES uses the best immediate neighbor to generate further neighbors. This
process is continued as long as new solutions with better values can be found.
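The adjacent-transposition idea can be sketched as a single improvement pass over a starting sequence, as below. This is a generic neighborhood check in the spirit of RACS rather than Dannenbring's exact procedure, and the starting sequence, names, and data are illustrative assumptions.

```c
#include <stdio.h>

#define MAXJ 100
#define MAXM 20

static long makespan(int n, int m, long p[][MAXM], const int seq[])
{
    long C[MAXJ][MAXM];
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++) {
            long u = (i > 0) ? C[i - 1][j] : 0;
            long v = (j > 0) ? C[i][j - 1] : 0;
            C[i][j] = (u > v ? u : v) + p[seq[i]][j];
        }
    return C[n - 1][m - 1];
}

/* One pass over the n-1 adjacent transpositions of 'seq'; each swap that
   improves the makespan is kept.  Returns the resulting makespan. */
long adjacent_pass(int n, int m, long p[][MAXM], int seq[])
{
    long best = makespan(n, m, p, seq), trial;
    int i, tmp;

    for (i = 0; i + 1 < n; i++) {
        tmp = seq[i]; seq[i] = seq[i + 1]; seq[i + 1] = tmp;      /* swap neighbours */
        trial = makespan(n, m, p, seq);
        if (trial < best)
            best = trial;                                         /* keep the swap   */
        else {
            tmp = seq[i]; seq[i] = seq[i + 1]; seq[i + 1] = tmp;  /* undo it         */
        }
    }
    return best;
}

int main(void)
{
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int seq[] = {2, 1, 0, 3};   /* start from the sequence 3-2-1-4 (makespan 348) */
    int i;

    /* for this data one pass improves the makespan to 329 with sequence 3 1 2 4 */
    printf("after one pass: makespan %ld, sequence", adjacent_pass(4, 3, p, seq));
    for (i = 0; i < 4; i++) printf(" %d", seq[i] + 1);
    printf("\n");
    return 0;
}
```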
Gelders and Sambandam (1978) presented a heuristic (GELDER) for minimizing
a complex cost function and suggested the extension of this heuristic for the makespan
problem. They used dynamic lateness and sequenced the jobs in such a way as to minimize
idle time.
King and Spachis (1980) argued that minimizing total machine idle time (which is
made up of run-in delay, between-job delay and run-out delay as shown in Figure 2) is
equivalent to minimizing total makespan. Thus, the heuristic (SPAC1) is used to build up a
single chain of sequence. The next job is always selected so as to minimize total between-
jobs delay, and that job is added to the end of the chain from the set of unscheduled jobs.
Each job is considered in turn as the first job in the sequence in order to decide the best one.
King and Spachis (1980) also mentioned that delays on early machines might have
no ultimate effect on the makespan, whereas delays on later machines are much more likely
to have a "knock-on" effect that would cause delay to be induced into the fmal machine.
Therefore, any delay on the last machine has a direct consequence in extending the
makespan. Hence, in order to provide a penalty for between-job delay, a weighting factor is
employed (SPAC2). The simplest weighting factor and the one finally chosen by them was
the sequence number of the machine.
Nawaz, Enscore and Ham (1983) assumed that a job with a higher total processing
time needs more attention than a job with a lower processing time. They proposed a new
curtailed-enumeration algorithm (NEH) which finds the best partial sequence by an
exhaustive search. An important fact must be emphasized: the NEH algorithm does not transform the original m-machine problem into a two-machine one like CDS or RAES. It builds the final sequence in a constructive way, adding a new job at each step and finding the best partial solution. Each time a new job is added, the relative positions of the jobs already sequenced are fixed; the new job is tried at various relative positions and the best position is
finalized. An interesting study comparing NEH with RAES as well as CDS has been made
by Booth and Turner (1987). Their study shows that NEH performs better than both CDS
and RAES in terms of makespan performance.
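A sketch of the NEH insertion scheme is given below; the tie-breaking at equal total processing times or equal partial makespans is left arbitrary, and the bounds, names, and data are illustrative assumptions. For the four-job, three-machine illustration of Chapter III this sketch reproduces the NEH sequence 3-4-2-1 with makespan 334 reported there.

```c
#include <stdio.h>
#include <stdlib.h>

#define MAXJ 100
#define MAXM 20

static long makespan(int n, int m, long p[][MAXM], const int seq[])
{
    long C[MAXJ][MAXM];
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++) {
            long u = (i > 0) ? C[i - 1][j] : 0;
            long v = (j > 0) ? C[i][j - 1] : 0;
            C[i][j] = (u > v ? u : v) + p[seq[i]][j];
        }
    return C[n - 1][m - 1];
}

struct tot { int job; long sum; };
static int by_sum_desc(const void *x, const void *y)
{
    const struct tot *a = x, *b = y;
    return (a->sum < b->sum) - (a->sum > b->sum);
}

/* NEH: order jobs by non-increasing total processing time, then insert each job,
   one at a time, at the position of the partial sequence that gives the smallest
   partial makespan. */
long neh(int n, int m, long p[][MAXM], int seq[])
{
    struct tot t[MAXJ];
    int trial[MAXJ], len, i, j, pos, best_pos;
    long best, ms;

    for (i = 0; i < n; i++) {
        t[i].job = i; t[i].sum = 0;
        for (j = 0; j < m; j++) t[i].sum += p[i][j];
    }
    qsort(t, n, sizeof t[0], by_sum_desc);

    seq[0] = t[0].job;
    for (len = 1; len < n; len++) {
        best = -1; best_pos = 0;
        for (pos = 0; pos <= len; pos++) {              /* try every insertion point */
            for (i = 0; i < pos; i++) trial[i] = seq[i];
            trial[pos] = t[len].job;
            for (i = pos; i < len; i++) trial[i + 1] = seq[i];
            ms = makespan(len + 1, m, p, trial);
            if (best < 0 || ms < best) { best = ms; best_pos = pos; }
        }
        for (i = len; i > best_pos; i--) seq[i] = seq[i - 1];   /* make room */
        seq[best_pos] = t[len].job;
    }
    return makespan(n, m, p, seq);
}

int main(void)
{
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int seq[MAXJ], i;

    /* prints: NEH makespan: 334, sequence: 3 4 2 1 */
    printf("NEH makespan: %ld, sequence:", neh(4, 3, p, seq));
    for (i = 0; i < 4; i++) printf(" %d", seq[i] + 1);
    printf("\n");
    return 0;
}
```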
Widmer and Hertz (1989) developed a two-phase solution called SPIRIT to find an initial sequence using an analogy with the traveling salesman problem and then to improve
this solution using taboo search techniques. They also claim that their method can be made
flexible to consider the trade-off between solution speed and solution quality by simply
adjusting a parameter which indicates the number of improvements of the objective function.
Figure 2. Types of delay in a flow shop schedule.
In addition to all the above mentioned heuristics, there are reports about some
improvement heuristics wherein the initial solution is obtained from one of the above-mentioned methods. Generally, techniques like pairwise switching, neighborhood search, simulated annealing, or taboo search are employed in this type of
heuristics. Examples include Ho and Chang (1991), Osman and Potts (1989), Ogbu and
Smith (1990a, 1990b), and Widmer and Hertz (1989).
Many of the above mentioned heuristics are compared and analyzed in some
interesting studies (Dannenbring, 1977; Park et al., 1984; and Turner and Booth, 1986, for
example).
The next section explains the analysis of the above mentioned heuristics and
development of a new algorithm based on the rules developed from the analysis.
CHAPTER III
THE PROPOSED ALGORITHM

Development of the Proposed Algorithm
For developing the new procedure, most of the existing heuristics for flow shop
sequencing problem are analyzed thoroughly. These heuristics can generally be divided into
three categories based on the underlying concepts as follows:
(1) the application of Johnson's two machine algorithm,
(2) the generation of a slope index for the job processing times, and
(3) the minimization of the total idle time on the machine.
Table 1 gives a general description of heuristics chosen from the literature. It is
interesting to note from this table that all the algorithms, with the exception of NEH, fall into at least one of the above three categories. The power of the NEH algorithm seems to be in its generation and testing of a large number, n(n+1)/2 - 1, of partial sequences. But from the computational point of view, the application of this algorithm might be restrictive, at least for problems with large n.
There are several other algorithms that can give reasonably good solutions with
very little computational time. As stated earlier, this research is aimed at developing a better
solution in a more efficient manner.
It can be observed from the literature that most of the efficient (with respect to
computational time) algorithms are based on the generation of slope indices. However, all
these methods assume an overall regularity of progressively increasing or decreasing job
time-block profiles which can appropriately be described by simple slope indices. If the
Table 1. General description of the heuristics mentioned in research

Heuristic name        Basic idea of sequencing procedure           Number of sequences generated
PALMER                slope index                                  1
GUPTA                 slope index                                  1
PETRO                 Johnson's algorithm                          2
BONNE1                slope index                                  1
Proposed algorithm    slope index and total machine idle time      1
job time-block profiles are irregular in shape, then they are less than adequately described by linear slope indices. The situation is particularly acute in problems involving fewer machines, and slope index methods in these cases generally perform badly. In larger problems, the job time-block profile may be better approximated by a linear form. Bonney and Grundy (1976), for example, calculated the slopes using linear regression of the start
and finish times of the job time-block profiles and reported results that indicate that the
slope matching performs better in larger problems.
In this research, a slope matching mechanism, based on approximation and the real-time, sequence-dependent idle times, is developed. However, the idle times are given weights
depending on the machine in the technological sequence on which the idle time occurred.
Logically, the idle time on the earlier machine is more critical than that on the later machine,
as the idle time on an earlier machine tends to delay completion times of jobs on all the
successive machines. This conclusion is contrary to the inferences made by King and
Spachis (1980). As stated in the literature review in the previous section, they argued that
delays on later machines are much more likely to have a "knock-on" effect that would cause delay to be induced into the final machine. But in this study it is noticed that this is true only in some cases, when the overall work load on the later machines is relatively greater than that of the earlier machines. If the earlier machines have an equal or greater overall work load, the idle times on the earlier machines seem to have an adverse effect on the makespan.
Therefore, based on a preliminary study, the weighting factor employed is simply the technological number of the machine in reverse order. Based on this, if i_abj is the idle time between jobs a and b on machine j, the total weighted idle time I_ab between jobs a and b in a partial schedule is calculated as:

$I_{ab} = \sum_{j=2}^{m} i_{abj}\, (m - j + 1).$   (1)
The factor (m - j + 1) gives a higher penalty to the idle times on the first machines as
any idle times on these machines tend to delay operations on successive machines. It may
also be noted that I_ab is dynamic, as it depends on the actual idle times between jobs a and b. Each idle time i_abj is dynamic and depends on the already fixed partial schedule ending with job a.
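One consistent way to compute the idle times i_abj, matching the numerical illustration later in this chapter, is to take the gap (if any) between the completion of job a and the earliest possible start of job b on each machine. The following C sketch computes I_ab of equation (1) in this way; the function and variable names are illustrative assumptions.

```c
#include <stdio.h>

#define MAXM 20

/* Total weighted idle time of equation (1): I_ab = sum_{j=2..m} i_abj * (m - j + 1).
   Ca[j] holds the completion times of the last scheduled job a (all zero when the
   sequence is still empty); pb[j] holds the processing times of candidate job b.
   Machines are indexed 0..m-1 here, so machine j of the text is index j-1. */
long weighted_idle_time(int m, const long Ca[], const long pb[])
{
    long Cb_prev = Ca[0] + pb[0];   /* completion time of job b on machine 1 */
    long total = 0, idle;
    int j;

    for (j = 1; j < m; j++) {                     /* machines 2..m */
        idle = Cb_prev - Ca[j];                   /* gap before b can start on this machine */
        if (idle < 0) idle = 0;
        total += idle * (long)(m - j);            /* weight (m - j + 1) with 1-based machine numbers */
        Cb_prev = (Cb_prev > Ca[j] ? Cb_prev : Ca[j]) + pb[j];   /* equation (4) */
    }
    return total;
}

int main(void)
{
    /* Job 2 considered after job 3 in the numerical illustration of this chapter:
       C(3,j) = 20, 119, 183 and p_2j = 43, 89, 59 give I_32 = 25. */
    long Ca[MAXM] = {20, 119, 183}, pb[MAXM] = {43, 89, 59};
    printf("I = %ld\n", weighted_idle_time(3, Ca, pb));
    return 0;
}
```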
Then, to identify the jobs having a stronger tendency to progress from shorter to longer processing times in the sequence of processes, a simple static slope index, otherwise called the job weight factor, is introduced. This factor is calculated for each job as follows:

$W_b = \sum_{j=2}^{m} (j - 1)\, p_{bj}.$   (2)
In the above formula for the slope index, (j - 1) is selected as the weighting factor for the following reasons.

(1) The factor (j - 1) gives zero weight to the processing time on the first machine. This is appropriate, as the processing time on the first machine has no effect on the end-slope of the job time-block profile. As long as the idle time induced by these processing times is measured by I_ab (equation 1), it is not necessary to consider the processing times on the first machine when calculating the slope index. This is one reason for not using Palmer's or Gupta's priority function to represent slope for this heuristic.

(2) The (j - 1) factor gives more weight to the processing times on the later machines, and hence jobs with higher W_b tend to get higher priority according to the extent to which their processing times increase with the machine number in the processing order.
After identifying the idle time function I_ab and the slope index or job weight function W_b as stated above, the priority function P_ab is calculated as the ratio of I_ab to W_b:

$P_{ab} = I_{ab} / W_b.$   (3)

This is because the purpose is to reduce idle time and simultaneously select a job that has a tendency to progress from short to long processing times in its passage through the machines, thereby enhancing the chances of better slope matching for the successive jobs.
If there are any ties in the P_ab ratio, then priority is given to the job with the maximum W_b. By doing this, a better chance to reduce the idle times i_abj in the next iteration is provided.
Once the above functions are calculated, a simple heuristic is used to build up a single-chain sequence: every time another job is added to the end of the chain, the end profile is determined and used for matching against the jobs under consideration. Once the successor job b is chosen, its completion time C(b,j) on machine j is determined from the following recursive relation:

$C(b,j) = \max\{\, C(a,j),\; C(b,j-1) \,\} + p_{bj}.$   (4)

The process is repeated until all the jobs are fixed and a complete sequence is found.
It can be observed that only one sequence is generated during this iterative procedure.
The Proposed Algorithm
The step by step procedure for the proposed algorithm is as follows:
Step 1: For each job b, calculate W_b using equation 2. It may be noted that W_b is static over all the iterations.

Step 2: Set the counter N = n, where n is the number of jobs to be sequenced.

Step 3: For each possible job b that can be processed after the last fixed job a in the partial schedule, calculate i_abj (there is no job a in the first iteration, so calculate the idle time i_0bj from time 0 on all the machines) as if each unscheduled job b were to be processed after job a. Then calculate I_ab for each unscheduled job using equation 1.

Step 4: Find the ratio P_ab = I_ab/W_b for each unscheduled job b.

Step 5: Schedule the job with the minimum ratio P_ab for processing immediately after a. Break ties in favor of the job with the higher job weight (slope index) W_b.
Step 6: Set N = N - 1.
Step 7: If N = 1, stop and assign the remaining job to the last position; otherwise update all C(a,j) and go to step 3.
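Putting the steps together, a minimal C sketch of the procedure is shown below. The array bounds, zero-based indexing, and names are illustrative assumptions; ties in step 5 are broken in favor of the larger W_b, as stated. For the numerical illustration of the next section it produces the sequence 3-1-2-4 with makespan 329.

```c
#include <stdio.h>

#define MAXJ 100
#define MAXM 20

/* Repeatedly append the unscheduled job b that minimizes P_ab = I_ab / W_b,
   where I_ab is the weighted idle time of equation (1) and W_b the job weight
   of equation (2); ties go to the larger W_b.  Returns the final makespan and
   the sequence built in seq[]. */
long proposed_heuristic(int n, int m, long p[][MAXM], int seq[])
{
    long W[MAXJ], Ca[MAXM] = {0}, Cb[MAXM], Iab, bestW = 0;
    double ratio, best_ratio = 0.0;
    int used[MAXJ] = {0}, b, j, k, pick;

    for (b = 0; b < n; b++) {                         /* step 1: static job weights */
        W[b] = 0;
        for (j = 1; j < m; j++) W[b] += (long)j * p[b][j];   /* (j - 1) with 1-based j */
    }

    for (k = 0; k < n; k++) {                         /* steps 3-7: build the chain  */
        pick = -1;
        for (b = 0; b < n; b++) {
            if (used[b]) continue;
            /* idle times i_abj and weighted total I_ab for candidate b (step 3) */
            Iab = 0;
            Cb[0] = Ca[0] + p[b][0];
            for (j = 1; j < m; j++) {
                long idle = Cb[j - 1] - Ca[j];
                if (idle < 0) idle = 0;
                Iab += idle * (long)(m - j);
                Cb[j] = (Cb[j - 1] > Ca[j] ? Cb[j - 1] : Ca[j]) + p[b][j];
            }
            /* steps 4-5: pick the smallest ratio, ties to the larger W_b */
            ratio = (double) Iab / (double) W[b];
            if (pick < 0 || ratio < best_ratio
                || (ratio == best_ratio && W[b] > bestW)) {
                pick = b; best_ratio = ratio; bestW = W[b];
            }
        }
        used[pick] = 1;
        seq[k] = pick;
        Ca[0] += p[pick][0];                          /* update C(a,j), equation (4) */
        for (j = 1; j < m; j++)
            Ca[j] = (Ca[j - 1] > Ca[j] ? Ca[j - 1] : Ca[j]) + p[pick][j];
    }
    return Ca[m - 1];
}

int main(void)
{
    /* four-job, three-machine illustration of this chapter */
    long p[MAXJ][MAXM] = { {99,46,65}, {43,89,59}, {20,99,64}, {49,2,16} };
    int seq[MAXJ], i;

    /* prints: makespan 329, sequence: 3 1 2 4 */
    printf("makespan %ld, sequence:", proposed_heuristic(4, 3, p, seq));
    for (i = 0; i < 4; i++) printf(" %d", seq[i] + 1);
    printf("\n");
    return 0;
}
```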
For a better understanding of this new procedure, a simple example is illustrated in
the next section.
A Numerical Illustration

To illustrate the procedure outlined above, the four-job, three-machine flow shop problem given below is solved.
Processing times p_ij and job weights W_b for the illustration:

Job    Machine 1    Machine 2    Machine 3    W_b
1      99           46           65           176
2      43           89           59           207
3      20           99           64           227
4      49           2            16           34
W_b for each job is calculated and listed above. The iterations to solve the problem are as follows. In each ratio, the idle times i_abj appear first with their multiplication factors (m - j + 1), the denominator is the job weight W_b, and the job selected in each iteration is indicated by the symbol <--.

Iteration 1 (first position):
P_01 = (99 * 2 + 145)/176 = 1.95
P_02 = (43 * 2 + 132)/207 = 1.05
P_03 = (20 * 2 + 119)/227 = 0.70  <--
P_04 = (49 * 2 + 51)/34 = 4.38

Iteration 2 (job scheduled after job 3):
P_31 = (0 * 2 + 0)/176 = 0.00  <--  (tie with job 4 broken by the larger W_b)
P_32 = (0 * 2 + 25)/207 = 0.12
P_34 = (0 * 2 + 0)/34 = 0.00

Iteration 3 (job scheduled after job 1):
P_12 = (0 * 2 + 6)/207 = 0.03  <--
P_14 = (3 * 2 + 0)/34 = 0.18

The remaining job, job 4, is assigned to the last position.
The solution obtained is the job sequence 3-1-2-4 with a makespan of 329 (Figure 3). The
solutions obtained using other well-known heuristics are as follows:
Heuristic            Sequence    Makespan
Palmer               3-2-4-1     348
CDS                  3-2-1-4     348
King and Spachis     3-2-4-1     348
NEH                  3-4-2-1     334
Figure 3. Solution obtained by proposed algorithm for the numerical illustration.
The run-in delay, between-job delay, and run-out delay in the schedule are illustrated in Figure 3 for the solution of the proposed heuristic. Palmer's heuristic and the King and Spachis method yielded the same solution, which is illustrated in Figure 4. From both of these figures, it can be noticed that all the above-mentioned delays are at their minimum for the solution obtained by the new method.
This algorithm is thoroughly tested, both statistically and empirically, for its performance. A detailed description of the computational experience with the proposed
algorithm is given in the next section.
Figure 4. Solution obtained by both Palmer's and King and Spachis' algorithms for the numerical illustration.
CHAPTER IV
COMPUTATIONAL EXPERIENCE
The proposed algorithm is tested experimentally against the results obtained from the Palmer, CDS, and NEH heuristics. The first criterion for choosing these heuristics for comparison is their accuracy in giving a solution which is optimal or near-optimal. Park et al. (1984) examined the accuracy of these heuristics. Due consideration is also given to the complexity of the algorithms, which span a wide range of computational complexity and number of sequences generated. The computational time required by each algorithm to solve a problem is another important criterion in choosing these heuristics: they cover a wide range in both computation time and accuracy. The results are analyzed using both statistical and nonstatistical methods.
Following is a summary of the tests performed. The criteria analyzed are makespan accuracy (statistical tests, empirical comparisons, and graphs), computational efficiency (CPU time measurements and graphs), and computational effectiveness (accuracy versus CPU time graphs).
The test pattern is designed similarly to some of the well-known evaluations done by other researchers (Dannenbring, 1977; Park et al., 1984; and Nawaz et al., 1983). All the programs are written in Turbo C and run on an IBM 386 machine. Statistical tests are performed using Lotus spreadsheets.
Test Problems
Determination of a sample size (number of random problems to be considered for
each size) was necessary for the investigation. There was no specific method reported in the
literature to determine the required sample size for scheduling problems. However, Baker (1974) conducted an experiment to observe the effect of different sample sizes on the test statistics for scheduling problems. He reported that the higher the number of samples taken, the better the accuracy of the test statistic. Since a large sample size is preferable, a sample size of 200 problems is chosen for each problem size in our experiments. A sample size in this range is supported by earlier investigators (Nawaz et al., 1983, for example). Two hundred problems for each size and twenty-eight different sizes of flow shop problems, with 4, 6, 10, or 20 machines and 6, 10, 20, 30, 50, 75, or 100 jobs, are employed. All four heuristics considered are evaluated by simulating their performance on these
5,600 randomly generated problems. The same problems are used for all the different
heuristics. The processing times are randomly generated and uniformly distributed over the
interval (1, 99). Evidence exists that the uniform distribution provides more difficult
problems to solve (Campbell et al., 1970).
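The test instances described above can be generated as in the following sketch; the seed handling, bounds, and names are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAXJ 100
#define MAXM 20

/* Fill p[i][j] with processing times drawn uniformly from the integers 1..99,
   as used for the randomly generated test problems. */
void generate_problem(int n, int m, long p[][MAXM])
{
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            p[i][j] = 1 + rand() % 99;
}

int main(void)
{
    long p[MAXJ][MAXM];

    srand((unsigned) time(NULL));
    generate_problem(20, 10, p);                 /* one 10-machine, 20-job instance */
    printf("p[0][0] = %ld\n", p[0][0]);
    return 0;
}
```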
The details of the results and the analysis of the tests are presented in the following
section.
Makespan Accuracy
For each size, the average makespan obtained over the 200 runs for each of the four
methods is calculated and tabulated (Table 2). To observe the relative performance in terms of makespan accuracy, for each of the 4-, 6-, 10-, and 20-machine problems and for the different numbers of jobs considered, the ratio of the average makespan of each heuristic to the average makespan of the NEH heuristic is graphed (see Figures 5, 6, 7, and 8). NEH is taken as the baseline because the average makespan of NEH happened to be the lowest for all problem sizes. These graphs indicate that when the number of jobs is more than approximately twice the number of machines, the proposed heuristic performs better than the Palmer and CDS heuristics. It can also be seen that for a large number of jobs, the new method appears to give solutions which are very close to those given by NEH, with many times less computational effort than the NEH algorithm requires.
To compare the makespan performance statistically, Dunnett's parametric test and
Friedman's nonparametric test are employed to analyze the performance with respect to makespan accuracy.
Because of its effectiveness, the CDS heuristic was employed as the standard heuristic with which to compare the new heuristic and the two other heuristics. Park et al. (1984) compared sixteen heuristics employing CDS as the control (standard). Therefore, by using CDS as the standard for this investigation also, there is the added advantage of being able to compare the performance of the new algorithm with the other thirteen methods. For each problem size, the three treatment means (the average makespans of the proposed algorithm, the Palmer heuristic, and the NEH heuristic) are compared with the control (the average makespan of the CDS heuristic). See Montgomery (1976) for a detailed description of the method.

The results are tested at a 2% error rate. Table 3 gives the results of Dunnett's test. A heuristic having a lower mean makespan means that the heuristic is better than CDS in performance effectiveness.
Table 2. Makespan averages calculated for 200 replications for each method
Problem size mxn
New heuristic
497.91 714.76
644.63 871.33
894.30 1156.12 1728.85 2275.68 3350.82 4688.36 5966.93
1504.59 1809.86 2477.22 3116.00 4262.58 5673.92 7023.17
Palmer heuristic
497.59 720.47
641.66 881.20
894.06 1161.03 1763.91 2331.08 3422.77 4790.89 6086.23
1506.16 1812.98 2512.47 3161.74 4377.98 5836.10 7268.12
CDS heuristic
481.87 696.11
621.59 845.53
863.36 1117.56 1701.62 2284.33 3387.31 4788.00 6117.61
1461.85 1746.00 2421.83 3058.76 4275.95 5726.73 7153.38
NEH heuristic
473.92 676.52
613.86 819.40
855.05 1088.26 1613.35 2146.72 3174.79 4492.77 5762.73
1454.63 1710.37 2309.32 2883.30 3986.06 5322.83 6638.31
(Figures 5 through 8 plot, for the 4-, 6-, 10-, and 20-machine problems, the ratio of the average makespan of each heuristic to the average makespan of the NEH heuristic against the number of jobs.)
Table 3. Results of Dunnett's parametric test for makespan performance
problem size
Number of jobs
100
100
100
100
* - the standard heuristic (CDS)
0 0 0 + +
0 0 0 0
0 + + + + + +
0 + + + + + +
0 + + + + + +
0 + + + + + +
+ : the heuristic's mean makespan is lower than CDS's.
- : the heuristic's mean makespan is higher than CDS's.
0 : there is no significant difference between the heuristic's and CDS's mean makespan.
Dunnett's test results indicate that for the problems
where the number of jobs is less than approximately double the number of machines, CDS
performs better than the proposed algorithm. But as the number of jobs increases, the new
algorithm performs significantly better than CDS. The results also show that NEH is
always better than CDS except on the 6-job problems and Palmer is never better than CDS.
To evaluate statistically the ability of the proposed algorithm to produce better results (the number of better results irrespective of the magnitude of improvement), Friedman's nonparametric test is conducted for each problem size separately. See Conover (1980) for the details of the test. The assumptions for conducting this test are:

1. The results for a particular problem do not influence the results for another problem (the results within one block do not influence the results within the other blocks).

2. Within each block (each problem) the observations may be ranked according to some criterion of interest (minimum makespan in this case).
Hypotheses:

H0: Each ranking of the heuristics within each problem is equally likely.

H1: At least one of the heuristics tends to yield smaller observed values than at least one other heuristic.
The test is conducted at the 5% significance level, and it rejected the null hypothesis for all twenty-eight problem sizes. This supports the alternative hypothesis that at least one of the heuristics tends to yield smaller observed values than at least one other heuristic. Therefore, a multiple comparison test is performed, comparing each pair of the four heuristics at the 5% error level. The results are listed in Table 4. They indicate that the proposed algorithm performs better than Palmer's method in almost all cases and better than the CDS algorithm as the number of jobs increases. Friedman's test also indicated that NEH performs better than all three compared algorithms. It is observed that the
Table 4. Friedman's nonparametric multiple comparison test results

(Rows: the twenty-eight problem sizes; columns: the pairs of heuristics compared, for example i = New, j = Palmer; i = Palmer, j = NEH; i = CDS, j = NEH.)

Multiple comparisons:
+ = Treatment i tends to yield better observed values than treatment j.
0 = Treatment i and treatment j tend to yield equal observed values.
- = Treatment j tends to yield better observed values than treatment i.
rank sum statistics increased as the problem size increased. This means that the differences in makespans among the heuristics increased significantly as the problem size increased.
The solutions obtained from the proposed heuristic are subjected to empirical comparison with the results obtained from the Palmer, CDS, and NEH heuristics. Tables 5, 6, and 7 contain the empirical comparisons. The results show that, for a given number of machines, the proposed algorithm gives a higher number of better solutions and outperforms both the Palmer and CDS heuristics as the number of jobs increases. When compared to NEH, the performance in terms of the number of better solutions seems to decrease with an increasing number of jobs. It may be noted, however, that even though the number of better solutions given by NEH is increasing, the difference between the average makespan given by NEH and that of the new algorithm is decreasing for larger numbers of jobs.
Sensitivity Analysis
In another experiment, the relative performance of the new algorithm with respect to the NEH algorithm is subjected to a sensitivity analysis to study the effects of the number of machines and the number of jobs. Table 8 shows a breakdown of the difference between the average makespan of the new heuristic and that of the NEH heuristic for all the above-mentioned problem sizes. The most interesting trend is the apparent rise, peak, and decline of the relative difference measure as either the number of jobs increases with the number of machines held constant or the number of machines increases with the number of jobs held constant. This trend seems to indicate that the hardest problems for the new heuristic to solve (i.e., those most subject to error) are not the largest problems, but the intermediate-size ones, where the number of jobs is approximately equal to double the number of machines. These results are approximate, since the NEH solution value was assumed to be the optimal solution value.
Table 5. Empirical comparison of proposed algorithm with NEH algorithm
(The data below are grouped by number of machines, 4, 6, 10, and 20 in that order, each group covering 6, 10, 20, 30, 50, 75, and 100 jobs. Each group of four numbers gives: the number of problems, the number of times the solution by the proposed algorithm is better, the number of times the solutions are equal, and the number of times the solution by NEH is better.)
200 11 40 149 200 10 17 173 200 5 11 184 200 11 16 173 200 13 8 179 200 5 17 178 200 8 19 173
200 15 27 158 200 9 9 182 200 5 7 188 200 5 2 193 200 5 2 193 200 3 3 194 200 3 1 196
200 12 15 173 200 4 7 189 200 5 0 195 200 3 0 197 200 2 1 197 200 2 1 197 200 4 1 195
200 14 9 177 200 5 0 195 200 5 0 197 200 3 0 200 200 0 0 200 200 0 0 200 200 0 0 200
Table 6. Empirical comparison of proposed algorithm with CDS algorithm
(The data below are grouped by number of machines, 4, 6, 10, and 20 in that order, each group covering 6, 10, 20, 30, 50, 75, and 100 jobs. Each group of four numbers gives: the number of problems, the number of times the solution by the proposed algorithm is better, the number of times the solutions are equal, and the number of times the solution by CDS is better.)
200 46 32 122 200 57 17 126 200 83 5 112 200 108 10 82 200 119 3 78 200 107 7 86 200 115 9 76
200 34 23 143 200 63 3 134 200 93 2 105 200 112 1 87 200 127 0 73 200 149 2 49 200 154 2 44
200 38 12 150 200 42 4 154 200 70 1 129 200 109 1 90 200 128 1 71 200 154 0 46 200 164 0 36
200 20 11 169 200 27 2 171 200 58 0 142 200 62 0 138 200 124 0 76 200 136 1 63 200 159 0 41
Table 7. Empirical comparison of proposed algorithm with Palmer's algorithm
(The data below are grouped by number of machines, 4, 6, 10, and 20 in that order, each group covering 6, 10, 20, 30, 50, 75, and 100 jobs. Each group of three numbers gives, out of 200 problems: the number of times the solution by the proposed algorithm is better, the number of times the solutions are equal, and the number of times the solution by Palmer's algorithm is better.)
76 67 57 92 42 66
100 37 63 125 25 50 115 37 48 123 31 46 125 31 44
71 42 87 98 26 76
112 10 78 124 6 70 131 7 62 141 7 52 135 8 57
90 25 85 104 6 90 136 2 62 149 0 51 146 2 52 159 1 40 155 0 45
96 13 91 106 1 93 130 0 70 132 0 68 158 2 40 172 1 27 175 1 24
Table 8. Percentage relative difference from the NEH solution for the new heuristic by number of jobs and machines

Number of jobs    6 machines    10 machines    20 machines
6                 4.77          4.39           3.32
10                5.96          5.87           5.50
20                5.54          6.68           6.78
30                4.75          5.67           7.47
50                3.23          5.25           6.49
75                2.18          4.17           6.19
100               1.71          3.42           5.48
Computational Efficiency

For all the above problem sizes, the CPU times are noted, and the average processing times are calculated for each problem size and for each method. Table 9 shows the average CPU times for each problem size and method. Since the programs for each method are written with almost equal efficiency, the CPU times are assumed to represent the true computational efficiency of the heuristics. Figures 9, 10, 11, and 12 clearly show a very moderate increase in CPU time with the increase in the number of jobs for the Palmer, CDS, and proposed methods, and a very rapid increase in the case of NEH. Palmer's heuristic was the most attractive from the computational point of view. On the other hand, NEH is less attractive from an economic standpoint, even though it performs well in minimizing makespan.
Computational Effectiveness
As mentioned by Sanjoy (1986), computational effectiveness measures the accuracy of the heuristic versus the computational effort. The value of such an analysis is that it provides a measure of both computational effort and accuracy using an unbiased standard. The computational effectiveness for each size of problem is plotted and analyzed. This investigation clearly shows that, for a given number of machines, the effectiveness of the proposed algorithm increases as the number of jobs increases. Figures 13 through 19 show the effectiveness plots for all four heuristics for the 10-machine, n-job problems. Graphs for 4, 6, and 20 machines also show a similar trend. It can be noticed from these figures that the proposed algorithm is more effective for problems with a relatively large number of jobs.
Table 9. CPU time chart showing computational time in seconds taken by each method for each size of problem

(Columns: problem size m x n, new heuristic, Palmer heuristic, CDS heuristic, NEH heuristic.)
(Figures 9 through 12 plot CPU time in seconds against the number of jobs for the 4-, 6-, 10-, and 20-machine problems.)
Figure 13. Average computational effectiveness for 10 machine 6 job problems.
Figure 14. Average computational effectiveness for 10 machine 10 job problems.
Figure 15. Average computational effectiveness for 10 machine 20 job problems.
Figure 16. Average computational effectiveness for 10 machine 30 job problems.
Figure 17. Average computational effectiveness for 10 machine 50 job problems.
Figure 18. Average computational effectiveness for 10 machine 75 job problems.
Figure 19. Average computational effectiveness for 10 machine 100 job problems.
Analysis of the Results
The following conclusions can be drawn from the above comparative study of the solutions obtained by the different methods:

(1) The proposed algorithm is the most effective algorithm for problems where the number of jobs far exceeds the number of machines. This means that the proposed algorithm gives the best makespan accuracy for the computational effort expended.

(2) The proposed algorithm performs significantly better than Palmer's algorithm for almost all problem sizes.

(3) The proposed algorithm performs significantly better than CDS for problems with large n.

(4) Even though NEH is the least biased among the heuristics tested, for large problems its percentage improvement in makespan over the proposed algorithm is small.

(5) In terms of computational efficiency, the proposed algorithm takes more CPU time than Palmer or CDS, but the difference appears to be insignificant even for large problems. The computational time requirements of NEH appear to be restrictive for problems with a large number of jobs.

(6) The accuracy of the proposed algorithm seems to vary to some extent with the number of jobs. The worst performance in general was noticed for problem sizes where the number of jobs is approximately double the number of machines. In general, however, the makespan performance of the new heuristic improves as the number of jobs increases.
The next chapter presents the summary and conclusion of the research along with
recommendations for future research.
CHAPTER V
SUMMARY AND RECOMMENDATIONS

Summary
The main objectives of this research are to analyze, from the literature, the scheduling heuristics for the static permutation flow shop problem with makespan as the minimization criterion, to develop an effective algorithm for the permutation flow shop model, and to evaluate the developed heuristic with respect to some standard heuristics.

The underlying concepts behind well-known efficient heuristics for flow shop sequencing are investigated. The investigation revealed patterns and characteristics of some of the efficient heuristics. Based on this study, a set of rules is developed and subjected to preliminary tests. Finally, a new algorithm for the effective scheduling of jobs in a permutation flow shop in the static case is developed.
The proposed algorithm is based on the concepts of minimizing the real-time, sequence-dependent idle time and of giving priority to jobs according to the extent to which their processing times increase with the machine number in the processing order. The algorithm gives higher weights to idle times on earlier machines and builds up a single-chain sequence in which, every time another job is added to the end of the chain, the end profile is determined and used for matching against the jobs under consideration.
The proposed algorithm is evaluated thoroughly using empirical and statistical
methods and the results are compared with the solutions from Palmer, CDS and NEH
methods. A total of 5,600 problems with different sizes are employed for testing. CPU
times are recorded for each method and for each size of problem. The algorithm is
evaluated and compared with the above mentioned heuristics for makespan accuracy,
computational efficiency and computational effectiveness.
Conclusions

In this research, a new heuristic algorithm for the static case of the permutation flow shop is developed and evaluated. The algorithm is based on the concepts of reducing the idle times between jobs and of profile matching, by prioritizing jobs according to the extent to which their processing times increase with the machine number in the processing order. The results indicate that the new algorithm is the most effective for problems with a large number of jobs when compared to Palmer's heuristic, the CDS heuristic, or the NEH heuristic. The results also indicate that the proposed algorithm performs significantly better than Palmer's method and significantly better than the CDS heuristic in solving problems with a comparatively large number of jobs. The NEH heuristic gave better results in terms of makespan, but when the computational effort is also considered, it appears less attractive. Palmer's heuristic appears to be the most attractive of the four algorithms compared in terms of computational efficiency. The new algorithm improves in makespan performance as the number of jobs increases and gives results very close to the NEH solution. With the added advantage that its CPU time is very moderate and even comparable to that of either the Palmer or CDS methods, the proposed algorithm seems to be a very attractive proposition in situations where a large number of jobs must be sequenced effectively.
Recommendations for Future Research
As the manufacturing systems design is moving away from the job shop orientation,
and is moving towards a flow shop or hybrid job shop orientation, the importance of flow
shop-oriented research promises to increase in the future (Stafford, 1988). Although lot of
research has been done in the area of flow shop scheduling, there are many areas where
improvement is desired. More effective methods for solving the flow shop problem with
different measures of performance and multi-criteria are to be developed.
Specifically with respect to the formulation of the new solution method in this
research, although many ways of assigning weights to the idle times and slopes to the jobs are
tested and the best is selected, the list is not exhaustive, and it remains an open question
whether there are better ways of formulating the idle time and slope indices. The possible
application of the new method to the dynamic flow shop problem and to the flow shop problem
with precedence constraints has not been studied in detail; this could be an interesting
extension of this research.
There is not much literature on the successful application to the flow shop problem of
the latest methods for solving combinatorial problems, such as genetic algorithms and neural
networks. For example, if solutions from very efficient heuristics such as Palmer, Gupta, RA
and the proposed algorithm are taken and used as parent solutions to find an improved solution
with a genetic algorithm, an optimal solution might be reached within very few
iterations. Subjects like this might also be very promising for future research.
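As a rough illustration of this seeding idea, the sketch below assumes a very simple genetic algorithm: an order-based crossover, a swap mutation, and arbitrary population and generation settings. The seeds argument stands for the sequences returned by heuristics such as Palmer, Gupta, RA and the proposed algorithm, and makespan is the completion-time routine sketched earlier; none of these particular choices come from this research.

```python
import random

# A minimal sketch of seeding a genetic algorithm with heuristic sequences.
# Crossover, mutation and parameter choices are illustrative assumptions;
# `makespan` is the completion-time routine sketched earlier, and `seeds`
# would hold the Palmer, Gupta, RA and proposed-algorithm sequences.

def order_crossover(parent1, parent2):
    """Order-based crossover for permutations: copy a slice from one parent,
    fill the remaining positions in the order they appear in the other."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = parent1[a:b]
    fill = [job for job in parent2 if job not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def genetic_improve(seeds, p, generations=200, pop_size=30, mutation_rate=0.2):
    """Improve heuristic seed sequences for processing times p[job][machine]."""
    n = len(p)
    population = [list(s) for s in seeds]
    while len(population) < pop_size:                      # pad with random permutations
        population.append(random.sample(range(n), n))
    for _ in range(generations):
        population.sort(key=lambda seq: makespan(seq, p))  # lowest makespan first
        survivors = population[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:            # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda seq: makespan(seq, p))
```

Whether such a seeded search actually reaches an optimal solution in few iterations is, of course, exactly the kind of question that the suggested future research would have to answer.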
REFERENCES
Aggarwal, S. C. and Stafford, E. F. "A heuristic algorithm for the flowshop problem with a common job sequence on all machines," Decision Sciences, 1975, 6, 237-251.
Baker, K. R., Introduction to Sequencing and Scheduling, John Wiley and Sons, Inc., New York, 1974.
Bonney, M. C. and Gundry, S. W. "Solutions to the constrained flow shop sequencing problem," Operational Research Quarterly, 1976, 27, No. 4.
Campbell, H. G., Dudek, R. A. and Smith, M. L. "A heuristic algorithm for the n job, m machine sequencing problem," Management Science, 1970, 16, No. 16.
Conover, W. J. Practical nonparametric statistics, John Wiley & Sons, New York, 1980.
Conway, R. W., Maxwell, W. L., and Miller, L. W. Theory of Scheduling, Addison-Wesley, Reading, MA, 1967.
Dannenbring, G. D. "An evaluation of flow shop sequencing heuristics," Management Science, 1977, 23, No. 11.
Dudek, R. A. and O. F. Teuton, Jr. "Development of M-stage decision rule for scheduling n jobs through m machines," Operations Research, 1964, 12, No. 3.
Elmaghraby, S. E. "The machine sequencing problem -- review and extensions," Naval Research Logistics Quarterly, 1968, 15, 205-232.
Garey, M. R., Johnson, D. S. and Sethi, R. "The complexity of flowshop and jobshop scheduling," Mathematics of Operations Research, 1976, 1, No. 2.
Gelders, L. F. and Sambandam, N. "Four simple heuristics for scheduling a flowshop," International Journal of Production Research, 1978, 16, No. 3.
Gupta, J. N. D. "A functional heuristic algorithm for the flow-shop scheduling problem," Operational Research Quarterly, 1971, 22, No. 1.
Gupta, J. N. D., Smith, M. L., Martz, H. F., Jr. and Dudek, R. A. "Sequencing Research Report," QT-1030 68, Texas Tech University, Lubbock, Texas, 1968.
Hargrave, W.W. and Nemhauser, G.L. "A geometric model and a graphical algorithm for a sequencing problem," Operations Research, 1963, 11, 889-900.
Hundal, T. S. and Rajgopal, J. "An extension of Palmer's heuristic for the flow shop scheduling problem," International Journal of Production Research, 1988, 26, 1119-1124.
Ignall, E. and Schrage, L. E. "Application of the branch and bound technique to some flow shop scheduling problems," Operations Research, 1965, 13, No. 3.
Johnson, S. M. "Optimal two and three-stage production schedules with setup times included," Naval Research Logistics Quarterly, 1954, 1, 61-68.
Karp, R. M. "On the computational complexity of combinatorial problems," Networks, 1974, 5, 45-68.
King, J. R. and Spachis, A. S. "Heuristics for flow-shop scheduling," International Journal of Production Research, 1980, 19, No. 3.
Kohler, W. H. and Steiglitz, K. "Exact, approximate and guaranteed accuracy algorithms for the flowshop problem n/2/F/F," Journal of the Association for Computing Machinery, 1975, 22, 106-114.
Kumar, S. "Computational effectiveness of flow shop heuristics," Master's thesis, Texas Tech University, 1986.
Lenstra, J. K. "Sequencing by enumeration methods," Mathematisch Centrum, Amsterdam, 1977.
Lomnicki, Z. "A branch and bound algorithm for the exact solution of the three-machine scheduling problem," Operational Research Quarterly, 1965, 16, No. 1.
Montgomery, D. C. Design and Analysis of Experiments, John Wiley & Sons, New York, 1991.
Nawaz, M., Enscore, E. E., and Ham, I. "A heuristic algorithm for the m-machine, n-job flow shop sequencing problem," OMEGA, International Journal of Management Science, 1983, 11, No. 1.
Page, E. S. "An approach to the scheduling of jobs on machines," Journal of the Royal Statistical Society, 1961, 323-484.
Palmer, D. S. "Sequencing jobs through a multi-stage process in the minimum total time -- a quick method of obtaining a near optimum," Operational Research Quarterly, 1965, 16, No. 1.
Panwalkar, S. S. and Khan, A. W. "An improved branch and bound procedure for n x M flow shop problems," Naval Research Logistics Quarterly, 1975, 22, 787-790.
Park, Y. B. "A simulation study and an analysis for evaluation of performance-effectiveness of flowshop sequencing heuristics: a static and a dynamic flowshop model," Master's thesis, Pennsylvania State University, 1981.
Park, Y. B., Pegden, C. D., and Enscore, E. E. "A survey and evaluation of static flowshop scheduling heuristics," International Journal of Production Research, 1984, 22, No. 1.
Petro, V. A. Flow line group production planning, London: Business Publications Ltd., 1966.
Selen, W. J. and Hott, D. D. "A mixed integer goal-programming formulation of a flow-shop scheduling problem," Journal of the Operational Research Society, 1986, 37, 1121-1128.
Smith, R. D. and Dudek, R. A. "A general algorithm for solution of the n-job, M-machine sequencing problem of the flow shop," Operations Research, 1967, 15, No. 1.
Stafford, E. F. "On the development of a mixed-integer linear programming model for the flowshop sequencing problem," Journal of the Operational Research Society, 1988, 39, No. 12.
Wagner, H. M. "An integer linear-programming model for machine scheduling," Naval Research Logistics Quarterly, 1959, 6, 131-140.
Wilson, J. M. "Alternative formulations of a flow-shop scheduling problem," Journal of the Operational Research Society, 1992.