
Local search heuristics for two-stage flow shop problems with secondary criterion


Computers & Operations Research 29 (2002) 123-149

Jatinder N.D. Gupta (a,*), Karsten Hennig (b), Frank Werner (b)

(a) Department of Management, Ball State University, Muncie, IN 47306, USA
(b) Otto-von-Guericke-Universität, Fakultät für Mathematik, 39106 Magdeburg, Germany

Received 1 January 1999; received in revised form 1 March 2000

* Corresponding author. Tel.: +1-765-285-5301; fax: +1-765-285-8024. E-mail address: jgupta@bsu.edu (J.N.D. Gupta). Supported by Deutsche Forschungsgemeinschaft (Project ScheMA).

Abstract

This paper develops and compares different local search heuristics for the two-stage flow shop problem with makespan minimization as the primary criterion and the minimization of either the total flow time, total weighted flow time, or total weighted tardiness as the secondary criterion. We investigate several variants of simulated annealing, threshold accepting, tabu search, and multi-level search algorithms. The influence of the parameters of these heuristics and of the starting solution is empirically analyzed. The proposed heuristic algorithms are empirically evaluated and found to be relatively more effective in finding better quality solutions than the existing algorithms.

Scope and purpose

Traditional research to solve multi-stage scheduling problems has focused on a single criterion. However, in industrial scheduling practice, managers develop schedules based on multiple criteria. Scheduling problems involving multiple criteria require significantly more effort in finding acceptable solutions and hence have not received much attention in the literature. This paper considers one such multiple criteria scheduling problem, namely, the two-machine flow shop problem where the primary criterion is the minimization of makespan and the secondary criterion is one of the three most popular performance measures, namely, the total flow time, total weighted flow time, or total weighted tardiness. Based on the principles of local search, the development of heuristic algorithms that can be adapted for several multi-criteria scheduling problems is discussed. Using the example of the two-machine flow shop problem with secondary criterion, computational experiments are used to evaluate the utility of the proposed algorithms for solving scheduling problems with a secondary criterion. © 2001 Elsevier Science Ltd. All rights reserved.

0305-0548/01/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0305-0548(00)00061-7


Keywords: Flow shop scheduling; Secondary criterion; Local search; Tabu search; Threshold accepting; Simulated annealing; Multi-level search; Empirical evaluation

1. Introduction

Consider the following scenario: a set N = {1, 2, ..., n} of n simultaneously available jobs is to be processed on two machines, where each job i ∈ N requires processing on machine 1 first and then on machine 2. A job once started on a machine must be completed on that machine without interruption (i.e., no preemption is allowed). For each job i, let a_i and b_i be the processing times of job i ∈ N at the first and second machine, respectively, and let d_i be its due date. The completion time C_{π(i)} of the job π(i) at sequence position i in schedule π = (π(1), ..., π(n)) is calculated by using the following relationship:

    C_{π(i)} = max_{1 ≤ k ≤ i} ( Σ_{j=1}^{k} a_{π(j)} + Σ_{j=k}^{i} b_{π(j)} ).

For the above scenario, it is required to find a schedule π = (π(1), ..., π(n)) such that a secondary criterion C2 is optimized subject to the condition that the primary criterion C1 is optimal.
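For illustration, the completion times and all criteria considered later can be computed for a given sequence by a single forward pass, which is equivalent to the formula above. The following Python sketch is illustrative only (it is not taken from the paper); jobs are assumed to be indexed 0, ..., n-1, with processing times a, b, due dates d and weights w given as lists.

```python
def evaluate(seq, a, b, d, w):
    """Return (C_max, sum C_i, sum w_i*C_i, sum w_i*T_i) for the job order seq.

    a[j], b[j]: processing times of job j on machines 1 and 2,
    d[j]: due date, w[j]: weight.  Uses the forward recursion
    C1 += a[j];  C2 = max(C2, C1) + b[j].
    """
    c1 = c2 = 0
    flow = wflow = wtard = 0
    for j in seq:
        c1 += a[j]                      # completion on machine 1
        c2 = max(c2, c1) + b[j]         # completion on machine 2
        flow += c2
        wflow += w[j] * c2
        wtard += w[j] * max(0, c2 - d[j])
    return c2, flow, wflow, wtard


# tiny example with three jobs (values chosen arbitrarily)
a, b = [3, 1, 2], [2, 4, 2]
d, w = [6, 7, 9], [1, 2, 1]
print(evaluate([1, 0, 2], a, b, d, w))   # -> (9, 21, 26, 1)
```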

The scheduling problem representing the above scenario is a specific case of the multicriteria scheduling problems that can be modeled in three different ways. First, when the criteria are equally important, we can generate all the efficient solutions for the problem. Then, by using multi-attribute decision methods, trade-offs are made between these solutions. Second, when the criteria are weighted differently, we can define an objective function as the sum of weighted functions and transform the problem into a single criterion scheduling problem. Finally, when there is a hierarchy of priority levels for the criteria, we can first solve the problem for the first priority criterion, ignoring the other criteria, and then solve the same problem for the second priority criterion under the constraint that the optimal solution of the first priority criterion does not change. This procedure is continued until we solve the problem with the last priority criterion as the objective and the optimal solutions of the other criteria as the constraints. The literature on multiple and bicriteria problems for single machine problems is summarized by Dileepan and Sen [1], Fry et al. [2], Hoogeveen [3] and Lee and Vairaktarakis [4]. Nagar et al. [5] provide a detailed survey of the multiple and bicriteria scheduling research involving multiple machines.

For a scheduling problem with two criteria of interest, if one of the first two approaches is required, we call the problem a bicriteria scheduling problem. If the third approach is required, we term the problem a secondary criterion problem. Using the standard three-field notation [6], a bicriteria scheduling problem for finding all efficient solutions can be represented as α|β|F(C1, C2), where α denotes the machine environment, β corresponds to the deviations from standard scheduling assumptions, and F(C1, C2) indicates that efficient solutions relative to criteria C1 and C2 are being sought. If the bicriteria scheduling problem involves the sum of weighted values of two objective functions, the problem is denoted as α|β|F_l(C1, C2), where F_l(C1, C2) represents the weighted sum of the two criteria C1 and C2. Similarly, a secondary criterion problem can be denoted as α|β|F_h(C2/C1), where C1 and C2 denote the primary criterion and secondary criterion, respectively, and the notation F_h(C2/C1) represents the hierarchical optimization of criterion C2 given that criterion C1 is at its optimal value.


Related to our scenario, the F2||C_max problem can be solved in O(n log n) computational time by using Johnson's rule [7]. However, the F2||C2 problem is NP-hard in the strong sense for all other measures of performance [8]. Using the latter result, Chen and Bulfin [9] show that the F2||F_h(C2/C_max) problem is NP-hard in the strong sense, where C2 is any measure of performance other than makespan. A forward branch and bound algorithm for the F2||F_h(ΣC_i/C_max) problem is developed by Rajendran [10]. However, Rajendran's algorithm cannot efficiently solve problems involving more than 10 jobs. Rajendran [10] developed two heuristics for the F2||F_h(ΣC_i/C_max) problem and tested their effectiveness in solving problems involving 25 or fewer jobs. Neppalli et al. [11] developed a genetic algorithm to solve the F2||F_h(ΣC_i/C_max) problem. A systematic procedure to design a tabu search algorithm is described and tested by Gupta et al. [12]. Constructive algorithms for this problem are given by Gupta et al. [13].

This paper develops and compares various local search heuristics to find feasible and optimal or near-optimal solutions for the F2||F_h(ΣC_i/C_max), F2||F_h(Σw_iC_i/C_max) and F2||F_h(Σw_iT_i/C_max) problems. A feasible solution to the problems considered here is one for which the primary criterion C_max is at its minimum value. An optimal schedule is a feasible solution for which the secondary criterion value is as small as possible. Our discussion includes and improves upon the known local search algorithms for the F2||F_h(ΣC_i/C_max) problem and shows that the proposed improvements are quite effective in minimizing the secondary criteria considered.

The rest of the paper is organized as follows. Section 2 describes the neighborhoods applied in the local search algorithms. Using the standard variants from the literature as well as modifications and generalizations of known procedures, Section 3 describes different types of local search heuristics to find approximate solutions for the scheduling problems considered here. The computational results of empirical experiments are given in Section 4. For the problems considered, no benchmark problems exist, and strong lower bounds on the optimal function values are neither available nor expected to be obtained. Therefore, we compare the effectiveness of the different algorithms relative to each other and their dependence on the starting solution and the number of iterations allowed. For small size problems and C2 = ΣC_i, we also compare the heuristics with the optimal objective function value obtained by a branch and bound algorithm. Finally, Section 5 provides conclusions of this research and suggests some directions for future research.

2. Local search

In a local search algorithm, we start with an initial solution p and generate a neighbor p' of p. The set of neighbors that can be reached in one step is denoted as N(p). In an iterative improvement algorithm, only moves from p to p' ∈ N(p) which improve the objective function value are accepted. Often the whole set of neighbors is investigated, and then a move is made to the neighbor with the best objective function value if it is an improvement (best improvement). Sometimes, a generated neighbor with a better objective function value is immediately accepted (first improvement). The above procedure stops when no neighbor with a better objective function value exists, i.e., when a local optimum with respect to the neighborhood has been reached. Often neighbors are generated randomly. In this case, the procedure does not necessarily stop with a local optimum; however, the time-consuming procedure of investigating all neighbors can be avoided.
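The following Python sketch illustrates this basic descent scheme with both the best improvement and the first improvement variants; neighbors(p) and cost(p) are assumed helper functions supplied by the caller and are illustrative rather than part of any specific implementation.

```python
def iterative_improvement(p, neighbors, cost, best_improvement=True):
    """Descend to a local optimum by repeatedly moving to improving neighbors.

    neighbors(p) yields candidate solutions, cost(p) returns the objective
    value (illustrative interface).  With best_improvement=False the first
    improving neighbor found is accepted immediately.
    """
    current, current_cost = p, cost(p)
    improved = True
    while improved:
        improved = False
        chosen, chosen_cost = None, current_cost
        for q in neighbors(current):
            c = cost(q)
            if c < chosen_cost:
                chosen, chosen_cost = q, c
                if not best_improvement:   # first improvement: take it at once
                    break
        if chosen is not None:
            current, current_cost = chosen, chosen_cost
            improved = True
    return current, current_cost
```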


2.1. Neighborhoods

For the quality of a local search algorithm, the choice of a suitable neighborhood is of significant importance. First, we introduce some neighborhoods for a permutation problem, where the set of feasible solutions is given by the set of permutations of n jobs:

Shift (SH): In a permutation p = (p(1), p(2), ..., p(n)), select an arbitrary job p(i) and shift it to a smaller position j, j < i, or to a larger position k, k > i. Thus, we have |N(p)| = (n-1)². In some applications this neighborhood is used in a specialized version, where only right or left shifts of an arbitrary job are allowed for the generation of a neighbor. The shift neighborhood is sometimes also referred to as an insert neighborhood.

Pairwise interchange (PI): In permutation p, select two arbitrary jobs p(i) and p(j) (i ≠ j) and interchange them. The set N(p) contains n(n-1)/2 elements. Sometimes this neighborhood is referred to as a swap neighborhood.

Adjacent pairwise interchange (API): This is a special case of both the shift and the pairwise interchange neighborhood. In permutation p, two adjacent jobs p(i) and p(i+1) (1 ≤ i ≤ n-1) are interchanged to generate a neighbor p'. Thus, we have |N(p)| = n-1.

k-Pairwise interchange (k-PI): This is a generalization of the pairwise interchange neighborhood. A neighbor is generated from p by performing at most k consecutive pairwise interchanges.

(k_1, k_2)-Adjacent pairwise interchange neighborhood ((k_1, k_2)-API): In this neighborhood, a neighbor is generated from p by performing k successive adjacent pairwise interchanges, where k_1 ≤ k ≤ k_2.
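To make the neighborhood definitions concrete, neighbors in the SH, PI and API neighborhoods can be generated as in the following Python sketch (the function names are illustrative):

```python
import random

def shift_neighbor(p):
    """SH: remove the job at one position and reinsert it at another."""
    q = list(p)
    i, j = random.sample(range(len(q)), 2)
    q.insert(j, q.pop(i))
    return q

def pi_neighbor(p):
    """PI: interchange the jobs at two distinct positions."""
    q = list(p)
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

def api_neighbors(p):
    """API: all n-1 neighbors obtained by swapping adjacent positions."""
    result = []
    for i in range(len(p) - 1):
        q = list(p)
        q[i], q[i + 1] = q[i + 1], q[i]
        result.append(q)
    return result
```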

A neighborhood structure may be represented by a directed, an undirected, or a mixed graph. The set of vertices is the set of feasible solutions M, and there is an arc from p ∈ M to each neighbor p' ∈ N(p). In the case of p' ∈ N(p) and p ∈ N(p'), we replace both arcs by an undirected edge. We denote the graphs describing the above neighborhoods as G(SH), G(PI), G(API), G(k-PI) and G((k_1, k_2)-API), respectively. Obviously, all these graphs are undirected, i.e., these neighborhoods are symmetric. Note that the graphs representing the right and left shift neighborhoods are directed.

For the problem under consideration, only a subset of the permutations of n jobs is feasible, namely all job sequences with optimal C_max value. In this case, the set of neighbors N(p) of a feasible sequence in the shift neighborhood is the set of feasible sequences that can be obtained by a shift of a job. The following theorem gives an important property of the API neighborhood since it guarantees that a local search algorithm may reach a global optimum from an arbitrary starting solution.

Theorem 1. Let M be the set of feasible sequences for a problem F2||F_h(C2/C_max), where C2 = F ∈ {ΣC_i, Σw_iC_i, Σw_iT_i}. Then the graph G(API) is connected.

Proof. Let p = (p(1), ..., p(n)) and p' = (p'(1), ..., p'(n)) be two permutations with minimum makespan. Moreover, let i ≥ 1 be the smallest and j ≤ n be the greatest position on which different jobs are sequenced in p and p'. For N* = {p(i), ..., p(j)}, let p* = (p*(i), ..., p*(j)) be the Johnson sequence for the jobs of set N* and p̄ = (p(1), ..., p(i-1), p*(i), ..., p*(j), p(j+1), ..., p(n)). Then it is clear that there is a path in G(API) to p̄ from p as well as from p', such that all permutations (vertices) on the paths have a minimum makespan value. These permutations are obtained through


consecutive pairwise interchanges such that, after an interchange, the corresponding pair (k, l) of jobs satisfies the condition min{a_k, b_l} ≤ min{a_l, b_k}. By connecting both paths, from p to p̄ and from p̄ to p', we obtain the statement of the theorem. □

As a consequence of Theorem 1, all neighborhoods introduced above are connected. If in some local search algorithm a neighbor is generated which is not C_max-optimal, it is interpreted as a 'very bad' solution which is not accepted by any of the metaheuristics, and the search is continued from the current starting solution. In this way, each proposed algorithm considers only the C_max-optimal sequences.

2.2. Dominance criteria

Dominance criteria are useful tools to guide the search to better solutions. In the following, we give two such criteria from the literature (see [13]) which we will include in some local search algorithms.

Theorem 2. A schedule p = (p(1), p(2), ..., p(n)) satisfying the conditions

    min{a_{p(i)}, b_{p(i+1)}} ≤ min{a_{p(i+1)}, b_{p(i)}},   1 ≤ i ≤ n-1,      (1)

    a_{p(i)} ≤ a_{p(i+1)},   1 ≤ i ≤ n-1,      (2)

    b_{p(i)} ≤ b_{p(i+1)},   1 ≤ i ≤ n-1,      (3)

optimally solves problem F2||F_h(ΣC_i/C_max).

We incorporate the sufficient optimality criteria (1)-(3) into the local search algorithms as follows. Assume that a shift neighbor p' of sequence p has been generated by moving a job u to the right such that the new adjacent job to the right is job v, i.e., we have p' = (..., u, v, ...). Then we check conditions (1)-(3) for the jobs p(i) = v and p(i+1) = u. If these conditions are satisfied, we may interchange both jobs u and v without increasing the makespan and flow time values, i.e., sequence p' is dominated by the sequence (..., v, u, ...), and we have 'enlarged' the shift of job u by one position.

In a similar way, we apply this approach when a pairwise interchange of jobs p(i) = u and p(j) = v with i < j has been performed to generate p'. In this case we try to interchange job u with its current neighbor to the right and then job v consecutively with its left neighbor as long as the interchanges are justified by Theorem 2. This dominance criterion is denoted as DOM1.
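A constant-time check of conditions (1)-(3), as used by DOM1, may be sketched as follows (the helper name and interface are illustrative):

```python
def dom1_prefers_v_first(u, v, a, b):
    """Return True if conditions (1)-(3) of Theorem 2 hold for the adjacent
    pair (v, u), i.e. placing v immediately before u cannot increase the
    makespan or the total flow time compared with the order (u, v)."""
    return (min(a[v], b[u]) <= min(a[u], b[v])    # condition (1)
            and a[v] <= a[u]                       # condition (2)
            and b[v] <= b[u])                      # condition (3)
```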

We also consider the following stronger dominance criterion which includes Theorem 2 as a special case for C2 = ΣC_i.

Theorem 3. Let C2 = F ∈ {ΣC_i, Σw_iC_i, Σw_iT_i}, p' = (p(1), ..., p(i-1), p(i), p(i+1), p(i+2), ..., p(n)), p = (p(1), ..., p(i-1), p(i+1), p(i), p(i+2), ..., p(n)), and let σ and τ be the partial sequences of the first i+1 jobs of p' and p, respectively. If

    C_max(τ) ≤ C_max(σ) and F(τ) ≤ F(σ),

then C_max(p) ≤ C_max(p') and F(p) ≤ F(p') hold, i.e., sequence p' is dominated by sequence p.


This dominance criterion is denoted as DOM2 and will be applied only in the API neighborhood as described later (see the multi-level heuristics in Section 3.4). While DOM1 applies only to the ΣC_i criterion, DOM2 can be applied to all secondary criteria considered in this paper. We note that both dominance criteria can be checked in constant time for a pair of adjacent jobs.

2.3. Stopping criterion

The stopping criterion is the method used to terminate the search process. There are three common stopping criteria for local search algorithms: a maximum number of iterations or solutions reached, no improvement of the best solution for a specified number of iterations, or a maximum CPU time allowed to solve a problem. The second criterion may be more efficient in speed, but since the number of iterations with no improvement is affected by the complexity of the solution space and the problem size, a suitable number of iterations is difficult to determine. Using a limit on the maximum CPU time is partially captured in setting the maximum number of iterations. Therefore, and for the comparability of the different heuristics, the first stopping criterion is chosen. However, to investigate whether different C2 criteria, different starting solutions, different numbers of generated solutions, or different parameter settings lead to different solution values, we investigated the use of short, medium-size and long runs.

2.4. Initial solution

The initial solution is the point from which the local search procedure is started. This could be a solution obtained from a heuristic or generated randomly. Since a random solution may not satisfy the minimum makespan constraint, two heuristic algorithms that ensure a schedule with optimal makespan were used to generate the initial solutions for the local search heuristics. These heuristics are: Johnson's [7] O(n log n) algorithm for the F2||C_max problem (JOHN, for short), and the insertion algorithm (INS, for short) recommended by Gupta et al. [13].

The first heuristic can be interpreted as a 'bad' initial solution since no optimization is performed with respect to the secondary criterion, whereas the insertion algorithm INS has been found to be the best constructive heuristic among nine tested algorithms for the F2||F_h(ΣC_i/C_max) problem. As extensive tests have shown, this type of constructive algorithm also performs well for many other problems [14,15].

Algorithm INS

The insertion algorithm (INS) works as follows, where, for a partial sequence σ, J_σ̄ denotes Johnson's schedule of the jobs not contained in σ.

1. Let J = (j(1), ..., j(n)) be Johnson's schedule of all n jobs. Let π = (π(1), ..., π(n)) = J, C(π) = C(J) and F(π) = F(J). Set Λ = ∅ and r = 0. Let i = 1 and let σ = (σ(1)) = (j(1)) be the current partial sequence; further, let σ_u = (σ(1), ..., σ(u)) for 0 ≤ u ≤ i, where σ_0 is the empty sequence. Enter step 2.


2. For each of the (n-i) jobs k not contained in σ, generate the (n-i)·(i+1) partial sequences σ_u k (σ(u+1), ..., σ(i)), where k ∉ σ and 0 ≤ u ≤ i. For each generated partial sequence σ', if the complete sequence σ' J_σ̄' satisfies C(σ' J_σ̄') = C(J), set Λ = Λ ∪ {σ'} and r = r + 1. Enter step 3.

3. Let Λ = {σ_1, σ_2, ..., σ_r} be the set of partial sequences generated in step 2. For each q ≤ r, if F(σ_q J_σ̄_q) < F(π), set F(π) = F(σ_q J_σ̄_q) and π = σ_q J_σ̄_q. Let σ* ∈ Λ be a partial sequence with F(σ* J_σ̄*) = min_{q ≤ r} F(σ_q J_σ̄_q). Set σ = σ*, r = 0, and Λ = ∅. Enter step 4.

4. If i < n, set i = i + 1 and return to step 2; otherwise accept the schedule π with secondary criterion value F(π) and makespan C(π) as the solution of the problem.

The time requirement for algorithm INS is O(n⁴): O(n³) partial sequences are generated, and each can be completed and evaluated in O(n) time.
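The insertion idea can be sketched in Python as follows; this is an illustrative reconstruction of the scheme described above rather than the original implementation, and secondary(seq) stands for any of the C2 measures considered.

```python
def johnson(jobs, a, b):
    """Johnson's rule applied to the jobs in `jobs` (deterministic tie-break)."""
    first = sorted((j for j in jobs if a[j] <= b[j]), key=lambda j: (a[j], j))
    last = sorted((j for j in jobs if a[j] > b[j]), key=lambda j: (-b[j], j))
    return first + last

def makespan(seq, a, b):
    c1 = c2 = 0
    for j in seq:
        c1 += a[j]
        c2 = max(c2, c1) + b[j]
    return c2

def ins(a, b, secondary):
    """Sketch of INS: grow a partial sequence job by job, keep only insertions
    whose completion by Johnson's sequence of the remaining jobs preserves the
    optimal makespan, and follow the candidate whose completed sequence has
    the best secondary criterion value."""
    n = len(a)
    jobs = set(range(n))
    J = johnson(jobs, a, b)
    c_opt = makespan(J, a, b)
    best_full, best_val = J, secondary(J)
    partial = [J[0]]
    for _ in range(1, n):
        candidates = []
        for k in jobs - set(partial):
            for u in range(len(partial) + 1):
                trial = partial[:u] + [k] + partial[u:]
                full = trial + johnson(jobs - set(trial), a, b)
                if makespan(full, a, b) == c_opt:       # makespan-feasible insertion
                    candidates.append((secondary(full), trial, full))
        val, partial, full = min(candidates, key=lambda t: t[0])
        if val < best_val:
            best_full, best_val = full, val
    return best_full, best_val
```

For C2 = ΣC_i, the argument secondary could, for instance, sum the machine-2 completion times of all jobs of a sequence.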

3. Local search heuristics

We now describe various search heuristics for the problems considered here and discuss the various parameters that specify the design of each heuristic. Our discussion includes adaptations of known procedures as well as modifications and extensions specifically suitable for the problems in this paper.

3.1. Simulated annealing

Simulated annealing (SA) has its origin in statistical physics, where the process of cooling solids slowly until they reach a low energy state is called annealing. It was originally proposed by Metropolis et al. [16] and was first applied to combinatorial optimization problems by Kirkpatrick et al. [17] and by Cerny [18]. In such an algorithm, the sequence of objective function values does not necessarily decrease monotonically.

Starting with an initial sequence p, a neighbor p' is generated (usually randomly) in a certain neighborhood. Then the difference

    Δ = F(p') - F(p)

in the values of the objective function F is calculated. When Δ ≤ 0, sequence p' is accepted as the new starting solution for the next iteration. In the case of Δ > 0, sequence p' is accepted as the new starting solution with probability exp(-Δ/T), where T is a parameter known as the temperature.

Typically, in the initial stages, the temperature is rather high so that escaping from a local optimum in the first iterations is rather easy. After a certain number TCON of sequences has been generated, the temperature is usually decreased. Often, this is done by a geometric cooling scheme, which we will also apply. In this case, the new temperature T_new is chosen such that T_new = α·T_old, where 0 < α < 1 and T_old denotes the old temperature. An alternative cooling scheme was given, for instance, by Lundy and Mees [19].

A possible stopping criterion would then be a cycle at a final temperature which is sufficiently close to zero (for instance 0.01 as in [14]). Since we use a given number of generated solutions as the stopping criterion for all heuristics, we determine the reduction factor α of the geometric cooling scheme on the basis of the initial temperature T, the final temperature T = 0.01, and the number TCON of generated solutions with a constant temperature.
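The scheme just described can be sketched as follows; the interface is illustrative, and cost() is assumed to return a very large value for sequences that are not C_max-optimal so that such neighbors are never accepted.

```python
import math
import random

def simulated_annealing(p0, neighbor, cost, nsol, t_init, tcon):
    """Geometric-cooling SA sketch (illustrative interface).

    The reduction factor alpha is chosen so that the temperature reaches the
    final value 0.01 after nsol generated solutions, tcon of them being
    generated at each constant temperature level.
    """
    levels = max(2, nsol // tcon)
    alpha = (0.01 / t_init) ** (1.0 / (levels - 1))
    cur, cur_cost = p0, cost(p0)
    best, best_cost = cur, cur_cost
    t = t_init
    for it in range(1, nsol + 1):
        q = neighbor(cur)
        delta = cost(q) - cur_cost
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur, cur_cost = q, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        if it % tcon == 0:
            t *= alpha                       # geometric cooling
    return best, best_cost
```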


3.2. Threshold accepting

Threshold accepting (TA) was originally proposed by Dueck and Scheuer [20] and can be regarded as a deterministic variant of simulated annealing. The idea is to accept moves with a nonimproving objective function value not with a certain probability, but only if the increase in the objective function value of the neighbor does not exceed a given threshold value V. The threshold value is usually rather large in the initial stages to allow the search an adequate covering of the solution space, but it is then reduced as the algorithm progresses. The experimental work of Glass and Potts [21] shows that linear and quadratic reductions of the threshold value yield very similar results. In our tests, we considered a linear reduction of the value V.

As for simulated annealing, we generated a certain number VCON of solutions with a constant threshold value V, and the initial threshold value V was chosen such that, for all instances, the same maximum percentage deterioration of the objective function value is accepted. Based on the initial threshold value V, the value VCON, and the number of generated solutions, we linearly reduced the threshold value such that the final cycle with constant V is performed for V = 0. Note that iterative improvement is included as a special case when V = 0 (with the difference that moves to solutions with the same objective function value are also accepted).

In addition to decreasing thresholds, we also considered an adjusted scheme which always starts with V = 0. If during the last NCON generated solutions no neighbor has been accepted, we increase the value V to V + ΔV. If during the next NCON generated solutions again no neighbor has been accepted, we once more increase the value of V by ΔV. However, as soon as a neighbor has been accepted, we reset V = 0. Such a refined scheme allows moves to solutions with a worse objective function value only when it becomes difficult to find better neighbors and the danger increases that the search stagnates in a local optimum.
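The adjusted scheme can be sketched as follows; the parameter divisor defining ΔV is an assumption made for the sketch (cf. the threshold variants tested in Section 4), and the remaining names are illustrative.

```python
def threshold_accepting_var(p0, neighbor, cost, nsol, ncon, divisor=10000):
    """Variable threshold scheme: start with V = 0, raise V by
    delta_v = cost(current)/divisor whenever ncon consecutive neighbors have
    been rejected, and reset V to 0 as soon as a neighbor is accepted."""
    cur, cur_cost = p0, cost(p0)
    best, best_cost = cur, cur_cost
    v, rejected = 0.0, 0
    for _ in range(nsol):
        q = neighbor(cur)
        q_cost = cost(q)
        if q_cost - cur_cost <= v:
            cur, cur_cost = q, q_cost
            v, rejected = 0.0, 0              # reset the threshold on acceptance
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        else:
            rejected += 1
            if rejected % ncon == 0:
                v += cur_cost / divisor       # increase the threshold by delta_v
    return best, best_cost
```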

3.3. Tabu search

Tabu search's (TS) origin dates back to the 1960s and 1970s, and it was proposed in its present form by Glover [22]. The majority of the applications of TS started in the late 1980s [23]. One of the main ideas of TS, as its name suggests, is its use of a flexible memory (tabu list) to declare certain moves tabu for a period of time. In every iteration of TS, a move is assigned to the tabu list as soon as it is chosen to lead the search from the current solution to a neighbor solution. This move will then not be chosen for a number of immediately succeeding iterations. This number of iterations is determined by the tabu list size, which is limited to a certain length. When the list has reached its specified length, the move that was assigned to the list earliest is released from the list and the most recent move is inserted. With an appropriate design of the tabu list, TS is able to prevent cycling of the search, guide the search to solution regions which have not yet been examined, and approach good solutions in the solution space. However, the design of the tabu list may also prohibit the search from reaching appealing solution regions. To compensate for this disadvantage, Glover [22] suggested using the concept of an aspiration criterion, defined as follows: if a specific move that is currently tabu has the potential to lead the search to good solution regions, that move should be removed from the tabu list (aspired). We include the most common aspiration criterion, where a tabu move is removed from the tabu list if it can provide a better solution than the incumbent solution. Next, we discuss some parameters affecting the performance of TS.


The neighborhood size NSIZE represents the number of candidate solutions to be evaluated in each iteration of the search process. Tabu search uses two common types of neighborhood size. The first kind is to evaluate all possible neighbors and select the best nontabu solution in each iteration. This kind of examination may be suitable if the cardinality of the neighborhood is not too large. The quality of the solution obtained by this neighborhood examination may be good, but the diversification capability of TS may be affected. The second type is to evaluate only a fixed number of neighbors in one iteration. This type of neighborhood examination may improve the diversification capability of TS. In our experiments, we considered the SH and PI neighborhoods with a random generation of neighbors, and the API neighborhood with a complete examination of the neighborhood.

Past research [22] indicates that it is reasonable to set the list sizes to be constant. To describe the tabu list, we consider order and position attributes. We describe the application of these attributes to the SH and PI neighborhoods.

Order attribute

SH neighborhood: If a job i is shifted to the right immediately after job j by an accepted move, we add the pair (j, i) to the list. Analogously, if job i has been shifted to the left immediately before job k, we add the pair (i, k) to the list.

PI neighborhood: If two jobs i and j are interchanged by an accepted move, we add the pair (j, i) to the list.

Position attribute

SH neighborhood: If a job i from position k is shifted to another position, we store the pair (i, k).

PI neighborhood: If the jobs i and j from positions k and l are interchanged, we store the pairs (i, k) and (j, l).

If the API neighborhood is applied, we use the description of the tabu list given for the SH neighborhood. Applying the order attribute, a sequence is tabu if any of the ordered job pairs in the tabu list are sequenced in reverse order. A neighbor is tabu under the position attribute list if any job i is sequenced at a position k for which the pair (i, k) is contained in the tabu list (SH neighborhood), or if jobs i and j are interchanged back to positions k and l for which the pairs (i, k) and (j, l) had been added to the tabu list in one iteration (PI neighborhood).
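The use of the tabu list with the position attribute in the PI neighborhood can be sketched as follows; the interface is illustrative, and cost() is again assumed to return a very large value for sequences that are not C_max-optimal.

```python
from collections import deque
import random

def tabu_search_pi(p0, cost, n_iter, ls=3, nsize=None):
    """Tabu search sketch with random PI moves, position attribute, aspiration."""
    n = len(p0)
    nsize = nsize or n
    cur, cur_cost = list(p0), cost(p0)
    best, best_cost = list(cur), cur_cost
    tabu = deque(maxlen=ls)                       # constant-length tabu list
    for _ in range(n_iter):
        candidates = []
        for _ in range(nsize):                    # randomly generated PI neighbors
            k, l = random.sample(range(n), 2)
            q = list(cur)
            q[k], q[l] = q[l], q[k]
            # (job, position) pairs that this move would (re)create
            attr = frozenset({(q[k], k), (q[l], l)})
            candidates.append((cost(q), q, k, l, attr))
        candidates.sort(key=lambda t: t[0])
        for c, q, k, l, attr in candidates:
            if attr in tabu and c >= best_cost:   # tabu and not aspired
                continue
            # forbid re-seating the moved jobs at their old positions
            tabu.append(frozenset({(cur[k], k), (cur[l], l)}))
            cur, cur_cost = q, c
            if c < best_cost:
                best, best_cost = list(q), c
            break                                 # move to the chosen neighbor
        # if every candidate is tabu and not aspired, no move is made
    return best, best_cost
```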

We also tested a tabu search variant including long-term memory and search intensification in the following way. If, for a fixed number of iterations NTSITER, no improvement of the currently best objective function value has been obtained, we return to the best solution p and the corresponding tabu list and intensify the search in the neighborhood of this solution. In particular, we generate as neighbors only permutations obtained from p by an interchange of jobs p(i) and p(j) with |i - j| ≤ max (or, similarly, by a shift of restricted length), where max is a parameter to be chosen. In our tests, we experimented with max = n/4. After accepting the generated neighbor, we continue with tabu search in the whole neighborhood of the current starting solution.


If within the search again no improvement of the best solution has been found for NTSITER iterations, we return to the best solution obtained within the search after sequence p, intensify the search again as described above, and so on. We denote this tabu search variant as TS-intens.

3.4. Multi-level heuristics

Multi-level heuristics are local search procedures which apply different neighborhoods. Typically, when generating a neighbor, a neighborhood with 'large' changes is used (high-level neighborhood). Then, another neighborhood with 'small' changes (low-level neighborhood) is applied, within which local optimization is performed. This is similar to large-step optimization procedures, where one neighborhood is normally used but occasionally a move is made to a solution which makes 'large' changes in the structure of the current solution. Martin et al. [24] applied such an approach to the traveling salesman problem. The papers by Brucker et al. [25,26] showed that for single-machine, parallel-machine, and flow shop scheduling problems, local optimization within the low-level neighborhood can be done in a constructive way during the generation of a high-level neighbor. By applying structural properties of the problem considered, the high-level neighborhood is defined as the set of feasible solutions that are locally optimal with respect to some low-level neighborhood, often based on the API neighborhood. Lourenço [27] considered the job shop scheduling problem. In that paper, adjacent pairwise interchanges of two operations are considered to generate a neighbor, and a large step is defined by performing some adjacent pairwise interchanges consecutively, or by reoptimizing the sequence of operations on a machine by applying the shifting bottleneck procedure [28]. We extend the multi-level approach developed by Danneberg et al. [14], where the search is guided by three parameters V, h, and l. In the first step of an iteration, an 'acceptable' high-level neighbor is generated. In the ideal case, the first generated high-level neighbor has an objective function value which is not more than V = F(p)/TH over the objective function value of the current starting solution p, where TH is a given constant. However, if h high-level neighbors have been generated and none of them satisfies the above condition, the best of them is accepted as the high-level neighbor.

From the accepted high-level neighbor, iterative improvements are performed in the low-level neighborhood. To avoid a complete investigation of the low-level neighborhood, the maximum number of low-level neighbors generated in one iteration is limited by the parameter l. The sequence obtained is taken as the generated (iterative) neighbor. The choice of an appropriate high-level neighborhood is of particular importance for the problems considered here since changes that are too 'large' would often lead to sequences that are not C_max-optimal. In this study, we consider the following high-level neighborhoods in such a multi-level algorithm:

(a) PI\API: a neighbor is obtained by performing an interchange of two jobs which is not an adjacent pairwise interchange;
(b) k-PI with k ∈ {2, 3}; and
(c) (k_1, k_2)-API.

In case (b), we first decide how many pairwise interchanges will be performed. Then, we successively choose job pairs to perform the pairwise interchanges such that no job is selected twice.


For simplicity, in case (c) we perform randomly chosen consecutive adjacent pairwise interchanges (thus, such an interchange may reverse some previous interchanges).

As the low-level neighborhood, we always use the API neighborhood including dominance criterion DOM2. To perform local optimization starting from the high-level neighbor p', we consider two randomly determined consecutive jobs p'(i) and p'(i+1), 1 ≤ i ≤ n-1. If they satisfy conditions (1)-(3) (which can be checked in constant time), the interchange of both jobs need not be considered since it cannot improve the objective function value. Otherwise, we consider the sequence p obtained from p' by interchanging both jobs. If the partial sequences σ and τ satisfy the conditions of Theorem 3, we replace sequence p' by sequence p.

Applying multi-level search, we usually generate several high-level and low-level neighbors within one iteration. Since our stopping criterion is based on the number of solutions generated, we count the number of objective function value calculations that have been performed, and only after each iteration do we check whether the stopping criterion is satisfied. For accepting an iterative neighbor, we consider the following acceptance schemes in addition to that in [14]:

(1) TA scheme: Each iterative neighbor whose increase in the objective function value in comparison with that of the starting solution of the current iteration does not exceed a given bound V is accepted. We use the same bound V as for the acceptance of a generated high-level neighbor within an iteration, namely V = F(p)/TH with the current starting solution p and different values of TH.

(2) Iterative improvement scheme: Only iterative neighbors with a better objective function value are accepted.

(3) Modified nonimproving scheme: If for a certain number of iterations NMLITER no improvement of the best solution so far has been obtained, we return to the best solution p and continue the search from there. If further iterations produce no improvement within NMLITER iterations, we return to the best solution found after p and continue from there, and so on. This acceptance scheme corresponds to that used in the tabu search variant TS-intens.

(4) SA scheme: We apply simulated annealing as described in Section 3.1 with the difference that, after generating an iterative neighbor, the temperature is rapidly decreased in accordance with the number of solutions generated within one iteration. This guarantees that the search finishes with the final value T = 0.01 (this scheme was used in [14]).
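The interplay of the parameters V (via TH), h and l can be summarized by the following sketch of the multi-level procedure with the PI\API high-level neighborhood, a limited API low-level descent, and the TA acceptance scheme. The interface is illustrative, the DOM2 check is omitted for brevity, and infeasible (non-C_max-optimal) sequences are again assumed to receive a very large cost.

```python
import random

def multi_level(p0, cost, n_iter, h_max=15, l_max=5, th=5000):
    """Multi-level search sketch: PI\\API high level, API low level, TA acceptance."""
    n = len(p0)
    cur, cur_cost = list(p0), cost(p0)
    best, best_cost = list(cur), cur_cost
    for _ in range(n_iter):
        v = cur_cost / th                        # acceptance bound V = F(p)/TH
        # high level: a non-adjacent pairwise interchange (PI \ API)
        high, high_cost = None, float("inf")
        for _ in range(h_max):
            i, j = random.sample(range(n), 2)
            if abs(i - j) == 1:
                continue                         # exclude adjacent interchanges
            q = list(cur)
            q[i], q[j] = q[j], q[i]
            c = cost(q)
            if c < high_cost:
                high, high_cost = q, c
            if c <= cur_cost + v:                # 'acceptable' high-level neighbor
                break
        if high is None:
            continue
        # low level: at most l_max improvement attempts in the API neighborhood
        for _ in range(l_max):
            i = random.randrange(n - 1)
            q = list(high)
            q[i], q[i + 1] = q[i + 1], q[i]
            c = cost(q)
            if c < high_cost:
                high, high_cost = q, c
        # TA acceptance of the resulting iterative neighbor
        if high_cost <= cur_cost + v:
            cur, cur_cost = high, high_cost
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
    return best, best_cost
```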

3.5. Genetic algorithm

For comparison purposes, we included in our tests the genetic algorithm GA given in [11], originally developed for C2 = ΣC_i. This algorithm can immediately be used also for the other C2 criteria considered in this paper.

Genetic algorithms are probabilistic search techniques based on the mechanism of evolution.

The solution space is usually represented by a population. New structures are generated by applying simple genetic operators such as cross-over, mutation, and inversion to the parent structures. The members with higher fitness values (i.e., better objective function values) in the current population have a higher probability of being selected as parents, which is similar to Darwin's concept of survival of the fittest. The initial population is randomly generated, which means that the feasibility of the final solution would not be guaranteed.


Therefore, in the initial population, at least one solution having the minimum makespan is included (for instance, by applying a constructive method given in [13]), and the best solution is saved in every generation. In this way, the feasibility of the final solution is guaranteed.

In our tests, we included the vector evaluated approach from [11]. In this case, the fitness value of a solution is a vector representing the function values with respect to both criteria. A sub-population for each criterion is generated by selecting the best solutions for that criterion from the current population. Then, solutions with good fitness values in each sub-population are selected and recombined in each generation to produce a solution that minimizes criterion C2 subject to a minimum makespan. We note that the mutation operation is based on the pairwise interchange of two jobs in the corresponding sequence. For further details, the reader is referred to [11]. We adjusted the parameters (population size and number of generations) such that the number of solutions produced is the same as for the other types of algorithms described in this section. Details are given in Section 4 below.

4. Computational results

The effectiveness of each proposed local search heuristic in finding an optimal or near-optimal solution was empirically evaluated by solving a large number of problem instances. In this section, we first describe the design of experiments and the manner in which various parameter values were set. Subsequently, we discuss the experimental results.

4.1. Design of experiments

The performance of the local search heuristics was evaluated with four groups of problems, each classified by the number of jobs (20, 40, 60, and 80). Within each group, 20 different instances were randomly generated for each objective function considered. For the initial experiments used to find the best parameter settings for each heuristic, the processing times of the jobs were integers from the uniform distribution [1, 100]. All algorithms were coded in C and run on a Power PC 603e (166 MHz). For all three objective functions considered, all parameter tests used a 'bad' starting sequence obtained by algorithm JOHN and a 'good' starting solution obtained by the insertion algorithm INS.

Our initial experiments considered a constant number of generated sequences (NSOL = 4000) and a number of generated sequences linearly depending on the number of jobs (NSOL = 100n). In most cases, the results turned out to be superior for a linear number of generated permutations (in particular for the problems with n = 80). Therefore, we use the latter variant exclusively in the following. We now describe the parameter settings that were tested for the individual algorithms.

For simulated annealing, we first tested the influence of the parameters T and TCON. In particular, we considered the following variants for both the PI and the SH neighborhoods:

(a) T_1 = F(p)/5 and T_2 = F(p)/15;
(b) TCON_1 = 1 and TCON_2 = n/2.

Subsequently, we investigated whether the inclusion of criterion DOM1 improved the performance of simulated annealing for C2 = ΣC_i.


For threshold accepting, we considered a variant TA-I that does not accept inferior moves (i.e., we set V = 0). For decreasing threshold schemes (variant TA-dec), we considered:

(a) V_1 = F(p)/2000, V_2 = F(p)/1000 and V_3 = F(p)/200;
(b) VCON_1 = 1 and VCON_2 = n/2.

Concerning variable threshold schemes (variant TA-var), we considered:

(a) V_4 = F(p*)/2000, V_5 = F(p*)/5000 and V_6 = F(p*)/10000;
(b) NCON_1 = 20 and NCON_2 = 40,

where p* denotes the current starting solution within the search.

For tabu search, we first tested, for a larger length of the tabu list (LS = 7) and a larger neighborhood size (NSIZE = 20), the influence of both attributes, position and order, for both the PI and the SH neighborhoods. Then we experimented with alternative values of the parameters LS and NSIZE:

(a) LS ∈ {3, 5, 12};
(b) NSIZE ∈ {10, 40, 60, n/4, n}.

We also checked whether the variant TS-intens improved the results obtained so far with tabu search. For comparison purposes, we also tested a TS variant with a complete investigation of the nontabu neighbors in the API neighborhood.

For the multi-level search, we first used the TA acceptance scheme for an iterative neighbor with the following variants in our initial tests:

(k_1, k_2)-API: We experimented with h ∈ {8, 15, 20}, l ∈ {10, 20, 40}, TH ∈ {1000, 2000, 5000} and 2 ≤ k_1 < k_2 ≤ 20.

PI\API: We experimented with h, l ∈ {3, 5, 10, 15, 20} and TH ∈ {1000, 2000, 5000}.

Having settled the above parameters, we then compared the high-level neighborhoods 2-PI and 3-PI with PI\API, again for the TA acceptance scheme, and finally we compared several acceptance schemes.

4.2. Comparison of neighborhoods

First, we compared both the SH and the PI neighborhoods on some parameter variants for simulated annealing, threshold accepting, and tabu search. For each parameter constellation considered, we observed quite similar trends. For one typical variant of each metaheuristic, Table 1 shows the percentage improvements over the starting solution. In almost all cases, the PI neighborhood performed better (and sometimes even clearly better). This is in contrast to results for scheduling problems with makespan minimization, where the SH neighborhood is often superior. Therefore, only PI-based neighborhoods were applied in the multi-level search.


Table 1
Comparison of the PI and SH neighborhoods (average percentage improvement over the starting solution)

                 PI                              SH
         ΣC_i   Σw_iC_i  Σw_iT_i         ΣC_i   Σw_iC_i  Σw_iT_i

Starting solution with algorithm JOHN
SA       30.21  41.80    83.74           29.27  40.66    83.01
TA-I     29.69  42.00    83.15           28.43  40.21    81.45
TA-dec   28.65  40.93    83.19           27.82  39.55    81.37
TS       29.87  41.67    83.54           29.19  41.10    83.54

Starting solution with algorithm INS
SA        0.20  15.20    64.15            0.11  13.35    61.93
TA-I      0.24  14.73    61.58            0.29  12.53    60.71
TA-dec    0.14  14.15    63.74            0.14  12.30    61.18
TS        0.12  14.30    63.43            0.17  13.39    61.30

4.3. Parameter settings

For local search heuristics, the parameter setting usually does not depend on the quality of the starting solution or on the objective function. For this reason, we recommend a setting that works well from an overall point of view. However, as seen from Table 1, there are large differences in the achieved percentage improvements depending on the starting solution and the C2 criterion considered. Most improvements were obtained for C2 = Σw_iT_i, which indeed is the 'most complicated' criterion. This usually leads to higher percentage improvements, and the differences become larger for the individual variants of an algorithm. For the C2 = ΣC_i criterion, the insertion algorithm produces a rather good solution, so that the iterative algorithms produced only small additional percentage improvements. Although algorithm INS is a constructive one, the required number of functional value computations is rather large. In particular, for the problem sizes included in our experiments, algorithm INS considers 1,521 sequences for n = 20; 11,441 sequences for n = 40; 37,761 sequences for n = 60; and 88,481 sequences for n = 80. Consequently, for larger values of n, the constructive algorithm requires a considerably larger number of sequences than the iterative variants tested here (and, therefore, requires larger computational times).

Concerning simulated annealing, the choice of TCON hardly influenced the results. Since keeping the temperature constant for a certain number of iterations did not improve the results, we chose TCON_1 for simplicity. Moreover, the use of T_2 was slightly superior to T_1.

The consideration of the dominance criterion DOM1 within the simulated annealing algorithm for C2 = ΣC_i did not improve the results (the differences in the average percentage improvements are lower than 0.02% and have practically no influence). For threshold accepting with decreasing threshold schemes, high initial threshold values (V_2, V_3) worked poorly. Variant V_1 clearly performed best. This variant mostly outperformed variant TA-I. For C2 = Σw_iT_i and the starting solution determined by algorithm JOHN, the improvements over the starting solution for this variant were more than 2% higher than with TA-I.


In connection with variable threshold schemes, we observed that the results become better for a smaller value of V, and we recommend the use of V_6. Concerning parameter NCON, the results are almost identical for C2 ∈ {ΣC_i, Σw_iC_i}, but NCON_1 performs better for C2 = Σw_iT_i. Thus, we recommend NCON_1. In particular, we found that the variable threshold schemes were superior to the decreasing schemes, and the percentage improvements over the starting solution were sometimes more than 1% higher than with decreasing schemes. In the comparative study, therefore, we include threshold accepting with the above recommended variable threshold scheme, and for comparison purposes the variant TA-I.

Applying tabu search, we observed that for LS = 7 and NSIZE = 20 the position attribute performed better, both for the PI and the SH neighborhoods. The complete investigation of the API neighborhood within a TS algorithm produced poor results. Even though, on average, an iterative improvement algorithm with complete investigation requires the evaluation of more than 40,000 sequences to determine a local optimum for the problems with n = 80, its results were still rather poor when we started with Johnson's sequence.

Having fixed the position attribute and the PI neighborhood, we found that LS = 3 and NSIZE = n obtained the best results and were thus used in the following experiments. In particular, longer tabu lists (LS = 12) and smaller neighborhood sizes (NSIZE = 10) clearly performed worse.

With the above parameter settings, we tested this variant against TS-intens for the PI neighborhood. The intensification of TS did not improve the results, as the average percentage improvements for TS-intens are up to 0.26% lower for NTSITER = 8, which turned out to be the best choice. However, this could have been due to the large neighborhood size of n and the rather small number of tabu search iterations (100). Nevertheless, since the results with TS-intens were still better than those with TS using smaller neighborhood sizes and longer tabu lengths, we included this variant in our comparative study since it might be advantageous in the case of a larger number of iterations.

For multi-level procedures, we first tested the PI\API high-level neighborhood together with the TA acceptance scheme. We found that the number l of low-level neighbors should not be larger than the number h of high-level neighbors (which corresponds to the results for permutation flow shop problems with batch processing [14]). In particular, we found that variants with h ∈ {3, 5} and l = 15 performed poorly. In general, a ratio h/l between 2 : 1 and 3 : 1 can be recommended, and variants with h ∈ {10, 15} and l = 5 produced good results. Concerning parameter TH, we observed that larger values of TH (i.e., smaller threshold values) were superior. For the comparative study, therefore, we used the variant with h = 15, l = 5 and TH = 5000.

When applying the high-level neighborhood (k_1, k_2)-API, we found that the results are not competitive with neighborhood PI\API. Higher values of h (h = 15) and of (k_1, k_2) (approximately (k_1, k_2) = (10, 12)) were the best variants for this high-level neighborhood. However, rather high values of l worked better in this case (e.g., l = 40 outperforms l ∈ {10, 20}), which indicates that much expense must be invested in the low-level optimization.

Moreover, with the parameter settings for the high-level neighborhood PI\API, we found that the use of 2-PI yielded results of comparable quality. However, the application of 3-PI led to considerably worse results. Thus, the use of a high-level neighborhood with few small ((10, 12)-API) or many large (3-PI) changes is not recommended.

Finally, we tested the variant with the PI\API neighborhood recommended above against the alternative acceptance schemes. We found that the TA acceptance scheme produced the best results,


followed by the SA acceptance scheme (which is only slightly worse). The biggest differences in the percentage improvements were observed for C2 = Σw_iT_i and the use of algorithm INS for determining the starting solution, where the average percentage improvements with the TA acceptance scheme were more than 2% higher than with the iterative improvement acceptance scheme.

4.4. Comparative evaluation of heuristics

We now discuss the comparative effectiveness of the various heuristics in finding an optimal or near-optimal schedule. For this purpose, heuristics with the above parameter settings and the PI neighborhood were used. In order to analyze the effect of domination of the first and second machines, the following three classes of problems were generated:

- Class 1: Both machines are equivalent (the range of processing times on the first machine and the second machine is U(1, 99), where U(1, 99) means that the processing times are uniformly distributed integers from the set {1, 2, ..., 99});
- Class 2: The first machine dominates the second machine (the range of processing times on the first machine is U(1, 99) and on the second machine is U(1, 49)); and
- Class 3: The second machine dominates the first machine (the range of processing times on the first machine is U(1, 49) and on the second machine is U(1, 99)).

The number of jobs (problem size) varied from n = 10 to n = 80. For each problem configuration, 20 problem instances were generated. The number of solutions generated, NSOL, was set at three levels: NSOL = 100n, NSOL = 200n, and NSOL = 300n. In the genetic algorithm, we always applied a population size of 2n. To generate the same number of solutions as with the other algorithms, we set the number of generated populations equal to 50, 100 and 150, respectively. Concerning the initial population, 50% of the solutions were randomly generated, and the other half was obtained by using algorithm JOHN and some random modifications of this sequence as well as by using algorithm INS and again some modifications of this sequence.
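For illustration, processing times for the three problem classes described above can be generated along the following lines (an illustrative sketch; due dates and weights for the weighted criteria are not specified here and would have to be added separately):

```python
import random

def make_instance(n, cls, seed=None):
    """Generate processing times for one instance of class 1, 2 or 3.

    Class 1: a, b ~ U(1, 99); class 2: a ~ U(1, 99), b ~ U(1, 49);
    class 3: a ~ U(1, 49), b ~ U(1, 99).
    """
    rng = random.Random(seed)
    hi_a = 49 if cls == 3 else 99
    hi_b = 49 if cls == 2 else 99
    a = [rng.randint(1, hi_a) for _ in range(n)]
    b = [rng.randint(1, hi_b) for _ in range(n)]
    return a, b
```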

4.4.1. Comparative effectiveness of heuristics for large problems

For each algorithm and each type of problem, Tables 2-7 depict the average percentage deviation from the best value obtained for each instance of the corresponding 20 instances of each series (the lowest percentage deviations obtained are shown in bold face), where ML denotes the multi-level procedure with the PI\API high-level neighborhood and ML(2) denotes the multi-level procedure with the 2-PI high-level neighborhood.

Among the different classes of problem instances, those in class 1 are the most difficult ones. An indication of this is the higher percentage deviations of the iterative improvement variant TA-I from the best function value obtained. Detailed results for each secondary criterion are discussed below.

4.4.1.1. The total flow time criterion. In the case of C2 = ΣC_i (see Tables 2 and 3), the multi-level procedures provided the best results independent of the starting solution. Both high-level neighborhoods PI\API and 2-PI yielded good results. However, it should be noted that the constructive algorithm INS obviously produced excellent starting solutions (on average, the deviation from the best obtained value is approximately only 1%, usually even smaller for large problems).


Table 2
Results for C2 = ΣC_i and starting solution JOHN

Class  n   NSOL    TA-I   TA-var  SA     TS     TS-int  ML     ML(2)  GA
1      20  2000    3.17   2.99    0.97   0.95   1.15    0.99   0.34   3.59
           4000    3.17   2.98    0.97   0.55   0.79    0.88   0.50   1.54
           6000    3.17   2.98    0.78   0.43   0.61    0.59   0.43   1.11
       40  4000    1.22   1.16    1.43   1.20   1.35    0.72   0.93   5.34
           8000    1.16   1.03    0.70   0.88   0.91    0.41   0.57   3.29
           12,000  1.16   1.03    0.65   0.68   0.80    0.30   0.45   2.66
       60  6000    1.43   1.39    1.08   1.22   1.38    0.98   1.01   6.56
           12,000  1.20   1.01    0.68   0.88   1.00    0.66   0.56   2.79
           18,000  1.18   0.95    0.41   0.71   0.87    0.55   0.44   1.59
       80  8000    1.45   1.37    1.20   1.29   1.35    0.93   1.33   7.67
           16,000  1.27   1.27    0.52   0.97   1.02    0.64   0.79   3.67
           24,000  1.25   1.27    0.45   0.82   0.89    0.51   0.63   2.45
2      20  2000    1.57   1.46    0.34   0.73   0.80    0.69   0.26   1.79
           4000    1.17   1.07    0.22   0.49   0.44    0.67   0.13   0.80
           6000    1.17   1.07    0.20   0.43   0.30    0.62   0.05   0.66
       40  4000    0.98   0.99    0.84   0.99   1.04    0.66   0.58   3.45
           8000    0.96   0.96    0.64   0.86   0.85    0.57   0.48   1.44
           12,000  0.96   0.95    0.42   0.81   0.79    0.56   0.47   0.94
       60  6000    0.98   1.06    0.87   0.97   0.97    0.70   0.96   5.32
           12,000  0.94   0.99    0.50   0.88   0.89    0.63   0.81   2.08
           18,000  0.94   0.98    0.54   0.83   0.83    0.60   0.76   1.52
       80  8000    1.06   1.29    0.82   1.03   1.09    0.72   0.66   8.75
           16,000  1.00   1.29    0.53   0.95   1.02    0.60   0.47   5.13
           24,000  1.00   1.29    0.44   0.91   0.94    0.56   0.42   4.42
3      20  2000    0.22   0.21    0.06   0.15   0.21    0.04   0.04   1.90
           4000    0.15   0.14    0.08   0.11   0.11    0.04   0.03   0.71
           6000    0.15   0.14    0.05   0.07   0.10    0.02   0.03   0.35
       40  4000    0.19   0.18    0.27   0.21   0.31    0.13   0.19   4.81
           8000    0.17   0.16    0.14   0.13   0.23    0.06   0.10   1.69
           12,000  0.17   0.16    0.12   0.10   0.19    0.03   0.07   0.83
       60  6000    0.49   0.39    0.30   0.29   0.36    0.24   0.25   6.30
           12,000  0.43   0.35    0.33   0.21   0.29    0.17   0.15   1.72
           18,000  0.43   0.35    0.17   0.18   0.26    0.15   0.12   0.92
       80  8000    0.50   0.80    0.57   0.53   0.57    0.46   0.56   8.13
           16,000  0.45   0.80    0.43   0.46   0.52    0.40   0.44   2.82
           24,000  0.44   0.80    0.38   0.43   0.50    0.38   0.40   1.78

Due to this, the results with the good starting solution INS were slightly better than those with the bad starting solution JOHN. When starting the population based on solution JOHN, the convergence of the genetic algorithm was rather slow.

4.4.1.2. The total weighted flow time criterion. For C2 = Σw_iC_i (see Tables 4 and 5), the multi-level procedures starting with solution JOHN produced the best results.



Table 3
Results for C2 = ΣC_i and starting solution INS

Class  n    NSOL    TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA
1      20   2000    0.70    0.71    0.57    0.63    0.58    0.37    0.33    0.95
1      20   4000    0.70    0.71    0.44    0.48    0.37    0.16    0.18    0.72
1      20   6000    0.70    0.71    0.42    0.43    0.22    0.11    0.14    0.61
1      40   4000    0.34    0.33    0.42    0.30    0.41    0.15    0.19    0.52
1      40   8000    0.33    0.32    0.30    0.23    0.35    0.10    0.14    0.47
1      40   12,000  0.33    0.32    0.26    0.21    0.32    0.06    0.10    0.41
1      60   6000    0.24    0.26    0.40    0.29    0.31    0.20    0.25    0.45
1      60   12,000  0.22    0.24    0.28    0.23    0.29    0.15    0.15    0.41
1      60   18,000  0.21    0.24    0.27    0.19    0.28    0.11    0.12    0.36
1      80   8000    0.35    0.40    0.43    0.34    0.39    0.27    0.32    0.45
1      80   16,000  0.31    0.39    0.39    0.29    0.35    0.20    0.32    0.41
1      80   24,000  0.30    0.39    0.29    0.26    0.32    0.16    0.19    0.24
2      20   2000    0.31    0.31    0.12    0.29    0.32    0.26    0.13    0.30
2      20   4000    0.31    0.31    0.10    0.27    0.28    0.25    0.08    0.22
2      20   6000    0.31    0.31    0.22    0.25    0.15    0.25    0.07    0.20
2      40   4000    0.43    0.44    0.29    0.38    0.34    0.33    0.33    0.50
2      40   8000    0.43    0.44    0.15    0.36    0.28    0.32    0.32    0.44
2      40   12,000  0.43    0.44    0.16    0.35    0.24    0.28    0.23    0.43
2      60   6000    0.28    0.28    0.17    0.28    0.23    0.26    0.24    0.28
2      60   12,000  0.28    0.28    0.14    0.28    0.21    0.22    0.23    0.28
2      60   18,000  0.28    0.28    0.17    0.28    0.18    0.21    0.22    0.27
2      80   8000    0.20    0.20    0.17    0.20    0.17    0.18    0.19    0.20
2      80   16,000  0.20    0.20    0.13    0.16    0.11    0.15    0.16    0.19
2      80   24,000  0.20    0.20    0.14    0.16    0.10    0.14    0.15    0.19
3      20   2000    0.07    0.05    0.03    0.09    0.10    0.01    0.01    0.11
3      20   4000    0.07    0.05    0.03    0.08    0.09    0.01    0.01    0.08
3      20   6000    0.07    0.05    0.06    0.06    0.07    0.01    0.01    0.07
3      40   4000    0.07    0.10    0.14    0.13    0.16    0.07    0.06    0.21
3      40   8000    0.06    0.10    0.10    0.10    0.14    0.06    0.03    0.17
3      40   12,000  0.06    0.10    0.07    0.09    0.13    0.05    0.02    0.16
3      60   6000    0.14    0.26    0.25    0.15    0.22    0.10    0.13    0.25
3      60   12,000  0.13    0.26    0.15    0.13    0.16    0.08    0.06    0.21
3      60   18,000  0.13    0.26    0.11    0.11    0.15    0.07    0.05    0.18
3      80   8000    0.39    0.46    0.46    0.40    0.44    0.39    0.40    0.46
3      80   16,000  0.38    0.46    0.43    0.36    0.43    0.35    0.37    0.30
3      80   24,000  0.38    0.46    0.40    0.34    0.42    0.32    0.36    0.13

The high-level neighborhood 2-PI outperformed the high-level neighborhood PI\API. When using INS as a starting solution, simulated annealing, tabu search and the multi-level procedures worked best. The tabu search variant performed particularly well for short runs (NSOL = 100n). For NSOL = 300n, algorithm ML(2) produced the best results (67 times with starting solution INS and 65 times with JOHN), followed by algorithm SA (41 times with INS and 53 times with JOHN) and algorithm ML (23 times with INS and 52 times with JOHN).



Table 4
Results for C2 = Σw_iC_i and starting solution JOHN

Class  n    NSOL    TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA
1      20   2000    2.22    2.21    1.69    2.04    2.12    1.88    1.34    4.50
1      20   4000    2.21    2.20    1.55    1.57    1.70    1.50    1.19    2.52
1      20   6000    2.21    2.20    1.38    1.36    1.55    1.49    1.00    2.02
1      40   4000    1.71    1.72    1.24    1.66    1.88    0.97    1.22    7.94
1      40   8000    1.54    1.52    0.74    1.22    1.44    0.66    0.52    3.52
1      40   12,000  1.54    1.52    0.80    0.99    1.14    0.55    0.38    2.26
1      60   6000    2.20    2.24    2.13    2.30    2.42    2.36    2.36    11.06
1      60   12,000  1.92    1.94    1.75    1.78    2.06    1.93    1.61    4.62
1      60   18,000  1.88    1.81    1.19    1.57    2.00    1.80    1.37    1.98
1      80   8000    1.67    1.68    1.52    1.69    1.84    1.51    2.13    14.35
1      80   16,000  1.37    1.45    1.02    1.35    1.57    1.08    1.36    5.27
1      80   24,000  1.34    1.19    0.60    1.20    1.39    0.90    1.11    2.78
2      20   2000    0.93    0.93    0.43    0.90    0.86    1.04    0.22    4.23
2      20   4000    0.93    0.93    0.48    0.80    0.54    0.89    0.17    1.46
2      20   6000    0.93    0.93    0.30    0.49    0.46    0.86    0.17    0.67
2      40   4000    1.25    1.27    1.02    1.21    1.37    0.80    1.30    5.97
2      40   8000    1.14    1.14    0.90    1.06    1.09    0.57    0.89    2.20
2      40   12,000  1.14    1.14    1.09    0.97    1.02    0.55    0.79    1.20
2      60   6000    1.01    1.10    0.82    1.06    1.18    0.56    0.53    9.05
2      60   12,000  0.96    1.07    0.45    0.94    0.99    0.49    0.39    3.51
2      60   18,000  0.96    1.07    0.79    0.90    0.95    0.46    0.35    2.33
2      80   8000    1.19    1.37    1.36    1.20    1.25    0.88    0.92    13.03
2      80   16,000  1.08    1.37    0.50    1.07    1.19    0.73    0.65    4.04
2      80   24,000  0.95    1.37    0.86    1.04    1.14    0.66    0.57    2.62
3      20   2000    0.18    0.15    0.13    0.27    0.29    0.16    0.03    3.10
3      20   4000    0.17    0.13    0.06    0.22    0.18    0.12    0.02    0.99
3      20   6000    0.17    0.13    0.04    0.19    0.15    0.07    0.02    0.47
3      40   4000    0.78    0.81    0.85    0.90    1.02    0.77    0.79    7.05
3      40   8000    0.75    0.76    0.69    0.76    0.92    0.69    0.65    2.54
3      40   12,000  0.75    0.76    0.67    0.72    0.89    0.66    0.61    0.85
3      60   6000    0.78    0.81    0.78    0.79    0.91    0.71    0.75    9.98
3      60   12,000  0.65    0.78    0.63    0.66    0.82    0.58    0.42    3.71
3      60   18,000  0.64    0.78    0.60    0.60    0.76    0.55    0.33    2.38
3      80   8000    0.33    0.50    0.34    0.41    0.43    0.35    0.64    11.95
3      80   16,000  0.21    0.50    0.20    0.25    0.35    0.20    0.23    3.94
3      80   24,000  0.20    0.50    0.19    0.22    0.32    0.17    0.17    2.53

4.4.1.3. The total weighted tardiness criterion. For the C2 = Σw_iT_i criterion, Tables 6 and 7 show that simulated annealing clearly obtained the best results. We assume that this is due to the very large differences between the objective function values of the starting solution and the final heuristic solution (even when using INS as a starting solution, the initial objective function value is usually approximately 200 to 250% above the best function value obtained).



Table 5
Results for C2 = Σw_iC_i and starting solution INS

Class  n    NSOL    TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA
1      20   2000    3.10    3.05    1.89    1.83    2.21    1.58    2.06    5.95
1      20   4000    3.09    3.05    1.69    1.57    1.67    1.37    1.68    2.09
1      20   6000    3.09    3.05    1.62    1.29    1.48    1.34    1.39    1.18
1      40   4000    1.50    1.56    1.65    1.25    1.27    1.25    1.63    9.65
1      40   8000    1.37    1.42    1.08    0.90    0.93    1.04    0.95    4.69
1      40   12,000  1.37    1.42    0.70    0.71    0.85    0.95    0.75    2.63
1      60   6000    2.24    2.22    2.19    2.14    2.26    2.31    2.54    11.03
1      60   12,000  2.05    1.99    1.54    1.71    1.97    1.86    1.76    5.97
1      60   18,000  2.03    1.92    1.29    1.54    1.75    1.71    1.51    3.68
1      80   8000    1.49    1.46    1.45    1.49    1.59    1.58    2.13    12.39
1      80   16,000  1.16    1.24    0.85    1.01    1.20    1.11    1.39    6.17
1      80   24,000  0.97    1.21    1.02    0.88    1.01    0.93    1.02    3.67
2      20   2000    0.72    0.72    0.76    0.69    0.51    0.86    0.69    3.56
2      20   4000    0.72    0.72    0.32    0.37    0.30    0.76    0.67    1.21
2      20   6000    0.72    0.72    0.22    0.33    0.23    0.75    0.65    0.65
2      40   4000    0.43    0.34    0.57    0.46    0.56    0.56    0.56    5.64
2      40   8000    0.41    0.24    0.58    0.29    0.46    0.24    0.37    1.83
2      40   12,000  0.41    0.24    0.34    0.21    0.31    0.23    0.34    0.70
2      60   6000    0.47    0.60    0.68    0.47    0.60    0.63    0.61    7.91
2      60   12,000  0.41    0.54    0.74    0.32    0.46    0.45    0.36    3.14
2      60   18,000  0.39    0.54    0.52    0.28    0.43    0.41    0.27    1.86
2      80   8000    0.57    0.82    0.47    0.61    0.64    0.70    0.94    9.26
2      80   16,000  0.45    0.82    0.32    0.50    0.56    0.48    0.53    3.96
2      80   24,000  0.31    0.82    0.89    0.46    0.54    0.43    0.44    2.35
3      20   2000    0.17    0.15    0.12    0.29    0.29    0.19    0.04    2.65
3      20   4000    0.17    0.15    0.05    0.21    0.19    0.14    0.03    1.12
3      20   6000    0.17    0.15    0.09    0.14    0.15    0.12    0.03    0.47
3      40   4000    0.74    0.72    0.80    0.89    0.95    0.80    0.75    6.85
3      40   8000    0.73    0.70    0.67    0.77    0.85    0.69    0.65    3.46
3      40   12,000  0.72    0.70    0.64    0.71    0.81    0.64    0.59    1.74
3      60   6000    0.85    0.84    1.01    0.63    0.89    0.83    0.80    8.53
3      60   12,000  0.75    0.79    0.66    0.37    0.63    0.69    0.42    3.66
3      60   18,000  0.74    0.79    0.57    0.33    0.58    0.67    0.37    1.98
3      80   8000    0.26    0.52    0.49    0.32    0.38    0.38    0.55    8.65
3      80   16,000  0.16    0.52    0.17    0.19    0.34    0.22    0.29    3.64
3      80   24,000  0.15    0.52    0.19    0.16    0.30    0.18    0.22    1.85

For this reason, a procedure that goes 'straightforward to better solutions' is often preferable. For the larger-sized problems (n ≥ 60), even the iterative improvement variant TA-I is not so bad. This indicates that the probability of getting trapped in a local optimum in the early stages of the search is rather small and that the 'refined' search variants such as the multi-level procedures only 'lose' time (when a maximum number of generated solutions in the range considered is used as the stopping criterion). However, for the small-sized problems (n = 20), ML(2) works particularly well.



Table 6
Results for C2 = Σw_iT_i and starting solution JOHN

Class  n    NSOL    TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA
1      20   2000    18.82   15.58   10.51   8.08    10.25   5.79    5.54    38.94
1      20   4000    18.82   15.39   4.17    4.48    5.62    4.54    1.61    17.95
1      20   6000    18.82   15.39   4.88    3.07    2.86    3.98    1.56    10.61
1      40   4000    7.56    5.86    5.42    4.91    5.09    5.89    10.57   52.08
1      40   8000    6.97    4.25    4.34    3.63    4.12    3.10    6.86    20.02
1      40   12,000  6.97    3.91    2.54    3.12    3.98    2.51    5.12    12.59
1      60   6000    10.46   7.67    9.14    9.47    9.09    12.33   11.17   66.94
1      60   12,000  8.84    5.97    4.89    7.17    7.18    8.36    7.74    26.29
1      60   18,000  8.55    5.48    4.63    6.85    6.64    7.17    6.56    15.88
1      80   8000    10.16   7.80    7.25    8.57    8.65    10.24   11.64   84.61
1      80   16,000  6.43    4.29    4.58    5.14    5.10    5.52    6.77    35.30
1      80   24,000  5.60    3.06    3.68    3.94    4.14    3.58    5.68    23.28
2      20   2000    5.48    2.51    2.21    0.90    1.45    1.98    3.22    21.37
2      20   4000    5.48    1.05    0.80    0.23    0.64    1.39    1.62    4.47
2      20   6000    5.48    0.38    0.13    0.23    0.60    1.18    1.08    2.58
2      40   4000    5.83    5.52    4.94    5.07    4.54    7.78    8.65    38.73
2      40   8000    5.67    4.10    4.86    4.07    2.08    4.01    5.32    19.69
2      40   12,000  5.67    3.91    1.78    3.49    1.80    3.61    3.99    13.60
2      60   6000    4.00    3.21    5.02    3.02    3.14    5.53    6.03    53.55
2      60   12,000  3.40    2.38    2.26    2.07    2.27    3.49    2.99    19.15
2      60   18,000  3.32    2.27    1.85    1.53    2.15    2.85    2.06    12.47
2      80   8000    6.34    5.30    5.63    6.30    6.26    7.21    7.64    98.59
2      80   16,000  5.09    2.67    2.33    3.49    4.00    4.00    3.91    51.20
2      80   24,000  4.95    2.15    2.23    3.02    3.21    3.14    3.03    42.27
3      20   2000    5.36    4.48    3.38    4.17    3.98    6.91    4.24    20.67
3      20   4000    5.36    4.34    3.53    3.88    3.80    6.60    3.75    7.66
3      20   6000    5.36    4.34    3.42    3.86    3.80    6.57    3.63    6.14
3      40   4000    3.93    3.42    3.88    4.58    5.09    4.71    4.71    32.25
3      40   8000    3.82    2.04    2.07    3.71    4.49    4.07    3.62    12.94
3      40   12,000  3.82    1.97    2.49    2.30    3.49    3.70    2.90    8.40
3      60   6000    7.65    7.41    8.39    7.44    7.45    8.70    8.89    46.96
3      60   12,000  7.13    6.78    6.56    6.94    6.99    7.04    6.75    23.98
3      60   18,000  7.11    6.51    6.29    6.91    6.94    6.75    6.36    17.13
3      80   8000    6.01    5.11    5.90    6.17    6.18    7.66    7.09    65.21
3      80   16,000  4.76    3.14    5.16    4.63    4.57    4.78    5.34    34.82
3      80   24,000  4.60    2.94    3.15    4.31    4.46    4.31    4.39    31.06

For problems with C2 = Σw_iT_i, there is a need to look for better initial solutions (as the algorithm INS does not use any rule for breaking ties in the case of equal function values of partial solutions).

4.4.2. Comparative effectiveness of heuristics for small problems

As stated earlier, no optimization algorithms or effective lower bounds are available to optimally solve the hierarchical criteria problems considered in this paper.



Table 7
Results for C2 = Σw_iT_i and starting solution INS

Class  n    NSOL    TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA
1      20   2000    29.48   12.85   7.63    6.47    9.82    9.63    7.89    56.35
1      20   4000    29.48   9.99    5.17    3.77    5.42    7.60    2.78    21.08
1      20   6000    29.48   8.23    3.22    2.81    3.46    3.30    1.45    9.93
1      40   4000    11.01   8.89    6.23    9.67    10.48   7.68    9.16    70.48
1      40   8000    10.26   7.23    2.62    6.59    7.87    5.15    6.14    29.94
1      40   12,000  10.24   6.03    4.25    5.99    7.05    4.85    5.16    15.96
1      60   6000    10.36   9.60    8.17    10.09   9.94    11.64   14.86   90.57
1      60   12,000  8.24    7.35    4.92    8.19    7.86    7.93    10.82   38.02
1      60   18,000  8.16    6.56    4.16    7.65    7.09    6.84    8.99    21.20
1      80   8000    10.62   7.86    9.56    8.29    8.29    13.64   15.05   125.58
1      80   16,000  8.21    4.63    4.21    5.28    5.71    8.79    9.54    52.26
1      80   24,000  7.74    4.08    2.76    4.63    5.01    6.21    7.52    26.00
2      20   2000    9.41    2.92    1.47    3.21    2.56    5.46    2.86    34.49
2      20   4000    9.41    1.50    0.69    1.47    0.74    3.19    1.38    15.17
2      20   6000    9.41    1.50    0.27    0.48    0.44    2.15    0.89    9.93
2      40   4000    5.99    4.79    3.63    5.48    6.07    7.58    8.46    51.72
2      40   8000    5.62    4.30    2.72    4.99    4.78    5.60    5.63    17.74
2      40   12,000  5.62    3.22    2.27    3.95    4.74    4.77    5.09    10.64
2      60   6000    4.77    4.07    3.24    4.70    4.53    8.03    8.52    68.76
2      60   12,000  3.64    2.68    2.15    3.18    3.27    4.27    4.69    27.72
2      60   18,000  3.60    2.48    1.16    2.90    3.22    3.03    3.51    19.34
2      80   8000    6.20    5.60    5.35    5.51    5.62    9.34    10.84   97.69
2      80   16,000  4.97    4.03    3.67    3.49    4.03    4.66    5.95    35.81
2      80   24,000  4.79    3.13    2.65    3.10    3.85    3.38    4.61    22.21
3      20   2000    8.22    7.32    4.06    7.23    6.72    6.90    3.79    26.28
3      20   4000    8.22    6.96    3.46    6.73    6.57    6.76    3.24    12.06
3      20   6000    8.22    6.82    3.19    6.37    6.45    6.68    3.24    6.33
3      40   4000    7.02    7.40    3.17    7.16    8.01    6.42    6.45    45.92
3      40   8000    6.70    6.86    2.43    6.71    6.65    4.30    4.47    18.33
3      40   12,000  6.70    6.61    2.52    6.43    6.55    3.88    3.56    11.65
3      60   6000    8.08    8.06    8.83    7.47    7.89    9.35    9.63    60.40
3      60   12,000  7.06    7.15    6.67    6.88    7.19    8.11    8.17    21.49
3      60   18,000  7.02    6.89    6.62    6.78    7.12    7.47    7.70    10.12
3      80   8000    7.51    5.53    7.05    6.53    5.52    9.14    9.41    80.58
3      80   16,000  4.72    4.17    5.15    4.39    4.63    6.52    6.36    34.87
3      80   24,000  4.09    3.96    3.60    4.22    4.25    5.57    5.77    19.73

Therefore, it becomes difficult to report the absolute percentage deviation of the heuristic solution value from the optimal value (or, as its surrogate, the lower bound). However, we were able to modify a branch and bound algorithm developed by Gupta et al. [29] for a different problem to obtain optimal solutions for the total flow time criterion. While not very efficient, this algorithm was able to optimally solve problems involving about 20 jobs. For the 20 class 1 problem instances (which were found to be the most difficult to optimize) with n = 20 and C2 = ΣC_i, we measured the percentage deviation of the heuristic values from the optimal.



Table 8
Comparison for class 1 problems with n = 20 and C2 = ΣC_i with the optimal solution

C2 = ΣC_i; NSOL = 300n, JOHN
       TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA      Best
NOS    1       2       4       0       0       5       7       0       12
AVE    3.20    3.01    0.81    0.45    0.64    0.62    0.46    1.14    0.08
MAX    15.09   14.28   6.30    0.99    1.75    3.31    1.71    2.52    0.86

C2 = ΣC_i; NSOL = 300n, INS
       TA-I    TA-var  SA      TS      TS-int  ML      ML(2)   GA      Best
NOS    3       3       2       1       3       7       9       4       13
AVE    0.72    0.72    0.44    0.45    0.25    0.10    0.15    0.63    0.04
MAX    3.57    3.57    1.57    1.35    0.72    0.77    1.19    2.51    0.41

Only long runs with NSOL = 300n together with both starting solutions JOHN and INS are considered. The following statistics were collected:

NOS: number of optimal solutions,
AVE: average percentage deviation of the heuristic solution value from the optimal solution value,
MAX: maximum percentage deviation of the heuristic solution value from the optimal solution value.
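A minimal sketch (our illustration, assuming the definitions above and exact equality for counting optima) of how these three statistics can be computed from the heuristic and optimal objective values of the instances:

    def optimality_statistics(heuristic_values, optimal_values):
        """Compute NOS, AVE and MAX over a set of instances; the i-th entries of
        the two lists refer to the same problem instance."""
        deviations = [100.0 * (h - opt) / opt
                      for h, opt in zip(heuristic_values, optimal_values)]
        nos = sum(1 for d in deviations if d == 0.0)  # number of optimal solutions
        ave = sum(deviations) / len(deviations)       # average percentage deviation
        mx = max(deviations)                          # maximum percentage deviation
        return nos, ave, mx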

The results are given in Table 8, where the column 'Best' gives the percentage deviation of the best value obtained with the 8 procedures from the optimal value. It can be seen from Table 8 that, even with the bad initial solution JOHN, most algorithms produce solutions that deviate from the optimal value on average by less than 1%. In particular, the multi-level procedure ML(2) obtained an optimal solution in about 45% of the problems.

4.5. Efficiency of the heuristics

The computational time of the iterative algorithms for the two-machine problem under consideration is rather small. When a maximum number of generated solutions is taken as the stopping criterion, most algorithms used approximately the same time (except the genetic algorithm, which needs more time because performing the crossovers is more expensive than a single pairwise interchange). In particular, for all 20 instances of a series with n = 80 and NSOL = 300n = 24,000, the required time was about 13 s.

4.6. Summary of experimental findings

We now summarize the findings of our empirical evaluation of the various heuristics as follows:

- In contrast to several scheduling problems with makespan minimization, for the F2 | | F_h(C2/C_max) problem, where C2 is any of the three objective functions considered, the use of the pairwise interchange neighborhood leads to better results than the shift neighborhood.

- For threshold accepting, the variable threshold scheme outperformed decreasing threshold schemes. For simulated annealing, a cooling scheme should be used which initially allows an increase in the objective function value by a certain percentage over the starting value with a fixed probability (instead of a fixed value of the increase); a sketch of such an acceptance scheme, together with the pairwise interchange neighborhood, is given after this list. For tabu search, the position attribute together with a small tabu list size (and thus a less restrictive tabu list) performed best. Random investigation of the pairwise interchange neighborhood with a neighborhood size equal to the number of jobs worked best, whereas complete investigation of all nontabu neighbors in the adjacent pairwise interchange neighborhood worked poorly. The inclusion of the intensification strategy did not improve the results, which is probably due to the large neighborhood size and the rather small number of iterations in the initial parameter tests.

- For the multi-level algorithms, the choice of the high-level neighborhood significantly influences the quality of the results. For the problem under consideration, the use of a (nonadjacent) pairwise interchange or the use of at most two pairwise interchanges worked best. The application of several consecutive adjacent pairwise interchanges for generating a high-level neighbor did not yield competitive results. In the low-level neighborhood, a local optimum need not necessarily be determined. It was important to include some structural properties in the investigation of the adjacent pairwise interchange neighborhood (in our tests we included criterion DOM2 to exclude nonpromising neighbors immediately). A ratio of approximately 3:1 between generated high-level and low-level neighbors in one iteration is suitable for the recommended high-level neighborhood.

- From the comparative study of the different types of algorithms, we found that for problems with C2 = ΣC_i and C2 = Σw_iC_i, the multi-level algorithms generally produced the best results. In this case, the use of the high-level neighborhoods PI\API and 2-PI can be recommended, where the 2-PI neighborhood is even preferable when considering how often the best value has been obtained.

- For the problems with C2 = Σw_iT_i, we found that simulated annealing obtained the best results. Due to the considerably larger differences between the initial and final objective function values, even iterative improvement algorithms worked better than expected, especially for problems with a large number of jobs. Refined algorithms such as the multi-level procedures probably lose too much time in the early stages while investigating several high-level neighbors in parallel and performing only small moves in the low-level neighborhood. This observation confirms the need for better constructive algorithms that provide better initial starting solutions.

- The genetic algorithm converges rather slowly, thereby requiring a larger number of generated solutions to produce competitive results. The variable threshold scheme in algorithm TA was chosen since linear threshold schemes performed worse. However, it could also be advantageous to apply variable 'cooling' schemes in algorithm SA. Both tested tabu search variants TS and TS-intens produced results of comparable quality, with TS in general slightly superior.

- While the problems with the three different hierarchical criteria were similar as far as the set of feasible solutions and the neighborhood relations are concerned, the comparative performance of the heuristics differed between the problems. This was mainly due to the range of the percentage differences between the objective function values of the starting and final solutions, which influenced the choice of the suitable procedure. For problems with large percentage differences in these values (i.e., C2 = Σw_iT_i) and a large number of jobs, the threshold variants TA-var and sometimes even TA-I could lead to better intermediate results than algorithm SA (which performed well for this problem type), since unnecessary increases in the objective function values are avoided in the initial phase.
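To make two recurring ingredients of these findings concrete, the following sketch (Python; our own illustration, referenced from the threshold accepting/simulated annealing item above) shows a randomly sampled pairwise interchange neighborhood and one way to realize an acceptance scheme that initially accepts an increase of a given percentage of the starting objective value with a fixed probability. The exponential acceptance rule and all parameter names are assumptions; the paper does not prescribe this exact form.

    import math
    import random

    def pairwise_interchange_neighbors(sequence, sample_size):
        """Randomly sample neighbors obtained by interchanging the jobs on two
        (not necessarily adjacent) positions of the sequence."""
        n = len(sequence)
        neighbors = []
        for _ in range(sample_size):
            i, j = random.sample(range(n), 2)
            s = list(sequence)
            s[i], s[j] = s[j], s[i]
            neighbors.append(s)
        return neighbors

    def initial_temperature(f_start, percent=5.0, accept_prob=0.5):
        """Choose T0 so that a deterioration of `percent` % of the starting
        objective value is accepted with probability `accept_prob` under the
        rule P(accept) = exp(-delta / T)."""
        delta = percent / 100.0 * f_start
        return -delta / math.log(accept_prob)

    def accept(delta, temperature):
        """Metropolis-type rule: always accept improvements; accept a
        deterioration delta > 0 with probability exp(-delta / temperature)."""
        return delta <= 0 or random.random() < math.exp(-delta / temperature)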



5. Conclusions

This paper developed and tested various local search heuristics for solving the two-stage flow shop problem with a secondary criterion. We designed experiments and analyzed the effects of the various parameters used in the local search algorithms for solving the given problem. Our experimental results indicate that local search algorithms can significantly improve the quality of a solution obtained by a polynomially bounded algorithm. Use of a pairwise interchange neighborhood was found to lead to better results than the shift neighborhood.

The multi-level search heuristics were found to be most appropriate for the flow time related objective functions, while simulated annealing worked best for the due date related objective function. Use of structural properties of an optimal solution of the specific problem generally improved the performance of the local search heuristics. The genetic algorithm was found to yield poor results unless it was augmented with some other local search heuristics.

We now describe some directions for future research. First, extension of the proposed local search heuristics to solve m-stage flow shop problems with a secondary criterion is both interesting and useful. Since the m-stage flow shop problem to minimize makespan is NP-hard, the definition of a feasible schedule requires modification. For the m-stage flow shop problems, we may want to minimize the secondary criterion subject to the constraint that the primary criterion deviates from its optimal value by no more than some constant. However, this definition is useful only if an approximation algorithm with this performance guarantee exists. But, then, the neighborhoods may not be connected. This makes the design of an appropriate local search heuristic difficult. Second, use of local search heuristics should be explored for finding the improvement potential of various polynomially bounded scheduling heuristics. Third, several other due date related criteria, like the earliness and tardiness penalties, should be considered in future research. Fourth, consideration of a different primary criterion (other than makespan) would be worthwhile. Fifth, future research could investigate the combination of several strategies into one algorithm. This could include changes in the neighborhood applied during the algorithm or a combination of different procedures. For instance, one could start with a descent algorithm (or with a variable acceptance scheme like in TA-var), and when the search reaches a point where finding better solutions becomes more difficult, one could change to refined search strategies like the multi-level approach to guide the search to 'better' regions of the solution space. Finally, further study of local search techniques to solve other types of bicriteria scheduling problems will be interesting and useful.
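Written out (our formalization of the constraint just described, not notation taken from the paper), such a schedule S would be sought as

    \min_{S} \; C_2(S) \quad \text{subject to} \quad C_{\max}(S) \le C^{*}_{\max} + c
    \qquad \text{or, in relative form,} \qquad C_{\max}(S) \le (1+\varepsilon)\, C^{*}_{\max},

where C*_max denotes the minimum makespan and c ≥ 0 (respectively ε ≥ 0) is the allowed deviation.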

References

[1] Dileepan P, Sen T. Bicriterion static scheduling research for a single machine. OMEGA 1988;16(1):53-9.
[2] Fry TD, Armstrong RD, Lewis H. A framework for single machine multiple objective sequencing research. OMEGA 1989;17(6):595-607.
[3] Hoogeveen H. Single-machine bicriteria scheduling. Ph.D. dissertation, Center for Mathematics and Computer Science, Amsterdam, The Netherlands, 1992.
[4] Lee C-Y, Vairaktarakis GL. Complexity of single machine hierarchical scheduling: a survey. Research Report No. 93-10, Department of Industrial and Systems Engineering, University of Florida, Gainesville, FL, USA, 1993.
[5] Nagar A, Haddock J, Heragu S. Multiple and bicriteria scheduling: a literature survey. European Journal of Operational Research 1995;81:88-104.



[6] Lawler EL, Lenstra JK, Rinnooy Kan AHG, Shmoys DB. Sequencing and scheduling: algorithms and complexity. In: Graves SC, Rinnooy Kan AHG, Zipkin PH, editors. Logistics of Production and Inventory. Amsterdam, The Netherlands: North-Holland, 1995. p. 445-522.
[7] Johnson SM. Optimal two- and three-stage production schedules with set-up times included. Naval Research Logistics Quarterly 1954;1:61-8.
[8] Garey MR, Johnson DS, Sethi R. The complexity of flow shop and job shop scheduling. Mathematics of Operations Research 1976;1:117-29.
[9] Chen CL, Bulfin RL. Complexity results for multi-machine multi-criteria scheduling problems. Proceedings of the Third Industrial Engineering Research Conference, 1994. p. 662-5.
[10] Rajendran C. Two-stage flow shop scheduling problem with bicriteria. Journal of the Operational Research Society 1992;43(9):871-84.
[11] Neppalli VR, Chen C-L, Gupta JND. Genetic algorithms for the two-stage bicriteria flow shop problem. European Journal of Operational Research 1996;95:356-73.
[12] Gupta JND, Palanimuthu N, Chen CL. Designing a tabu search algorithm for the two-stage flowshop problem with secondary criterion. Production Planning and Control: An International Journal 1999;10:251-65.
[13] Gupta JND, Neppalli VR, Werner F. Minimizing total flow time in a two-machine flowshop problem with minimum makespan. International Journal of Production Economics 2001;69:323-38.
[14] Danneberg D, Tautenhahn T, Werner F. A comparison of heuristic algorithms for flow shop scheduling problems with setup times and limited batch size. Mathematical and Computer Modelling 1999;29:101-26.
[15] Sotskov YN, Tautenhahn T, Werner F. Heuristics for permutation flow shop scheduling with batch setup times. OR Spektrum 1996;18:67-80.
[16] Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. Journal of Chemical Physics 1953;21:1087-92.
[17] Kirkpatrick S, Gelatt Jr CD, Vecchi MP. Optimization by simulated annealing. Science 1983;220:671-80.
[18] Cerny V. Thermodynamical approach to the traveling salesman problem. Journal of Optimization Theory and Applications 1985;45:41-51.
[19] Lundy M, Mees A. Convergence of an annealing algorithm. Mathematical Programming 1986;34:111-24.
[20] Dueck G, Scheuer T. Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing. Journal of Computational Physics 1990;90:161-75.
[21] Glass CA, Potts CN. A comparison of local search methods for flow shop scheduling. Annals of Operations Research 1996;63:489-509.
[22] Glover F. Tabu search, Part I. ORSA Journal on Computing 1989;1:190-206.
[23] Reeves CR. Modern heuristic techniques for combinatorial problems. Oxford: Blackwell Scientific, 1993.
[24] Martin O, Otto SW, Felten EW. Large-step Markov chain for the TSP incorporating local search heuristics. Operations Research Letters 1992;11:219-24.
[25] Brucker P, Hurink J, Werner F. Improving local search heuristics for some scheduling problems. Discrete Applied Mathematics 1996;65:87-107.
[26] Brucker P, Hurink J, Werner F. Improving local search heuristics for some scheduling problems. Part II. Discrete Applied Mathematics 1997;72:47-69.
[27] Lourenço HR. Job-shop scheduling: computational study of large-step optimization methods. European Journal of Operational Research 1995;83:347-64.
[28] Adams J, Balas E, Zawack D. The shifting bottleneck procedure for job shop scheduling. Management Science 1988;34:391-401.
[29] Gupta JND, Lauff V, Werner F. An enumerative algorithm for two-machine flow shop problems with earliness and tardiness penalties. Preprint 33/99, Otto-von-Guericke-Universität, FMA, Magdeburg, 1999.

Jatinder N. D. Gupta, Ph.D., CFPIM, is Professor of Management, Information and Communication Sciences, and Industry and Technology at Ball State University, Muncie, Indiana, USA. He holds a Ph.D. in Industrial Engineering (with specialization in Production Management and Information Systems) from Texas Tech University. Coauthor of a textbook in Operations Research, Dr. Gupta serves on the editorial boards of several national and international journals and has published numerous research and technical papers in such journals as International Journal of Information Management, Journal of Management Information Systems, Operations Research, IIE Transactions, Naval Research Logistics, European Journal of Operational Research, etc. His current research interests include information technology, scheduling, planning and control, organizational learning and effectiveness, systems education, and knowledge management.

Karsten Hennig studied Mathematics at the Otto-von-Guericke-University Magdeburg. He wrote his diploma thesis in the field of scheduling and has worked since 1999 at DG-Bank in Frankfurt (Germany).

Frank Werner is Professor of Mathematics and Operations Research at the Otto-von-Guericke-University Magdeburg, Germany. He received his Ph.D. in Mathematics from the University of Magdeburg. His research interests are discrete optimization and scheduling, particularly the development of exact and heuristic algorithms for scheduling problems. He has published about 70 papers in this field in such journals as the Annals of Operations Research, International Journal of Production Research, Computers and Operations Research, Discrete Applied Mathematics, European Journal of Operational Research, and Mathematical and Computer Modelling. He was a research fellow at the universities of Minsk (Belarus) and Osnabrück (Germany).
