A Pareto archive particle swarm optimization for multi-objective job shop scheduling

Deming Lei *

School of Automation, Wuhan University of Technology, 122 Luoshi Road, Wuhan, Hubei Province, People's Republic of China

Received 7 November 2006; received in revised form 5 November 2007; accepted 5 November 2007. Available online 22 November 2007.

Abstract

In this paper, we present a particle swarm optimization for the multi-objective job shop scheduling problem. The objective is to simultaneously minimize makespan and total tardiness of jobs. By constructing the corresponding relation between a real vector and the chromosome obtained by using the priority rule-based representation method, job shop scheduling is converted into a continuous optimization problem. We then design a Pareto archive particle swarm optimization, in which the global best position selection is combined with the crowding measure-based archive maintenance. The proposed algorithm is evaluated on a set of benchmark problems and the computational results show that the proposed particle swarm optimization is capable of producing a number of high-quality Pareto optimal scheduling plans.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Particle swarm optimization; Pareto optimal; Archive maintenance; Global best position; Multi-objective job shop scheduling

1. Introduction

The multi-objective job shop scheduling problem (MOJSSP) is one with multiple conflicting objectives, which mainly presents difficulties related to the objectives. If the objectives are combined into a scalar function by using weights, the difficulty is to assign a weight to each objective; if all objectives are optimized concurrently, the problem is to design an effective search algorithm, given the extra steps and the considerable increase in time complexity. In the past decades, the literature on MOJSSP has been notably sparser than the literature on the single-objective job shop scheduling problem (JSSP).
Computers & Industrial Engineering 54 (2008) 960–971
0360-8352/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2007.11.007
* Tel.: +86 2786534910. E-mail address: [email protected]

Sakawa and Kubota (2000) presented a genetic algorithm incorporating the concept of similarity among individuals by using Gantt charts for JSSP with fuzzy processing times and fuzzy due dates. The objective is to maximize the minimum agreement index, to maximize the average agreement index and to minimize the maximum fuzzy completion time. Ponnambalam, Ramkumar, and Jawahar (2001) proposed a multi-objective genetic algorithm to derive the optimal machine-wise priority dispatching

rules to resolve the conflict among the contending jobs in the Giffler and Thompson procedure (Giffler & Thompson, 1960) applied in job shop scheduling. The objective is to minimize the weighted sum of makespan, the total idle time of machines and the total tardiness. The weights assigned for combining the objectives into a scalar fitness are not constant. Esquivel et al. (2002) proposed an enhanced evolutionary algorithm with new multi-recombinative operators and an incest prevention strategy for single and multi-objective job shop scheduling problems. Kacem, Hammadi, and Borne (2002) presented a combination approach based on the fusion of fuzzy logic and multi-objective evolutionary algorithms for flexible job shop scheduling problems. Xia and Wu (2005) proposed a hybrid algorithm of particle swarm optimization (PSO) and simulated annealing for multi-objective flexible job shop scheduling problems. The hybrid algorithm makes use of PSO to assign operations on machines and a simulated annealing algorithm to arrange the operations on each machine.

The above-mentioned approaches optimize the weighted sum of the objective functions and can produce one or several optimal solutions. Some studies have attempted to simultaneously optimize all objectives and obtain a group of Pareto optimal solutions. Arroyo and Armentano (2005) presented a genetic local search algorithm with features such as preservation of dispersion in the population and elitism for flow shop scheduling problems, in such a way as to provide the decision maker with approximate Pareto optimal solutions. Lei and Wu (2006) developed a crowding measure-based multi-objective evolutionary algorithm for job shop scheduling. The proposed algorithm makes use of a crowding measure to adjust the external population and assign different fitness to individuals, and is applied to MOJSSP to minimize makespan and the total tardiness of jobs.

PSO has seldom been applied to JSSP since 1995, and the optimization capacity and advantages of PSO on JSSP have not been intensively considered. Compared with evolutionary algorithms, PSO has its own advantages for scheduling. For instance, it is unnecessary for PSO to design special crossover and mutation operators to avoid the occurrence of illegal individuals. The main obstacle to directly applying PSO to combinatorial optimization problems such as JSSP is its continuous nature.

To remedy the above drawback, this paper presents an effective approach to convert JSSP into a continuous optimization problem. Once JSSP is converted into a continuous problem, the direct application of PSO becomes possible. In addition, an effective multi-objective particle swarm optimization (MOPSO) is proposed for solving MOJSSP. The proposed algorithm combines the global best position selection with the external archive maintenance, whereas these two steps are considered separately in most MOPSOs.

The remainder of the paper is organized as follows. The basic concepts of multi-objective optimization are introduced in Section 2. Section 3 formulates JSSP with multiple objectives. In Section 4, we introduce standard PSO and describe the method which converts scheduling problems into continuous optimization ones. In Section 5, we describe a Pareto archive particle swarm optimization (PAPSO) for MOJSSP. The proposed algorithm is applied to a set of benchmark problems and the performances of three algorithms are compared in Section 6. Conclusions and remarks on future work are given in Section 7.

2. Basic concepts

The general multi-objective optimization problem is of the form

max y = f(X) = (f_1(X), f_2(X), ..., f_M(X))
subject to g_i(X) ≤ 0, i = 1, 2, ..., D    (1)

where X = (x_1, x_2, ..., x_n)^T is called the decision vector, X ∈ Θ ⊆ R^n, Θ is the search space, y ∈ Y is the objective vector and Y is the objective space. g_i is a constraint. For simplicity, we suppose that all f_k (k = 1, 2, ..., M) are greater than zero in this paper. If f_k ≤ 0, we replace f_k with f_k + s, where s is a big enough positive number.

The following basic concepts are often used in multi-objective optimization.

Definition 1: Let X_0 ∈ Θ be a decision vector.

1. X_0 is said to dominate a decision vector X_1 ∈ Θ (X_0 ≻ X_1) if and only if f_i(X_0) ≥ f_i(X_1) for all i = 1, 2, ..., M and f_i(X_0) > f_i(X_1) for at least one i ∈ {1, 2, ..., M}.

2. X_0 is said to be Pareto optimal if and only if ¬∃ X_1 ∈ Θ: X_1 ≻ X_0.
3. The Pareto optimal set PS of all Pareto optimal decision vectors: PS = {X_0 ∈ Θ | ¬∃ X_1 ∈ Θ, X_1 ≻ X_0}.
4. The Pareto optimal front PF of all objective function values corresponding to the decision vectors in PS: PF = {f(X) = (f_1(X), f_2(X), ..., f_M(X)) | X ∈ PS}.
5. X_0 is said to be non-dominated regarding a given set if X_0 is not dominated by any decision vector in the set.

A Pareto optimal decision vector cannot be improved in any objective without causing degradation in at least one other objective. When a decision vector is non-dominated on the whole search space, it is Pareto optimal.

3. Problem formulation

Determining an efficient schedule for the general shop problem has been the subject of research for more than 50 years. The elements of JSSP are a set of machines and a collection of jobs to be scheduled. The processing of a job on a certain machine is referred to as an operation. The processing time of each operation is fixed and known in advance. Each job passes each machine exactly once in a prescribed sequence and should be delivered before its due date.

The minimization of cost and the maximization of customer satisfaction are two main foci of practical applications. To reflect the real-world situation adequately, we formulate multi-objective job shop scheduling problems as two-objective ones which simultaneously minimize makespan and total tardiness. Makespan is the most frequently considered objective, and many efficient heuristics including tabu search and simulated annealing have been developed for minimum makespan. For tardiness objectives, several local search approaches and genetic algorithms have also been introduced. In this paper, we propose a PSO-based heuristic to minimize these objectives:

(1) Makespan C_max = max{C_i}, where C_i is the completion time of job i.
(2) Total tardiness of jobs T_tot = Σ_{i=1}^{n} max{0, L_i}, where L_i is the lateness of job i.

The assumptions considered in this paper are as follows:

Set-up times of machines and move times between operations are negligible;
Machines are independent of each other;
Jobs are independent of each other;
A machine cannot process more than one job at a time, and no job may be processed on more than one machine at a time;
There are no precedence constraints among the operations of different jobs.

4. Particle swarm optimization

PSO is a population-based stochastic optimization technique inspired by the choreography of a bird flock. This technique has good performance, low computational cost and easy implementation. Due to these advantages, PSO has attracted significant attention from researchers around the world since its introduction in 1995.

4.1. The standard PSO

The standard algorithm is of the following form:

v_{t+1} = w v_t + r_1 c_1 (P_t − X_t) + r_2 c_2 (G_t − X_t)    (2)
X_{t+1} = X_t + v_{t+1}    (3)
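As an illustrative sketch (not part of the paper's code), one per-particle application of Eqs. (2) and (3) can be written as follows; in the paper's notation, r1 and r2 are the learning factors while c1 and c2 are fresh uniform random numbers in [0, 1]:

```python
import random

def pso_update(x, v, p_best, g_best, w=0.5, r1=2.0, r2=2.0):
    """One velocity/position update per Eqs. (2) and (3).

    Paper notation: r1, r2 are learning factors; c1, c2 are
    uniform random numbers drawn from [0, 1].
    """
    new_x, new_v = [], []
    for xj, vj, pj, gj in zip(x, v, p_best, g_best):
        c1, c2 = random.random(), random.random()
        vj_next = w * vj + r1 * c1 * (pj - xj) + r2 * c2 * (gj - xj)  # Eq. (2)
        new_v.append(vj_next)
        new_x.append(xj + vj_next)                                    # Eq. (3)
    return new_x, new_v
```

Whether c1 and c2 are drawn once per particle or once per dimension is an implementation choice the text does not pin down; the sketch draws them per dimension.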

where w is an inertia weight, r_1 and r_2 are two learning factors, and c_1 and c_2 are random numbers following the uniform distribution on [0, 1]. v_t and X_t represent the velocity and position of a particle, respectively, at time t. P_t and G_t, respectively, denote the personal best position of a particle and the global best position at time t.

The procedure of PSO is as follows:

(1) Initialize a population of particles with random positions and velocities in the search space.
(2) Update the velocity and the position of each particle according to Eqs. (2) and (3).
(3) Map the position of each particle into solution space and evaluate its fitness value according to the desired optimization fitness function. At the same time, update P_t and G_t.
(4) If the stopping criterion is met, terminate the search; else go to (2).

The original PSO design is suited to a continuous solution space. For better solving combinatorial optimization problems, we propose an effective approach to handle JSSP by using PSO.

4.2. Handling JSSP using PSO

Two strategies can be used to apply PSO to JSSP. The first strategy is to deal with scheduling problems by using discrete PSO (Hu, Eberhart, & Shi, 2003). The second is to transform JSSP into a continuous problem and solve the latter by using PSO.

Generally, it is difficult to design a discrete PSO. The main difficulty is the redefinition of the position or velocity update method for JSSP. The redefinition is essential for the application of PSO; on the other hand, the low performance of discrete PSO mainly results from this redefinition. This is a paradox.

Compared with the first strategy, the second strategy has the following advantages: JSSP is easily converted into a continuous optimization problem; moreover, once the conversion is implemented, high-quality solutions can be obtained by using the various improvement methods of PSO. In this study, we adopt the second strategy and convert scheduling problems into continuous ones.

The Giffler and Thompson procedure is first described before the introduction of the conversion.

(1) Let t = 1, PS_t = Φ, determine S_t.
(2) Calculate C* = min{C(o_ij) | o_ij ∈ S_t} and record the corresponding machine j*.
(3) Define the conflict set c_t = {o_ij* ∈ S_t | r_ij* < C*}, choose o_uj* ∈ c_t, set PS_{t+1} = PS_t ∪ {o_uj*}, delete o_uj* from S_t, add the next operation of job u to S_t and form S_{t+1}.
(4) t = t + 1; go to (2) until S_t = Φ.

where Φ is an empty set, S_t is the set of operations which can be scheduled in the t-th iteration and PS_t is the set of operations which have been scheduled in the t-th iteration. o_ij denotes the operation of job i processed on machine j. C(o_ij) and r_ij, respectively, indicate the earliest completion time and the earliest beginning time of operation o_ij. c_t is the conflict set, which consists of all operations competing for the same machine.

There are many genetic representation methods for JSSP, such as operation-based, priority rule-based and job-based representations. The following conversion procedure between chromosome and real vector shows that the priority rule-based representation is one of the most suitable methods to handle job shop scheduling by using PSO.

With respect to the priority rule-based representation, for an n × m JSSP, the chromosome of a feasible scheduling plan is a string (p_1, p_2, ..., p_{n×m}) with n × m genes, in which each gene corresponds to a priority rule. In the Giffler and Thompson procedure, when more than one job contends for one machine, a conflict occurs. The gene p_ij means that its corresponding priority rule is used to eliminate the conflict occurring in the (i × j)-th iteration by choosing a job from the contending jobs.

Five priority rules, FCFS, LPT, SPT, CR and S/OPN, are chosen in terms of the chosen objectives. FCFS, LPT and SPT are processing time-based rules. CR and S/OPN are related to due dates and efficient in delivering on due dates. The combination of these rules may produce an effective scheduling plan and concurrently minimize the above-mentioned objectives. Table 1 shows the relation between a priority rule and its gene value.

Table 1
Value of gene and priority rule

Value of gene    Priority rule
0                FCFS (first come first served)
1                SPT (shortest processing time)
2                LPT (longest processing time)
3                CR (smallest critical rate)
4                S/OPN (smallest slack per number of operations remaining)

The above chromosome is converted into a real vector in the following way: the whole string is first divided into a number of substrings. The lengths of the substrings can be identical to or different from each other. Then each substring corresponds to a real variable.

An instance is used to illustrate the above procedure. For an 8 × 2 JSSP, the chromosome has 16 genes and is divided into four substrings. The length of each substring is 4:

3 3 4 0 | 4 1 2 3 | 3 1 2 4 | 1 2 3 4

Each substring has 625 possible combinations for p_ij ∈ {0, 1, 2, 3, 4}. For a substring h_1 h_2 h_3 h_4, the value of the corresponding real variable x is obtained from the following formula:

x = 0.01 × (5^(4−1) h_1 + 5^(4−2) h_2 + 5^(4−3) h_3 + 5^(4−4) h_4)    (4)

So we can specify the domain [0, 6.24] for each real variable x_i, i = 1, 2, 3, 4. For substring 3 3 4 0, x_1 = 4.70; for substring 4 1 2 3, x_2 = 5.38; for substring 3 1 2 4, x_3 = 4.14; and for substring 1 2 3 4, x_4 = 1.94.

When x_i, i = 1, 2, 3, 4 is assigned a new value, the corresponding substring can be obtained in the following way. If x_1 = 4.753:

Begin
  x_1 = ⌊x_1 × 100⌋ = 475
  For j = 1 to 4
    h_j = ⌊x_1 / 5^(4−j)⌋
    x_1 = x_1 − h_j × 5^(4−j)
End

where ⌊x⌋ indicates the biggest integer below x. Finally, x_1 = 4.753 corresponds to the substring 3 4 0 0. When all substrings of the corresponding variables are constructed, a new chromosome is obtained.

5. A Pareto archive particle swarm optimization

Archive maintenance and G_t selection are two main steps in MOPSO. These two steps are independently implemented in most cases. This paper combines these two steps and constructs a new MOPSO.

5.1. Main algorithm

PAPSO is outlined as follows:

1. t = 0. Initialize the initial swarm S_t, calculate the objective vector of each particle and include the non-dominated solutions in the archive A_t.
2. Determine the initial G_t and P_t for each particle.
3. Under the condition that particles fly within the search space, determine the new position and velocity of all particles and form S_{t+1}. Update P_t for each particle.
4. Implement the hybrid procedure of archive maintenance and G_t selection and produce A_{t+1}.
5. Perform mutation on some chosen archive members and maintain the archive A_{t+1} again.

6. t = t + 1. If the termination condition is met, terminate the search; else go to 3.

where S_t and A_t are, respectively, the particle swarm and the external archive at time t.

We adopt the method presented by Coello, Pulido, and Lechuga (2004) to determine P_t for each particle and propose the following approach to ensure that particles fly within the search space. Suppose a particle goes beyond the boundaries of a decision variable, that is, x_tj + v_tj > b_j or x_tj + v_tj < a_j:

If x_tj + v_tj > b_j, then v_{t+1,j} = h(b_j − x_tj) and x_{t+1,j} = x_tj + v_{t+1,j};
If x_tj + v_tj < a_j, then v_{t+1,j} = −h(x_tj − a_j) and x_{t+1,j} = x_tj + v_{t+1,j};

where X_t = (x_t1, x_t2, ..., x_tn), x_tj ∈ [a_j, b_j], a_j and b_j are the lower and upper boundaries of the j-th decision variable, j = 1, 2, ..., n, v_t = (v_t1, v_t2, ..., v_tn), and 0 < h < 1 is a constant.

In the next three subsections, the archive maintenance, the G_t selection procedure and the mutation in PAPSO are described in detail.

5.2. Archive maintenance

The external archive is used to store some of the non-dominated solutions produced in the search process of a multi-objective evolutionary algorithm (MOEA) or MOPSO. In most cases, a maximum size is specified for the external archive. When the archive size reaches its predetermined maximum value, the external archive must be maintained to decide which solutions can be inserted into the archive. Pareto archived evolution strategy (PAES) (Knowles & Corne, 2000) and strength Pareto evolutionary algorithm 2 (SPEA2) (Zitzler, Laumanns, & Thiele, 2001) update the archive in terms of the following principle: when meeting one of the following conditions, the new non-dominated solution becomes a member of the archive.

(1) The new solution dominates some members of the archive.
(2) The archive size is less than its maximum value.
(3) The archive size is equal to its maximum value and the new solution has a higher density value than at least one member of the archive.

The density-estimation metric is vital to the above approach and directly influences the diversity of the algorithm. Some density metrics have been introduced in MOEAs. In PAES, the whole objective space is divided into a number of grids. The number of solutions in a grid is the density of the grid. When the archive becomes full, a solution in a less crowded grid always preferably becomes a member of the archive. In SPEA2, the density of an individual is based on the distance between the individual and its k-th nearest individual, where k = sqrt(N + N'), N is the population scale and N' indicates the maximum size of the archive. In non-dominated sorting genetic algorithm 2 (Deb, Pratap, Agarwal, & Meyarivan, 2002), crowding distance is defined as the density-estimation metric. In this study, a density-estimation metric is introduced (Lei & Wu, 2006).

The distance d_ij between solutions X_i and X_j is defined as d_ij = sqrt(Σ_{k=1}^{M} [u_k(f_k(X_i) − f_k(X_j))]^2), where u_k is a constant chosen to make all u_k f_k close to each other. To decide the appropriate value of u_k, first we let all u_k = 1. Then the problem is optimized and some near-optimal solutions are obtained; max f_k for each objective is also obtained by using these near-optimal solutions. Finally, u_k is decided in the following way: u_1 = a > 0, u_k = u_{k−1} × (max f_{k−1} / max f_k) for k > 1.

Definition 2: Let Q be a set of solutions. For X_i ∈ Q, d_i^1 = min{d_ij | X_j ∈ Q, j ≠ i} and d_i^2 = min{d_ij | d_ij > d_i^1, X_j ∈ Q, j ≠ i}; then the crowding measure of X_i in Q is

c_i(Q) = (d_i^1 + d_i^2) / 2.

If there exist only two points in Q, then c_i(Q) is defined as the distance between these points. The crowding measure depicts the density of the other solutions surrounding a solution. A point in a crowded region of the objective space has a low crowding measure, while a solution in a sparse region is assigned a high crowding measure.

The density metric of PAES just describes that some solutions are located in the same region. The density metric in SPEA2 is defined based on the distance between two points in objective space. Crowding distance and crowding measure are both defined on the basis of distances among three solutions. However, for a boundary solution with the biggest or smallest function value of at least one objective function, its crowding distance is infinite while the corresponding crowding measure is finite.

5.3. The combination procedure of archive maintenance and G_t selection

A hybrid procedure to maintain the archive and select G_t is described in the following way.

(1) Assign all members of the archive A_t to A_{t+1}, find all non-dominated solutions in S_{t+1} and store them in a set ϑ.
(2) For each solution X_i^{t+1} ∈ ϑ, if X_i^{t+1} dominates some members of A_{t+1}, then substitute X_i^{t+1} for those dominated members and for the global best position of all particles in the set {j ∈ S_{t+1} | G_j^{t+1} ∈ {X_k^{t+1} : X_i^{t+1} ≻ X_k^{t+1}, X_k^{t+1} ∈ A_{t+1}}}.
(3) For each solution X_i^{t+1} ∈ ϑ with X_i^{t+1} ∉ A_{t+1}, first insert it into the archive A_{t+1} and then:
(3.1) If N_arc = N', remove a member X_l^{t+1} with the minimum crowding measure from A_{t+1}; if X_l^{t+1} ≠ X_i^{t+1}, substitute X_i^{t+1} for the global best position of all particles in {j ∈ S_{t+1} | G_j^{t+1} = X_l^{t+1}}.
(3.2) If N_arc < N':
  (1) Let F = {X_k^{t+1} ∈ A_{t+1} | np(X_k^{t+1}) > g}.
  (2) Select the solution X_k^{t+1} ∈ F with the minimum d_ik, replace the global best position of the particles in {j ∈ S_{t+1} | G_j^{t+1} = X_k^{t+1}} with X_i^{t+1} and remove X_k^{t+1} from F; repeat the above procedure until np(X_i^{t+1}) ≥ g or F = Φ. If F = Φ and np(X_i^{t+1}) < g, go to (1).

where N is the population scale, N' is the maximum size and N_arc is the actual size of the archive. G_j^{t+1} = X means that the global best position of particle j is X. np(X_t) denotes the number of particles whose global best position is X_t. g (∈ [0.025N, 0.05N]) is a constant. The cardinality of the set F is denoted by |F|. Φ is an empty set.

In single-objective PSO, particles always select the solution with the optimal fitness value as G_t. In PAPSO, if X_i^{t+1} dominates some archive members, it will replace those members and the global best position of some particles. This is the generalization of the single-objective method to the multi-objective case and is helpful to make PSO quickly approximate the true Pareto optimal front.

The above procedure also ensures that each archive member acts as the global best position of at least one particle. For a MOPSO with a small archive and a big population, if some archive members do not act as G_t of any particles, these archive members cannot participate in the new search of MOPSO and cannot guide particles to fly towards some new regions. Moreover, if the particle swarm just follows a part of the archive members, it is difficult for MOPSO to approximate the whole Pareto optimal front. Thus, the G_t selection approach of PAPSO is helpful to obtain high diversity performance.

5.4. Inclusion of mutation

The external archive greatly influences the performance of MOPSO. If the archive cannot be updated incessantly or the archive members are located only in a narrow region, the search may stagnate or just converge to a part of the Pareto optimal front. This motivates the introduction of a mutation in PAPSO to produce new non-dominated solutions and lead the evolution of the whole swarm.

In PAPSO, the mutation process is described as follows:

(1) Select some archive members, perform mutation on copies of these chosen members and store all new non-dominated solutions in a set X.
(2) Implement the hybrid procedure of archive maintenance and G_t selection again by using the solutions in set X. The new solutions excluded from the archive are thrown away.
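Step (1) of this mutation process can be sketched as follows; the Gaussian perturbation, the objective function `evaluate` and the parameter names are illustrative assumptions rather than the paper's operators, and only the construction of the set X is shown:

```python
import random

def dominates(fa, fb):
    """fa dominates fb in the maximization sense of Definition 1."""
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

def mutate_archive(archive, evaluate, n_picks=3, sigma=0.1, rng=random):
    """Step (1): perturb copies of some chosen archive members and keep
    the new non-dominated solutions as the set X.  Step (2) would feed X
    back into the hybrid archive-maintenance/G_t-selection procedure."""
    picks = rng.sample(archive, min(n_picks, len(archive)))
    candidates = [[xj + rng.gauss(0.0, sigma) for xj in m] for m in picks]
    # keep only the candidates that no other candidate dominates
    X = [c for c in candidates
         if not any(dominates(evaluate(o), evaluate(c))
                    for o in candidates if o is not c)]
    return X
```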

  • the a

    the nal rank value is determined by the summation of the strengths of the points that dominate the

    in Table 4.

    qY nY i : 5

    D. Lei / Computers & Industrial Engineering 54 (2008) 960971 967i ntot

    Inertia weight is rst tested between 0.4 and 0.9 in increments of 0.1. When inertial weight is tested, twolearning factors are set to be 2.0. As shown in Table 2, the setting w = 0.4, 0.5 is superior to other settings.Especially for w = 0.5, PAPSO with this setting produces more non-dominated solutions than PAPSO withany other settings for 9 of 17 instances. The setting w = 0.6, 0.7 is notably worse than other settings. Thentwo learning factors are tested between 1.5 and 2.0 in increments of 0.1 and two factors are always assignedA metric is dened to test the performance of PAPSO with dierent parameters: rst, for each algorithm,the non-dominated solutions are chosen from all external archives obtained by the algorithm in all runs andstored in a set H; then the non-dominated solutions are chosen from the set H. Suppose that ntot is the totalnumber of non-dominated solutions in H, if algorithm Yi produces nY i solutions of ntot, the metric qY i of algo-rithm Yi is the ratio of nY i to ntot,current individual. Meanwhile, a kth nearest neighbor density estimation method is applied to obtainthe density value of each individual. The nal tness value is the sum of rank and density values. Inaddition, a truncation method is used in the elitist archive in order to maintain a constant number ofindividuals in the archive.

    InMOPSO, a secondary repository of particles is used by other particles to guide their own ight and theadaptive grid method of PAES is used to maintain the external repository. Meanwhile, roulette-wheel methodis developed to select the global best position for particle.

    Eighteen JSSPs are used to test the performance of PAPSO, SPEA2 andMOPSO. For problems with 10jobs, the due date of job 2, 3 is 1.5 times the total processing time of the corresponding job, the deadline of job10 is equal to its total processing time and the due date of other jobs is twice the corresponding total process-ing time. For the problem with 20 jobs, the due date of job 2, 3, 11 is 1.5 times the corresponding total pro-cessing time, the deadline of job 20 is equal to its total processing time and the due date of other jobs is twicethe corresponding total processing time.

    6.2. Sensitivity analyses

    In this section, the impact of inertia weight and two learning factors on the performance of PAPSO arediscussed and the computational results are shown in Tables 2 and 3. Other parameters of PAPSO are shown6. Simulation results

    In this section, we rst describe SPEA2 and a MOPSO (Coello et al., 2004) calledMOPSO for simplicityand test problems. Then we perform sensitivity analyses and nally compare the computational results of threealgorithms.

    6.1. Algorithm description and test problems

    In SPEA2, each individual in the main population and the external archive is assigned a strengthvalue, which incorporates both dominance and density information. On the basis of the strength value,the sarchive members and exists in the whole search process of PAPSO.In sub-step (2), the hybrid procedure having been shown in Section 5.3 starts from step 2.Mutation operator has been used in MOPSO. Coello et al. (2004) proposed a method of merging mutation

    and MOPSO. Their mutation was applied not only to the particles of swarm, but also to the range of eachdecision variable. At the beginning, all particles are aected by mutation and then the number of particlesaected by mutation will diminish over time in terms of a nonlinear function. Our mutation is applied tome value and inertial weight is 0.5 in the test procedure. From Table 3, when two learning factors is

  • 968 D. Lei / Computers & Industrial Engineering 54 (2008) 960971Table 2The computational results of PAPSO with dierent inertia weightsequal to 1.8, 1.9 and 2.0, the performance of PAPSO does not signicantly vary with these factors and is betterthan that of PAPSO with learning factors of 1.5, 1.6 or 1.7. Thus, we choose inertia weight of 0.5 and twolearning factors of 2.0.

    Table 3The computational results of PAPSO with dierent learning factors

    Problem 1.5 1.6 1.7 1.8 1.9 2.0

    FT10 0.000 0.000 0.417 0.333 0.25 0.000FT20 0.000 0.308 0.000 0.308 0.231 0.153ABZ5 0.167 0.000 0.000 0.333 0.000 0.500ABZ6 0.200 0.133 0.133 0.266 0.068 0.200ABZ7 0.500 0.125 0.125 0.000 0.125 0.125ABZ8 0.133 0.534 0.200 0.000 0.000 0.133ORB1 0.182 0.364 0.000 0.182 0.000 0.232ORB2 0.000 0.000 0.143 0.143 0.286 0.428ORB3 0.118 0.118 0.000 0.000 0.646 0.118ORB4 0.071 0.143 0.071 0.215 0.357 0.143ORB5 0.25 0.000 0.25 0.000 0.000 0.500LA26 0.152 0.000 0.30 0.000 0.000 0.538LA27 0.000 0.000 0.333 0.333 0.334 0.000LA28 0.20 0.000 0.300 0.000 0.100 0.400LA14 0.000 0.25 0.000 0.125 0.625 0.000LA15 0.222 0.389 0.000 0.389 0.000 0.000LA16 0.066 0.268 0.133 0.467 0.000 0.066

    Table 2
    The computational results of PAPSO with different inertia weights

    Problem   w = 0.4   w = 0.5   w = 0.6   w = 0.7   w = 0.8   w = 0.9

    FT10      0.333     0.000     0.133     0.133     0.133     0.268
    FT20      0.900     0.100     0.000     0.000     0.000     0.000
    ABZ5      0.167     0.067     0.000     0.333     0.333     0.100
    ABZ6      0.154     0.231     0.154     0.154     0.0776    0.231
    ABZ7      0.285     0.000     0.000     0.143     0.571     0.000
    ABZ8      0.084     0.416     0.500     0.100     0.134     0.166
    ORB1      0.250     0.417     0.1665    0.1665    0.000     0.000
    ORB2      0.250     0.250     0.125     0.125     0.125     0.125
    ORB3      0.166     0.250     0.000     0.000     0.250     0.334
    ORB4      0.292     0.333     0.000     0.167     0.167     0.041
    ORB5      0.143     0.571     0.000     0.143     0.143     0.000
    LA26      0.231     0.538     0.000     0.0777    0.154     0.000
    LA27      0.000     0.000     0.000     1.000     0.000     0.000
    LA28      0.143     0.571     0.000     0.000     0.000     0.286
    LA14      0.182     0.000     0.000     0.000     0.818     0.000
    LA15      0.388     0.388     0.056     0.112     0.000     0.056
    LA16      0.125     0.1875    0.0625    0.125     0.375     0.125

    Table 4
    Parameter settings of three algorithms

    PAPSO:  N = 80, N0 = 20, w = 0.5, r1 = r2 = 2, h = 0.8, u1 = 1
    MOPSO:  N = 80, N0 = 20, w = 0.5, r1 = r2 = 2, d = 30
    SPEA2:  N = 80, N0 = 20, pc = 0.9, pm = 0.1

    pc, crossover probability; pm, mutation probability; d, the number of subdivisions of the range of each objective.

  6.3. Results and discussions

    Metric C (Zitzler & Thiele, 1999) is used to compare the approximate Pareto optimal sets obtained by the three algorithms. C(L, B) measures the fraction of members of B that are dominated by members of L.

    C(L, B) = |{b ∈ B : ∃ h ∈ L, h ≻ b}| / |B|    (6)

    For a multi-objective optimization algorithm with elitism, the ratio of population size to the maximum size of the archive is frequently set to 4:1 to maintain an adequate selection pressure for elite solutions, so we choose a population size of 80 and a maximum archive size of 20. We use d = 30 following the recommendation of Coello et al. (2004). We found that SPEA2 with a crossover probability of 0.9 and a mutation probability of 0.1 yielded the best results. So, we choose the parameters of Table 4 for the three algorithms.
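The dominance relation and the coverage metric C of Eq. (6) can be sketched in a few lines of Python; the function names and the small (makespan, total tardiness) example sets below are illustrative, not from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(L, B):
    """Metric C(L, B): fraction of members of B dominated by some member of L."""
    if not B:
        return 0.0
    return sum(1 for b in B if any(dominates(h, b) for h in L)) / len(B)

# illustrative (makespan, total tardiness) pairs
L = [(10, 5), (12, 3)]
B = [(11, 6), (9, 4), (13, 3)]
print(coverage(L, B))  # 2 of the 3 members of B are dominated, i.e. 0.666...
```

Note that C is not symmetric: C(L, B) and C(B, L) must both be reported, which is exactly how Table 5 is organized.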

    All algorithms use the priority rule-based representation. The two kinds of MOPSO adopt the approach in Section 4 to make a chromosome correspond to a real vector. We use two-point crossover and two-point mutation in SPEA2. Two-point crossover is described below: first randomly select two genes from the chromosome and then exchange the genes between the chosen genes. Two-point mutation also has two steps: first stochastically choose two genes and then alter the values of the chosen genes. When the number of objective function evaluations reaches 20,000, the algorithm terminates the search. Each algorithm is randomly run 20 times for each instance.

    In this study, polynomial mutation (Deb & Agrawal, 1995) with ηm = 20 is applied to one real variable corresponding to a sub-string of the chromosome of each archive member. Polynomial mutation is described below. If polynomial mutation is performed on some genes of an individual X = (x1, x2, ..., xn), take xi as an instance; a new gene is produced in the following way:

    x'i = xi + (bi − ai)δ    (7)

    where

    δ = (2u)^(1/(ηm+1)) − 1            if u < 0.5,
    δ = 1 − (2(1 − u))^(1/(ηm+1))      otherwise.

    u is a random number following the uniform distribution on [0,1], and ηm is a constant, often set to 20. [ai, bi] is the domain of the ith decision variable of the problem. For JSSP, ai and bi are determined by the converting procedure shown in Section 4.2, and [ai, bi] is the domain of the variable xi; for the instance in Section 4.2, ai = 0, bi = 6.24, i = 1, 2, 3, 4.

    If Y1, Y2, Y3 are used to denote PAPSO, MOPSO and SPEA2, then C(Yi, Yj) indicates the fraction of all non-dominated solutions stored in the archive of Yj over 20 runs that are dominated by the non-dominated solutions obtained by Yi over all runs. Table 5 shows the computational results. In Table 5, the data in all columns except the first are related to C(Yi, Yj) and consist of two parts: the first part is the value of C(Yi, Yj) and the second is the number of non-dominated solutions finally obtained by Yj after the archive members of Yi have been compared with those of Yj.

    Table 5
    The comparative results of the three algorithms

    Problem   C(Y1,Y2)    C(Y2,Y1)    C(Y3,Y2)    C(Y2,Y3)    C(Y1,Y3)    C(Y3,Y1)
    FT06      0.000, 5    0.000, 5    0.000, 5    0.000, 5    0.000, 5    0.000, 5
    FT10      0.785, 2    0.000, 10   1.000, 0    0.000, 6    0.000, 6    0.400, 6
    FT20      1.000, 0    0.000, 6    1.000, 0    0.333, 6    0.667, 3    0.000, 8
    ABZ5      1.000, 0    0.000, 6    0.800, 2    0.250, 3    0.750, 1    0.166, 5
    ABZ6      0.910, 1    0.273, 8    0.818, 2    0.230, 10   0.615, 5    0.353, 7
    ABZ7      0.555, 4    0.000, 15   1.000, 0    0.000, 6    0.000, 6    0.665, 5
    ABZ8      0.800, 2    0.200, 10   0.900, 1    0.166, 10   0.500, 6    0.250, 9
    ORB1      0.916, 1    0.210, 14   0.833, 2    0.200, 8    0.400, 6    0.235, 13
    ORB2      1.000, 0    0.000, 6    0.750, 1    0.200, 3    0.800, 1    0.000, 6
    ORB3      0.818, 2    0.143, 10   1.000, 0    0.230, 10   0.380, 8    0.531, 6
    ORB4      1.000, 0    0.060, 14   1.000, 0    0.150, 17   0.650, 7    0.200, 12
    ORB5      1.000, 0    0.000, 8    0.909, 1    0.400, 3    0.800, 1    0.000, 8
    LA26      0.667, 3    0.000, 8    0.445, 5    0.181, 9    0.363, 7    0.000, 8
    LA27      1.000, 0    0.000, 7    0.665, 3    0.143, 6    0.571, 3    0.000, 7
    LA28      1.000, 0    0.000, 6    0.500, 4    0.166, 10   1.000, 0    0.000, 6
    LA14      0.888, 1    0.200, 8    1.000, 0    0.000, 10   0.000, 10   0.900, 1
    LA15      0.909, 1    0.000, 11   0.727, 3    0.230, 10   0.384, 8    0.181, 9
    LA16      1.000, 0    0.071, 13   0.777, 2    0.076, 12   0.615, 5    0.143, 12
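The polynomial mutation of Eq. (7) can be sketched as the following Python function. The clipping of the result to [a, b] is an added safeguard, since Eq. (7) alone can step outside the domain; the function name and the deterministic test value are illustrative.

```python
import random

def polynomial_mutation(x, a, b, eta_m=20.0, rng=random):
    """Polynomial mutation (Deb & Agrawal, 1995) of one real variable x in [a, b].

    A perturbation delta is drawn from a polynomial distribution controlled by
    eta_m (larger eta_m -> smaller expected steps), then scaled by the domain
    width (b - a), following Eq. (7)."""
    u = rng.random()  # uniform on [0, 1)
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    # clip to the variable's domain (safeguard, not part of Eq. (7) itself)
    return min(max(x + (b - a) * delta, a), b)
```

With the domain [0, 6.24] from the instance of Section 4.2, repeated calls keep the mutated variable inside the domain, and u = 0.5 gives delta = 0, i.e. the variable is unchanged.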

    As shown in Table 5, PAPSO produces more non-dominated solutions than MOPSO for all problems except FT06, and most of the archive members obtained by MOPSO are dominated by archive members of PAPSO. PAPSO thus has notably better performance than MOPSO. MOPSO is also inferior to SPEA2. Compared with SPEA2, PAPSO obtains better computational results for 13 of the 18 problems; for nine problems (FT20, ABZ5, LA26–28, ORB2, ORB4, ORB5 and LA16), C(Y1, Y3) − C(Y3, Y1) > 0.35. On the other hand, SPEA2 only performs better than PAPSO for FT10, ABZ7, ORB3 and LA14. Thus, PAPSO can obtain better solutions than the other algorithms.

    The density-estimation metric in MOPSO only shows that a group of solutions is located in a grid, and archive maintenance based on this metric makes the archive members distribute over a narrow region of the Pareto optimal front. Moreover, the global best position selection also makes some of the particles in the repository unable to guide the flight of the particles in the population, so MOPSO fails to approximate some parts of the Pareto front. Thus, MOPSO has low performance in job shop scheduling.

    The mutation of PAPSO is performed on archive members, while the mutation of SPEA2 is performed on individuals of the population; as a result, the former generates more non-dominated solutions more easily than the latter. This is the main reason that PAPSO outperforms SPEA2 for most of the problems. On the other hand, SPEA2 has a complicated procedure of archive maintenance and fitness assignment, while the structure of PAPSO is simple. As a result, PAPSO is more attractive than SPEA2 in the job shop scheduling case.

    7. Conclusions

    We have proposed an effective method to convert JSSP into a continuous optimization problem. This is a new path to apply PSO to the scheduling problem. We also presented a PAPSO for MOJSSP. Unlike previous works, PAPSO combines the global best position selection with archive maintenance. The effectiveness of PAPSO was tested on 18 benchmark problems. The computational results show that PAPSO performs better than MOPSO and obtains better computational results than SPEA2 for most of the problems.

    In previous research, the application of PSO to combinatorial optimization problems such as JSSP was seldom investigated. The main contribution of this study is to provide an effective path to apply PSO to JSSP. Unlike discrete PSO, the proposed path is simple and easily implemented. Moreover, the existing improvement strategies of PSO can be directly applied to obtain high-quality solutions. In the near future, we will further discuss this new path to handle multi-objective scheduling with PSO and carry out research into the application of MOPSO to other scheduling problems such as flexible job shop scheduling.

    Acknowledgements

    This research is supported by the Hubei Provincial Science and Technology Department of China under science foundation project grant 2007ABA332. The authors also want to express their deepest gratitude to the anonymous reviewers for their incisive and seasoned suggestions.

    References

    Arroyo, J. E. C., & Armentano, V. A. (2005). Genetic local search for multi-objective flow shop scheduling problems. European Journal of Operational Research, 167, 717–738.

    Coello, C. A. C., Pulido, G. T., & Lechuga, M. S. (2004). Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 256–279.

    Deb, K., & Agrawal, R. B. (1995). Simulated binary crossover for continuous search space. Complex Systems, 9, 115–148.

    Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182–197.

    Esquivel, S., Ferrero, S., Gallard, R., Salto, C., Alfonso, H., & Schutz, M. (2002). Enhanced evolutionary algorithms for single and multi-objective optimization in the job shop scheduling problem. Knowledge-Based Systems, 15, 13–25.

    Giffler, B., & Thompson, G. L. (1960). Algorithms for solving production scheduling problems. Operations Research, 8, 487–503.

    Hu, X., Eberhart, R. C., & Shi, Y. (2003). Swarm intelligence for permutation optimization: A case study of N-queens problem. Proceedings of the IEEE Swarm Intelligence Symposium, 243–246.

    Kacem, I., Hammadi, S., & Borne, P. (2002). Approach by localization and multi-objective evolutionary optimization for flexible job shop scheduling problems. IEEE Transactions on Systems, Man and Cybernetics, Part C, 32(1), 1–13.

    Knowles, J. D., & Corne, D. W. (2000). Approximating the non-dominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2), 149–172.

    Lei, D., & Wu, Z. (2006). Crowding-measure-based multi-objective evolutionary algorithm for job shop scheduling. International Journal of Advanced Manufacturing Technology, 30(1–2), 112–117.

    Ponnambalam, S. G., Ramkumar, V., & Jawahar, N. (2001). A multi-objective genetic algorithm for job shop scheduling. Production Planning and Control, 12(8), 764–774.

    Sakawa, M., & Kubota, R. (2000). Fuzzy programming for multi-objective job shop scheduling with fuzzy processing time and fuzzy due date through genetic algorithm. European Journal of Operational Research, 120, 393–407.

    Xia, W., & Wu, Z. (2005). An effective hybridization approach for multi-objective flexible job-shop scheduling. Computers and Industrial Engineering, 48(2), 409–425.

    Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. Swiss Federal Institute of Technology, Lausanne, Switzerland, Tech. Rep. TIK-Rep, 103.

    Zitzler, E., & Thiele, L. (1999). Multi-objective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4), 257–271.

