
Uniform parallel machine scheduling with resource consumption constraint




Accepted Manuscript

Uniform parallel machine scheduling with resource consumption constraint

Wei-Chang Yeh, Mei-Chi Chuang, Wen-Chiung Lee

PII: S0307-904X(14)00484-3
DOI: http://dx.doi.org/10.1016/j.apm.2014.10.012
Reference: APM 10164

To appear in: Appl. Math. Modelling

Received Date: 7 March 2012
Revised Date: 11 June 2014
Accepted Date: 2 October 2014

Please cite this article as: W-C. Yeh, M-C. Chuang, W-C. Lee, Uniform parallel machine scheduling with resource consumption constraint, Appl. Math. Modelling (2014), doi: http://dx.doi.org/10.1016/j.apm.2014.10.012

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Uniform parallel machine scheduling with resource consumption constraint

Wei-Chang Yeh(a), Mei-Chi Chuang(a), and Wen-Chiung Lee(b,*)

(a) Integration and Collaboration Laboratory, Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
(b) Department of Statistics, Feng Chia University, Taichung, Taiwan

March 07, 2012

Abstract:

We consider the makespan problem on uniform parallel machines, given that some resource consumption cannot exceed a certain level. Several meta-heuristic methods are proposed to generate approximate solutions. Computational results are also provided to demonstrate the performance of the proposed heuristic algorithms.

Keywords: Scheduling, uniform parallel machines, makespan, genetic algorithm, particle swarm optimization, simplified swarm optimization.

*Corresponding author. E-mail: [email protected]. Tel: 886-4-24517250x4016. Fax: 886-4-24517092.


1. Introduction

Pinedo [1] mentioned that parallel-machine problems are worth discussing from both the mathematical and the applied aspects. In the scientific literature, the makespan is frequently studied because of its many potential applications. Due to global warming, how to manage natural resources efficiently and reduce carbon emissions has become an important issue. Moreover, natural resources are used excessively in many situations; for instance, there is excessive usage of water in the liquid crystal display industry and air pollution in the petrochemical industry. In high-tech manufacturing, newly developed machines usually have faster processing speeds. Motivated by this, we study the uniform parallel machine problem to minimize the makespan given a bound on the resource consumption.

Parallel machine scheduling has received considerable attention in the past decades. In this paper, we only survey the research on the makespan minimization problem. It is a classical NP-hard problem, as proven by Garey and Johnson [2]. Some of the earlier works were due to McNaughton [3], and since then many researchers have devoted attention to this problem; Cheng and Sin [4] and Mokotoff [5] surveyed the related problems. Recently, Lee et al. [6] proposed a simulated annealing (SA) algorithm that uses the longest processing time first (LPT) sequence as its initial sequence.

Ji and Cheng [7] considered the makespan problem where jobs from several customers are served according to a grade of service. A polynomial-time approximation algorithm is obtained when the number of machines is fixed. Koulamas and Kyparisis [8] modified the longest processing time algorithm for the makespan problem on two uniform parallel machines. Ji and Cheng [9] introduced simple linear deterioration into parallel-machine scheduling. They showed that several problems are strongly NP-hard when the number of machines is arbitrary and NP-hard in the ordinary sense when the number of machines is fixed. Fanjul-Peyro and Ruiz [10] studied the problem on unrelated parallel machines and developed a set of meta-heuristic algorithms based on simple iterated greedy local search; they also showed that their algorithms are, most of the time, statistically better than the existing methodologies. Janiak and Rudek [11] introduced a concept of multi-ability learning into parallel machines and derived optimal solutions for some special cases of the makespan problem. Fanjul-Peyro and Ruiz [12] developed a set of metaheuristics that utilize size-reduction of the original assignment problem; their tests showed that the algorithms perform better on most of the benchmark instances. Cheng et al. [13] studied the case where jobs arrive in batches and analyzed the computational complexity of the total completion time and the makespan problems. Huang and Wang [14] studied the case where jobs might deteriorate and provided optimal solutions for two multiple-objective functions.


On the other hand, many researchers have devoted attention to scheduling problems with resource allocation. Rudek and Rudek [15] considered the impact of resource allocation and the aging effect, where the objective is to minimize a time criterion under a given resource consumption, or to minimize the resource consumption under a given time criterion. Wang and Wang [16] considered some single-machine problems in which the job processing times are starting-time dependent and resource-allocation dependent, and they considered two multi-objective criteria. Readers can refer to [17-20] for further resource allocation studies.

In this paper, we utilize several heuristic algorithms to solve the uniform parallel machine scheduling problem to minimize the makespan with the constraint that the total resource consumption cannot exceed a certain amount. The rest of the paper is organized as follows. In Section 2, we describe the problem. In Section 3, we describe several heuristic algorithms. In Section 4, we present the simulation experiments, and in the last section, we present the conclusion.

2. Problem formulation

The considered problem is described as follows. There are $n$ independent jobs $J = \{J_1, \ldots, J_n\}$ ready to be processed on $m$ uniform parallel machines $M = \{M_1, \ldots, M_m\}$. The processing time of $J_j$ is $p_j$, the speed of $M_i$ is $s_i$, and the cost per unit time of $M_i$ is $\beta_i$. Given that the total cost cannot exceed a certain level $B$, the objective is to find a schedule that minimizes the makespan $C_{\max}$. The proposed problem can be formulated as follows:

Min $C_{\max}$

subject to

$\sum_{i=1}^{m} x_{ij} = 1$ for $j = 1, \ldots, n$,

$\sum_{j=1}^{n} p_j x_{ij}/s_i \le C_{\max}$ for $i = 1, \ldots, m$,   (1)

$\sum_{i=1}^{m} \sum_{j=1}^{n} \beta_i p_j x_{ij}/s_i \le B$,

where $x_{ij}$ is 1 if job $J_j$ is assigned to machine $M_i$, and 0 otherwise.

3. The algorithms

The parallel machine makespan problem is NP-hard even without any constraint [2]. Meanwhile, meta-heuristic algorithms have served as methods to derive approximate solutions for many complex problems, including scheduling and sequencing [21-27]. In this paper, we adopt three heuristic algorithms: the genetic algorithm, the particle swarm optimization algorithm, and the simplified swarm optimization algorithm.

3.1 The genetic algorithm


Goldberg [28] described the classical scheme of a genetic algorithm (GA). The key operators are crossover and mutation. Parents with better chromosomes have higher probabilities of being selected and of producing offspring that share some features from each parent. The mutation operator diversifies the population to prevent premature convergence. The components are given as follows.

Step 3.1.1 Representation of the solution

The integer coding method is used in this study. If an instance has n jobs, then a chromosome has n random integers ranging from 1 to m, where each gene corresponds to the machine to which the job is assigned. For instance, the chromosome (2, 3, 1, 1, 2, 3) of a 6-job, 3-machine problem represents the assignment in which machine 1 processes jobs 3 and 4, machine 2 processes jobs 1 and 5, and machine 3 processes jobs 2 and 6.
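As a quick illustration of this encoding (a minimal sketch; the helper name is ours, not the paper's), a chromosome can be decoded into per-machine job lists as follows:

```python
def decode(chromosome, m):
    """Group job indices (1-based) by the machine each gene assigns them to."""
    jobs_on = {i: [] for i in range(1, m + 1)}
    for job, machine in enumerate(chromosome, start=1):
        jobs_on[machine].append(job)
    return jobs_on

print(decode((2, 3, 1, 1, 2, 3), m=3))
# {1: [3, 4], 2: [1, 5], 3: [2, 6]}
```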

Step 3.1.2 Initialization

The size of the initial population (N) is a key factor in the computational effort. A large population size makes it easier to obtain better solutions, but on the other hand it takes more time.

Step 3.1.3. Fitness function

In GA, the fitness function is usually the reciprocal of the objective value to reflect the relative superiority or inferiority of the solutions if there are no further constraints. If a solution is infeasible, we add a penalty [29], and the objective function of chromosome k is

$h_k = \max_{1 \le i \le m} \sum_{j=1}^{n} p_j x_{ij}/s_i + r\left[\min\left(0,\; B - \sum_{j=1}^{n}\sum_{i=1}^{m} \beta_i p_j x_{ij}/s_i\right)\right]^2$   (2)

where r is the penalty.
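A minimal Python sketch of this penalized objective (our own illustration, reusing the assignment-vector convention of the earlier sketch rather than the 0/1 matrix $x_{ij}$, which is equivalent):

```python
def penalized_objective(assignment, p, s, beta, B, r=1.5):
    """Objective h_k of Equation (2): makespan plus a quadratic penalty for
    exceeding the resource budget B (the penalty term is zero when feasible)."""
    m = len(s)
    load = [0.0] * m
    cost = 0.0
    for j, i in enumerate(assignment):
        load[i] += p[j] / s[i]
        cost += beta[i] * p[j] / s[i]
    violation = min(0.0, B - cost)       # negative only when the budget is exceeded
    return max(load) + r * violation ** 2


def fitness(assignment, p, s, beta, B, r=1.5):
    """Fitness f_k = 1 / h_k used later by the roulette wheel selection."""
    return 1.0 / penalized_objective(assignment, p, s, beta, B, r)
```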

Step 3.1.4. Crossover

One-point crossover is used for this problem. We select one point randomly and exchange the genes of the parents to produce a new offspring. For instance, parents with chromosomes (2, 1, 3, 2, 3) and (3, 2, 1, 3, 4) generate the offspring (2, 1, 1, 3, 4) if the cut-point is 2. The crossover rate is denoted as $r_c$.
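A possible implementation sketch of the one-point crossover (illustrative, not the authors' code):

```python
import random

def one_point_crossover(parent1, parent2):
    """Splice the head of parent1 onto the tail of parent2 at a random cut-point,
    e.g. (2,1,3,2,3) and (3,2,1,3,4) with cut-point 2 give (2,1,1,3,4)."""
    cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:]
```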

Step 3.1.5. Mutation

In this study, a random gene between 1 and n is chosen and a random integer between 1 and m is assigned to it. For instance, a mutation of (2, 1, 1, 2, 3) to (2, 3, 1, 2, 3) means that job 2 is moved from machine 1 to machine 3. The mutation rate is denoted as $r_m$.
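A corresponding sketch of the mutation operator (again illustrative):

```python
import random

def mutate(chromosome, m):
    """Reassign one randomly chosen job to a randomly chosen machine (1..m)."""
    genes = list(chromosome)
    j = random.randrange(len(genes))      # which job to move
    genes[j] = random.randint(1, m)       # new machine for that job
    return tuple(genes)
```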

Step 3.1.6. Selection

In this study, the standard roulette wheel approach is used. For each chromosome k, its selection probability $p_k$ is

$p_k = f_k \Big/ \sum_{j=1}^{N} f_j$   (3)

where the fitness value $f_k = 1/h_k$ is obtained from Equation (2).
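A minimal roulette-wheel selection sketch based on Equation (3) (illustrative; random.choices performs the fitness-proportional draw):

```python
import random

def roulette_select(population, fitnesses, k=2):
    """Draw k chromosomes with probability proportional to their fitness values."""
    total = sum(fitnesses)
    probs = [f / total for f in fitnesses]    # p_k of Equation (3)
    return random.choices(population, weights=probs, k=k)
```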


Step 3.1.7. Termination

The GA is stopped after $n_T$ generations.

3.2 The particle swarm optimization algorithm

Kennedy and Eberhart [30] proposed the particle swarm optimization (PSO) technique based on the collective behaviors of animal societies. The implementation of PSO is as follows.

Step 3.2.1: Coding

We use real number coding. That is, we generate n uniform random numbers between 0 and m for a problem with n jobs, where the rounded integer represents the machine to which the job is assigned.

Step 3.2.2: Evaluate the fitness value

As in Step 3.1.3, the particle's fitness value is given by Equation (2) to account for the constraint.

Step 3.2.3: Update the local and the global best

In this step, the best location found so far by each particle and the corresponding local best fitness value are updated. Moreover, the best location found by the population and the global best fitness value are also updated.

Step 3.2.4: Calculate each particle’s velocity and the location

$v_{ij}^{t} = w v_{ij}^{t-1} + c_1 r_1^{t-1}\,(p_{ij}^{t-1} - x_{ij}^{t-1}) + c_2 r_2^{t-1}\,(g_j^{t-1} - x_{ij}^{t-1})$   (4)

$x_{ij}^{t} = x_{ij}^{t-1} + v_{ij}^{t}$   (5)

where $w$ is the inertia weight, $c_1$ and $c_2$ are the learning factors, $r_1^{t-1}$ and $r_2^{t-1}$ are random numbers uniformly distributed between 0 and 1 at iteration $t-1$, $v_{ij}^{t-1}$ is the jth dimensional velocity of particle i at iteration $t-1$, $x_{ij}^{t-1}$ is the jth dimensional position of particle i at iteration $t-1$, $p_{ij}^{t-1}$ is the jth dimensional best solution found by particle i up to iteration $t-1$, and $g_j^{t-1}$ is the jth dimensional best solution discovered by any particle up to iteration $t-1$. However, the particle's velocity and position might exceed their bounds during the process. To overcome this problem, we set

$v_{ij}^{t} = V_{\max}$ if $v_{ij}^{t} \ge V_{\max}$, and $v_{ij}^{t} = -V_{\max}$ if $v_{ij}^{t} \le -V_{\max}$,   (6)

and regenerate $x_{ij}^{t}$ if it falls outside $(0, m)$.

Step 3.2.5: Stopping rule

The algorithm is stopped after $n_T$ iterations, the same stopping rule as in the GA.

3.3 The simplified swarm optimization algorithm

Recently, Yeh et al. [31] proposed the simplified swarm optimization (SSO) algorithm and successfully applied it to multiple multi-level redundancy allocation and breast cancer pattern problems. Analogous to PSO, each particle changes its direction according to its current best location and the best location in the entire population. However, a random movement is added in SSO to prevent premature convergence. Thus, in SSO, the position of a particle depends on its current position, its up-to-date best position, the global best position of the entire population, and a random movement. The first three steps and the last step of SSO are similar to those of PSO; thus, only Step 3.3.4 is described in detail:

Step 3.3.1: Encoding

The encoding is similar to that of Step 3.1.1.

Step 3.3.2: Evaluate the fitness value

As in Step 3.1.3, the fitness value is given by Equation (2) to account for the constraint.

Step 3.3.3: Update the local and the global best

Step 3.3.4: Update the position of each particle

The jth dimensional position of particle i is updated according to the following equation:

$x_{ij}^{t} = \begin{cases} x_{ij}^{t-1} & \text{if } r_{ij}^{t} \in [0, C_w) \\ p_{ij}^{t-1} & \text{if } r_{ij}^{t} \in [C_w, C_p) \\ g_j^{t-1} & \text{if } r_{ij}^{t} \in [C_p, C_g) \\ x & \text{if } r_{ij}^{t} \in [C_g, 1) \end{cases}$   (7)

where $r_{ij}^{t}$ is a random number between 0 and 1, and $x$ is a random integer between 0 and m. In other words, there is a probability of $C_w$ that the particle will remain in the same position, a probability of $C_p - C_w$ that the particle will move to its personal up-to-date best position, a probability of $C_g - C_p$ that the particle will move to the global best position of the entire population, and a probability of $1 - C_g$ that the particle will move to an arbitrary position.
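A minimal sketch of the position update of Equation (7) (illustrative; we draw the random component from 1..m so the result stays a valid machine index, which is our own small assumption):

```python
import random

def sso_update(x, p_best, g_best, m, c_w=0.15, c_p=0.4, c_g=0.75):
    """One SSO position update per Equation (7): keep the gene, move to the
    personal best, move to the global best, or jump to a random machine, with
    probabilities C_w, C_p - C_w, C_g - C_p and 1 - C_g respectively."""
    new_x = list(x)
    for j in range(len(x)):
        rho = random.random()
        if rho < c_w:
            pass                                # keep the current position
        elif rho < c_p:
            new_x[j] = p_best[j]                # move to the personal best
        elif rho < c_g:
            new_x[j] = g_best[j]                # move to the global best
        else:
            new_x[j] = random.randint(1, m)     # arbitrary machine (assumed 1..m)
    return new_x
```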


Step 3.3.5: Stopping rule

The algorithm is stopped after $n_T$ iterations, the same stopping rule as in the GA.

4. Simulation experiments

A simulation experiment is reported in this section. The parameter values for all three algorithms were set at $N = 70$, $r = 1.5$, and $n_T = 500$. Moreover, $r_c = 60\%$ and $r_m = 2\%$ for GA; $w = 0.2$, $c_1 = c_2 = 2$, and $V_{\max} = 100$ for PSO; and $C_w = 0.15$, $C_p = 0.4$, and $C_g = 0.75$ for SSO. Ghomi and Ghazvini [32] tried several distributions of the job processing times and found that the heuristics had the worst results when the processing times were uniformly distributed. Gupta and Ruiz-Torres [33] also identified several uniform distributions as difficult in their study. In this paper, the computational experiment consisted of three parts, and we followed their designs of data generation in the last two parts of the experiments.

In the first part of the experiment, we compared the solutions from the algorithms with the optimal results from the LINGO software. When the number of machines (m) was 3, the number of jobs (n) was set at 8 and 10; when m = 5, n was set at 10 and 12. The speeds of the machines ($s_i$) were generated from two continuous uniform distributions, namely U(1, 3) and U(1, 5). The job processing times were generated from a discrete uniform distribution between (1, 100). The unit costs ($\beta_i$) were generated from U(1, 5) and U(1, 10). The bound on the total cost (B) was uniformly distributed between $\min_{1 \le i \le m} \beta_i \sum_{j=1}^{n} p_j / s_i$ and $\max_{1 \le i \le m} \beta_i \sum_{j=1}^{n} p_j / s_i$. 100 instances were used to test the performance of the proposed algorithms. We report the mean and the maximum error percentages for each heuristic, where the error percentage of GA is calculated as

$(GA - OPT)/OPT \times 100\%$

where GA is the value from the genetic algorithm and OPT is the value from LINGO. In addition, the numbers of times that the heuristics yield the optimal solutions are also reported. Table 1 summarizes the results. It is seen that SSO has the best performance for all the cases, while PSO has the largest error percentage of the three algorithms. In addition, SSO finds the optimal schedule in 50.31% of the instances, PSO in 30.83%, and GA in 27.94%.
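For illustration, one instance of this first experiment could be generated as follows (our own sketch, not the authors' generator; the bound B follows the min/max expression above):

```python
import random

def generate_instance(n, m, speed_high=3, cost_high=5):
    """Random instance: p_j ~ discrete U(1,100), s_i ~ U(1,speed_high),
    beta_i ~ U(1,cost_high), and B uniform between the minimum and maximum of
    beta_i * sum_j p_j / s_i over the machines."""
    p = [random.randint(1, 100) for _ in range(n)]
    s = [random.uniform(1, speed_high) for _ in range(m)]
    beta = [random.uniform(1, cost_high) for _ in range(m)]
    bounds = [beta[i] * sum(p) / s[i] for i in range(m)]
    B = random.uniform(min(bounds), max(bounds))
    return p, s, beta, B
```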

Next, the data were generated according to the framework of Gupta and Ruiz-Torres [33]. The number of machines (m) was set at 5 and 10. The number of jobs (n) was set at 3m+1, 4m+1, and 5m+1. The processing times ($p_j$) were uniformly distributed as U(1, 100), U(100, 200), and U(100, 800). The mean and the maximum relative deviation percentages (RDP) are reported, where the RDP is calculated as

$(HA_i - HA^{*})/HA^{*} \times 100\%$

where $HA^{*} = \min\{HA_i, i = 1, \ldots, 3\}$ is the smallest objective function value among the three heuristics.
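A small illustrative helper for this measure:

```python
def rdp(values):
    """Relative deviation percentage of each heuristic from the best of the three."""
    best = min(values)
    return [(v - best) / best * 100.0 for v in values]

print(rdp([105.0, 100.0, 112.0]))   # [5.0, 0.0, 12.0]
```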


The execution times were less than 1 second and are therefore omitted from the report. Tables 2 to 4 summarize the results. It is seen from the tables that the ranges of the unit costs of the machines, of the processing times, and of the speeds of the machines have no impact on the behavior of the heuristics. Meanwhile, the relative deviations of the heuristics grow when there are more machines, which implies that the problem becomes harder as the number of machines increases. In addition, the relative deviations of the heuristics also grow as the number of jobs increases, although the trend is not as significant as that for the number of machines. A closer look reveals that SSO performs better than GA and PSO when the number of machines is 5, whereas GA has the best performance when the number of machines is 10.

Finally, the algorithms were tested with n = 50, 100, 200, 500, and 1000. The processing times were generated from U(1, 100). The mean and the maximum relative deviations and the mean execution times (in seconds) are reported for each heuristic in Table 5. There is no clear dominance relation between the heuristics, since the mean relative deviations are all greater than 0; however, the mean relative deviations in Table 5 show that GA performs better than the other two heuristics as the number of jobs grows. In addition, the performance of GA is also more consistent, since its maximum relative deviation is smaller than those of the other two heuristics. Furthermore, SSO has the shortest execution time due to its fast convergence; however, its performance deteriorates when the number of jobs increases. From Tables 2-5, it is observed that SSO performs better for small-job-sized problems but worse for large-job-sized problems. The main reason is that SSO tries to facilitate the search process by simplifying the PSO algorithm; this works when the search space is limited, but fails when the search space is large. Meanwhile, the number of machines and the number of jobs seem to have little influence on the relative performance of the heuristics as n changes. Overall, we recommend GA as the best method for solving the resource consumption scheduling problem.

5. Conclusion

In this paper, we considered a scheduling problem on uniform parallel machines where the objective is to minimize the makespan given that some resource consumption cannot exceed a certain level. Three algorithms, GA, PSO, and SSO, were proposed to solve the problem. To the best of our knowledge, the parallel-machine makespan problem with a constraint on the resource consumption has not been studied before. In the experimental results, SSO yields better solutions than GA and PSO for problems with a small number of jobs, while the GA approach offers better solutions in a reasonable time, especially for large job-sized problems. This model can be applied when the resource is expensive and/or limited. In this paper, we assume that jobs can be processed on any machine, but in reality some jobs might only be processed on certain machines due to job complexity. Footprint indicators have been discussed in many studies, and integrating the footprint indicators proposed by Fang et al. [34] or the learning effect of Jaber [35, 36] into our model is worthy of future research.

Acknowledgements: The authors are grateful to the editor, the associate editor, and four referees, whose constructive comments have led to a substantial improvement in the presentation of the paper. This work was supported in part by the NSC of Taiwan under grants NSC 101-2221-E-007-079-MY3 and NSC 100-2221-E-035-029-MY3.

References

[1] M.L. Pinedo, Scheduling: Theory, Algorithms, and Systems, 3rd ed., Prentice-Hall, New Jersey, 2008.
[2] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco, 1979.
[3] R. McNaughton, Scheduling with deadlines and loss functions, Management Science 6 (1959) 1-12.
[4] T.C.E. Cheng, C.C.S. Sin, A state-of-the-art review of parallel-machine scheduling research, European Journal of Operational Research 47(3) (1990) 271-292.
[5] E. Mokotoff, Parallel machine scheduling problems: a survey, Asia-Pacific Journal of Operational Research 18(2) (2001) 193-242.
[6] W.C. Lee, C.C. Wu, P. Chen, A simulated annealing approach to makespan minimization on identical parallel machines, International Journal of Advanced Manufacturing Technology 31 (2006) 328-334.
[7] M. Ji, T.C.E. Cheng, An FPTAS for parallel-machine scheduling under a grade of service provision to minimize makespan, Information Processing Letters 108 (2008) 171-174.
[8] C. Koulamas, G.J. Kyparisis, A modified LPT algorithm for the two uniform parallel machine makespan minimization problem, European Journal of Operational Research 196 (2009) 61-68.
[9] M. Ji, T.C.E. Cheng, Parallel-machine scheduling of simple linear deteriorating jobs, Theoretical Computer Science 410 (2009) 3761-3768.
[10] L. Fanjul-Peyro, R. Ruiz, Iterated greedy local search methods for unrelated parallel machine scheduling, European Journal of Operational Research 207 (2010) 55-69.
[11] A. Janiak, R. Rudek, A note on a makespan minimization problem with a multi-ability learning effect, Omega, The International Journal of Management Science 38 (2010) 213-217.
[12] L. Fanjul-Peyro, R. Ruiz, Size-reduction heuristics for the unrelated parallel machines scheduling problem, Computers & Operations Research 38 (2011) 301-309.
[13] B.Y. Cheng, S.L. Yang, X.X. Hu, B. Chen, Minimizing makespan and total completion time for parallel batch processing machines with non-identical job sizes, Applied Mathematical Modelling 36 (2012) 3161-3167.
[14] X. Huang, M.Z. Wang, Parallel identical machines scheduling with deteriorating jobs and total absolute differences penalties, Applied Mathematical Modelling 35 (2011) 1349-1353.
[15] A. Rudek, R. Rudek, A note on optimization in deteriorating systems using scheduling problems with the aging effect and resource allocation models, Computers & Mathematics with Applications 62 (2011) 1870-1878.
[16] X.R. Wang, J.J. Wang, Single-machine scheduling with convex resource dependent processing times and deteriorating jobs, Applied Mathematical Modelling 37 (2013) 2388-2393.
[17] A. Rudek, R. Rudek, On flowshop scheduling problems with the aging effect and resource allocation, International Journal of Advanced Manufacturing Technology 62 (2012) 135-145.
[18] R. Rudek, Computational complexity and solution algorithms for flowshop scheduling problems with the learning effect, Computers & Industrial Engineering 61 (2011) 20-31.
[19] Y.K. Lin, Fast LP models and algorithms for identical jobs on uniform parallel machines, Applied Mathematical Modelling 37 (2013) 3436-3448.
[20] C.M. Wei, J.B. Wang, P. Ji, Single-machine scheduling with time-and-resource-dependent processing times, Applied Mathematical Modelling 36 (2012) 792-798.
[21] H. Nazif, L.S. Lee, Optimised crossover genetic algorithm for capacitated vehicle routing problem, Applied Mathematical Modelling 36 (2012) 2110-2117.
[22] A. Azadeh, M.S. Sangari, A.S. Amiri, A particle swarm algorithm for inspection optimization in serial multi-stage processes, Applied Mathematical Modelling 36 (2012) 1455-1464.
[23] M. Bashiri, M. Mirzaei, M. Randall, Modeling fuzzy capacitated p-hub center problem and a genetic algorithm solution, Applied Mathematical Modelling 37 (2013) 3513-3525.
[24] R.J. Kuo, Y.S. Han, A hybrid of genetic algorithm and particle swarm optimization for solving bi-level linear programming problem – A case study on supply chain model, Applied Mathematical Modelling 35 (2011) 3905-3917.
[25] M. Soolaki, I. Mahdavi, N. Mahdavi-Amiri, R. Hassanzadeh, A. Aghajani, A new linear programming approach and genetic algorithm for solving airline boarding problem, Applied Mathematical Modelling 36 (2012) 4060-4072.
[26] N. Hamta, S.M.T. Fatemi Ghomi, F. Jolai, U. Bahalke, Bi-criteria assembly line balancing by considering flexible operation times, Applied Mathematical Modelling 35 (2011) 5592-5608.
[27] A. Azadeh, M.S. Sangari, A.S. Amiri, A particle swarm algorithm for inspection optimization in serial multi-stage processes, Applied Mathematical Modelling 36 (2012) 1455-1464.
[28] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[29] A. Homaifar, C. Qi, S. Lai, Constrained optimization via genetic algorithms, Simulation 36(4) (1994) 242-254.
[30] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, 1995, pp. 1942-1948.
[31] W.C. Yeh, W.W. Chang, Y.Y. Chung, A new hybrid approach for mining breast cancer pattern using discrete particle swarm optimization and statistical method, Expert Systems with Applications 36(4) (2009) 8204-8211.
[32] S.M.T.F. Ghomi, F.J. Ghazvini, A pairwise interchange algorithm for parallel machine scheduling, Production Planning and Control 9 (1998) 685-689.
[33] J.N.D. Gupta, J. Ruiz-Torres, A LISTFIT heuristic for minimizing makespan on identical parallel machines, Production Planning and Control 12 (2001) 28-36.
[34] K. Fang, R. Heijungs, G.R.D. Snoo, Theoretical exploration for the combination of the ecological, energy, carbon, and water footprints: Overview of a footprint family, Ecological Indicators 36 (2014) 508-518.
[35] M.Y. Jaber, Learning and forgetting models and their applications, Handbook of Industrial and Systems Engineering (2006) 30-31.
[36] M.Y. Jaber, Learning Curves: Theory, Models, and Applications, Taylor & Francis US, 2011.


Table 1 The mean and maximum error percentages for the heuristics

                                 GA                     PSO                    SSO
                         Error %        Num.    Error %        Num.    Error %        Num.
β_i       s_i     m   n   mean    max   opt.     mean    max   opt.     mean    max   opt.
U(1,5)   U(1,3)   3   8   1.26   61.25   53      1.04   59.39   62      0.87   59.39   79
                     10   0.72   15.39   35      1.82  112.26   38      0.35   14.40   64
                  5  10   4.04   55.06    9      4.11   64.51   15      1.36   54.40   37
                     12   2.55   11.47    9      4.60   69.58    7      0.90    5.39   20
         U(1,5)   3   8   0.73    5.69   56      0.54    4.41   63      0.24    3.76   80
                     10   0.56    9.90   40      1.49   78.84   37      0.28    8.10   65
                  5  10   2.93   16.85   14      4.95  155.70   15      0.24    7.95   42
                     12   2.97    9.49   10      5.56  122.63    7      1.19    5.86   24
U(1,10)  U(1,3)   3   8   0.68    7.55   51      0.53   11.44   61      0.18    2.31   80
                     10   0.43    2.49   39      1.69   48.07   36      0.25    2.54   61
                  5  10   3.46   19.14   13      5.32   64.05   12      1.20    9.56   41
                     12   3.35   12.16   11      4.62   74.74    4      1.16    5.94   15
         U(1,5)   3   8   0.89    7.60   49      0.54    5.66   64      0.30    4.09   75
                     10   0.51    4.35   38      1.66  114.75   50      0.22    1.98   61
                  5  10   4.13   26.00   11      7.64  174.98   16      1.22    8.30   40
                     12   3.74   43.98    9      7.09  106.53    6      1.79   40.53   21


Table 2 The mean and maximum RDPs for the heuristics when $p_j \sim U(1, 100)$

                            GA               PSO               SSO
β_i       s_i      m   n    mean    max      mean    max       mean    max

U(1,5) U(1,3) 5 16 1.15 4.78 1.55 22.63 0.03 0.79

21 0.82 3.43 0.78 6.80 0.11 1.13

26 0.87 9.39 1.08 26.65 0.48 6.64

10 31 3.23 9.74 2.70 69.84 3.79 58.28

41 2.40 7.17 5.99 118.10 6.10 87.71

51 1.95 6.36 8.16 178.01 8.63 116.56

U(1,5) 5 16 1.01 3.77 1.52 14.48 0.02 0.36

21 0.61 2.89 1.48 29.65 0.12 10.33
26 0.64 2.29 1.39 59.08 0.38 4.63

10 31 2.29 8.25 4.87 162.34 3.09 47.14

41 2.44 8.49 4.98 124.22 7.23 116.12

51 1.86 6.77 7.35 127.98 10.64 147.67

U(1,10) U(1,3) 5 16 1.07 3.79 2.08 23.08 0.05 0.89

21 1.03 2.57 0.92 32.26 0.11 1.06

26 0.99 2.93 0.27 7.30 0.39 1.32

10 31 2.29 7.14 2.39 60.62 2.37 31.50
41 2.48 7.68 5.74 199.84 5.13 89.61

51 2.79 7.21 3.17 78.33 6.96 117.91

U(1,5) 5 16 1.11 3.45 1.77 18.84 0.03 0.49

21 0.87 3.07 0.50 4.22 0.12 1.20

26 0.69 2.33 2.02 172.10 0.47 2.58

10 31 2.65 7.89 1.38 32.10 2.40 7.39

41 2.51 8.03 3.19 52.33 4.24 25.32

51 2.63 16.07 3.47 71.73 6.72 80.68


Table 3 The mean and maximum RDPs for the heuristics when $p_j \sim U(100, 200)$

                            GA               PSO               SSO
β_i       s_i      m   n    mean    max      mean    max       mean    max

U(1,5) U(1,3) 5 16 1.51 4.19 1.60 27.44 0.02 0.40

21 0.87 2.32 0.79 14.32 0.08 1.02

26 0.94 4.91 0.31 5.77 0.45 5.74

10 31 1.37 5.95 2.07 40.16 1.96 10.85

41 0.99 4.55 4.53 69.29 5.07 66.75

51 1.22 5.20 5.43 79.70 6.47 87.51

U(1,5) 5 16 1.15 3.56 1.36 12.49 0.01 0.29

21 0.97 2.73 1.78 34.45 0.15 3.69
26 0.95 9.18 0.66 26.89 0.54 9.97

10 31 1.37 5.07 4.61 67.61 3.75 72.07

41 1.30 4.68 4.67 67.81 4.02 44.16

51 1.09 4.13 7.37 78.53 8.82 121.42

U(1,10) U(1,3) 5 16 1.33 3.78 1.68 18.74 0.03 0.58

21 0.86 2.54 1.52 29.64 0.13 1.39

26 0.93 4.21 0.74 29.07 0.38 1.34

10 31 1.50 5.13 3.95 57.06 2.34 38.15
41 1.16 4.29 5.50 113.15 6.90 97.59

51 1.09 4.10 3.43 63.25 5.97 124.73

U(1,5) 5 16 1.58 4.47 1.37 7.57 0.02 0.58

21 0.97 4.05 0.58 4.64 0.09 0.87

26 0.88 2.22 0.66 25.70 0.33 1.11

10 31 1.27 5.09 4.50 119.27 2.91 41.48

41 1.05 4.77 6.03 208.70 6.40 175.33

51 1.18 5.08 5.51 104.58 5.67 78.38


Table 4 The mean and maximum RDPs for the heuristics when $p_j \sim U(100, 800)$

                            GA               PSO               SSO
β_i       s_i      m   n    mean    max      mean    max       mean    max

U(1,5) U(1,3) 5 16 0.99 3.37 1.37 10.55 0.02 0.46

21 0.61 3.47 0.57 8.62 0.13 0.99

26 0.65 5.65 1.34 48.25 0.34 1.49

10 31 2.59 6.10 2.03 37.11 2.59 12.16

41 2.17 11.23 7.95 133.48 7.34 124.65

51 1.64 13.79 9.72 150.32 9.31 96.56

U(1,5) 5 16 0.86 2.51 2.10 62.68 0.05 0.77

21 0.80 3.12 0.62 8.85 0.18 1.29
26 0.60 2.01 0.73 2.46 0.43 1.48

10 31 2.08 6.91 4.64 90.56 4.03 56.04

41 2.13 6.84 5.68 115.05 7.77 60.00

51 1.75 7.63 4.54 130.20 7.35 134.93

U(1,10) U(1,3) 5 16 0.86 2.56 1.33 22.79 0.02 0.41

21 0.74 3.70 0.56 11.02 0.18 1.31

26 0.70 2.29 0.36 11.31 0.53 4.23

10 31 2.40 6.93 2.35 75.55 3.48 60.56
41 2.18 13.18 7.83 154.49 7.24 70.81

51 1.75 10.32 5.34 90.97 7.59 91.28

U(1,5) 5 16 1.09 2.81 1.14 4.14 0.01 0.49

21 0.66 2.92 0.71 7.86 0.16 0.98

26 0.63 2.68 1.81 65.67 0.40 1.36

10 31 2.55 7.19 5.99 232.87 6.13 151.22

41 1.76 6.18 6.39 160.10 7.25 154.85

51 1.90 5.42 3.71 61.74 6.97 57.56


Table 5 The mean and maximum RDPs and the mean CPU times (in seconds) for the heuristics when $p_j \sim U(1, 100)$

                               GA                      PSO                     SSO
                         RDP          CPU        RDP          CPU        RDP          CPU
β_i       s_i     m   n   mean   max   mean       mean   max   mean       mean   max   mean

U(1,5) U(1,3) 5 50 1.38 14.70 1.7 0.36 12.74 2.1 2.15 13.31 1.0

100 1.83 8.84 3.2 1.43 60.58 4.7 5.60 54.19 2.5

200 4.73 29.52 6.3 4.10 175.80 9.1 11.02 175.80 4.7

500 8.17 29.37 16.5 5.21 119.65 21.7 16.85 169.07 11.5

1000 9.67 58.27 31.6 5.01 83.22 43.5 10.83 134.38 22.8

10 50 1.94 6.85 1.7 6.16 89.53 2.3 8.13 122.43 1.2

100 1.76 22.23 3.6 6.18 128.20 4.9 8.54 134.11 2.7

200 2.34 16.90 7.0 11.43 107.28 9.6 15.94 144.81 5.2

500 4.48 16.78 17.2 15.78 199.87 23.8 17.61 262.99 12.6

1000 6.27 20.60 35.5 15.95 95.22 47.4 17.58 233.01 25.1

U(1,5) 5 50 1.24 16.85 1.5 1.01 50.67 2.0 2.22 17.84 1.0

100 2.68 21.43 3.2 0.28 16.85 4.5 4.37 34.82 2.4

200 5.89 23.80 6.0 1.37 52.71 8.8 8.23 78.64 4.7

500 11.38 37.51 14.8 3.50 58.13 21.6 14.83 119.48 11.6

1000 13.41 64.67 29.8 8.53 113.17 45.3 18.14 179.19 22.8

10 50 2.35 7.10 1.7 4.82 96.85 2.3 7.33 96.85 1.2

100 1.64 7.01 3.6 6.14 144.17 4.9 7.41 114.07 2.7

200 2.70 16.38 7.0 13.10 174.03 9.6 15.03 238.58 5.2

500 7.02 34.51 16.7 17.13 150.49 24.8 21.60 212.56 13.5

1000 11.62 44.25 33.7 12.37 103.99 47.7 12.98 162.87 25.2

U(1,10) U(1,3) 5 50 1.56 7.76 1.5 0.31 24.82 2.0 1.87 10.69 1.0

100 2.44 23.52 3.2 3.44 114.01 4.5 9.04 207.43 2.4

200 4.99 32.02 6.1 1.43 68.15 8.8 9.02 171.88 4.7

500 8.95 29.06 15.5 4.45 52.37 22.9 12.50 116.25 11.6

1000 9.03 43.76 29.7 4.26 75.00 43.5 12.38 147.75 22.8

10 50 2.49 7.91 1.7 2.52 43.41 2.3 5.74 73.86 1.2

100 1.29 1.46 3.6 7.41 123.54 5.0 6.99 108.67 2.7

200 2.51 8.86 7.0 13.09 144.48 9.7 11.69 201.87 5.7

500 4.32 15.97 17.9 21.62 185.10 24.1 26.86 226.77 12.8

1000 7.30 20.98 33.5 16.03 151.81 47.6 15.55 253.93 25.2

U(1,5) 5 50 1.29 13.40 1.5 1.76 106.70 2.0 1.95 12.82 1.0

100 2.24 13.47 3.2 0.62 19.84 4.5 5.45 74.60 2.4

200 6.43 31.31 6.2 4.19 126.53 8.8 10.84 266.43 4.7

500 12.44 50.78 14.9 4.84 101.79 21.7 13.58 142.80 11.5

1000 11.21 49.74 29.8 5.03 62.62 43.4 12.10 93.61 23.3

10 50 2.35 6.82 1.8 2.66 122.18 2.3 5.75 175.58 1.2

100 1.51 1.76 3.7 5.11 98.17 5.1 8.80 142.74 2.8

200 3.15 21.65 7.1 13.48 230.10 9.8 17.36 265.12 5.2

500 6.86 31.45 17.1 16.19 135.94 23.7 20.73 309.79 12.6

1000 10.72 49.40 33.6 13.49 120.72 47.5 18.87 188.37 25.3