
Bi-criteria single machine scheduling with a time-dependent learning effect and release times

Fardin Ahmadizar *, Leila Hosseini
Department of Industrial Engineering, University of Kurdistan, Pasdaran Boulevard, Sanandaj, Iran

Article info

Article history:
Received 21 December 2010
Received in revised form 16 January 2012
Accepted 3 February 2012
Available online 13 February 2012

Keywords:
Single-machine
Learning effect
Bi-criteria
Dominance properties
Ant colony algorithm

Applied Mathematical Modelling 36 (2012) 6203–6214. doi:10.1016/j.apm.2012.02.002

* Corresponding author. Tel./fax: +98 871 6660073. E-mail addresses: [email protected] (F. Ahmadizar), [email protected] (L. Hosseini).

Abstract

This paper deals with a bi-criteria single machine scheduling problem with a time-dependent learning effect and release times. The objective is to minimize the weighted sum of the makespan and the total completion time. The problem is NP-hard; thus, a mixed integer non-linear programming formulation is presented, and a set of dominance properties is developed. To solve the problem efficiently, a procedure is then proposed by incorporating the dominance properties into an ant colony optimization algorithm. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on heuristic information as well as pheromone trails. Then, the dominance properties are applied to obtain better solutions. To evaluate the algorithm's performance, computational experiments are conducted.


1. Introduction

In classical scheduling, job processing times are assumed to be fixed. However, this assumption may not be realistic; for example, operators' skills may improve by performing similar operations repeatedly. The existence of this phenomenon in manufacturing environments is known as the learning effect and has been confirmed by many empirical studies [1–5].

Although the concept of learning has been applied to management science since its discovery in the aircraft industry by Wright [6], it had not been investigated in the context of scheduling until Biskup's study [7]. He has proposed a position-based learning effect model in which a job's processing time is a function of the job's position in a sequence. Biskup [7] has proven that the single machine scheduling problem with this learning model remains polynomially solvable if the objective is to minimize the total completion time or to minimize the total deviation of job completion times from a common due date. Mosheiov and Sidney [8] have provided a job-dependent learning model by extending Biskup's learning model, and proposed polynomial time algorithms for some scheduling problems with the learning model.

Kuo and Yang [9,10] have provided a time-dependent learning model in which the processing times of the jobs already processed are considered. They have proposed polynomial time algorithms to minimize the makespan and the total completion time in a single machine scheduling problem with the learning effect. Moreover, Kuo and Yang [11] have proposed two polynomial time algorithms for the makespan and the total completion time minimization in a single-machine group scheduling problem.

Wu and Lee [12] have presented a learning effect model in which the actual processing time of a job depends on both its position and the processing times of the jobs already processed. They have shown that the single machine makespan and total completion time minimization problems remain polynomially solvable.



Although most research has assumed that all jobs are released at time zero, scheduling with learning and unequal release times has received attention in recent years. Lee et al. [13] have studied a single machine scheduling problem with Biskup's learning model and release times where the objective is to minimize the sum of the makespan and the total completion time. They have proposed a branch-and-bound as well as a genetic algorithm to solve the problem. Moreover, Lee et al. [14] have developed a branch-and-bound as well as a heuristic algorithm for the problem with the objective of minimizing the makespan. Wu and Liu [15] have proposed a branch-and-bound algorithm and three heuristic algorithms to minimize the makespan in a single machine scheduling problem with a time-dependent learning effect and release times.

In this paper, a bi-criteria single machine scheduling problem with a time-dependent learning effect and unequal release times is considered. The objective is to minimize the weighted sum of the makespan and the total completion time. The problem is formulated as a mixed integer non-linear programming (MINLP) problem. Several dominance properties are developed and then, by incorporating them into an ant colony optimization (ACO) algorithm, an efficient hybrid algorithm is proposed to solve the problem. The performance of the proposed algorithm is compared with some alternative methods.

The rest of the paper is organized as follows. In the next two sections, the problem is described and formulated, followed by Section 4 providing some dominance properties. In Section 5, the proposed algorithm is described. Section 6 gives computational results, and the paper is concluded in Section 7.

2. Problem description

There are n independent jobs to be scheduled on a single machine that is continuously available. The machine can process at most one job at any point in time. Moreover, preemption is not allowed. Associated with each job j is a normal processing time pj and a release time rj. According to the learning model proposed by Kuo and Yang [10], the actual processing time of job j if scheduled in the kth position is given by

p_{j[k]} = p_j \Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}, \qquad j, k = 1, 2, \ldots, n \qquad (1)

where a ≤ 0 is a constant learning index and p_{[l]} is the normal processing time of the job scheduled in position l. The objective is then to minimize the weighted sum of the makespan and the total completion time.
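
As an illustration of the learning model in Eq. (1), the following minimal C++ sketch computes the actual processing times of the jobs of a sequence; the instance data in main are hypothetical and not taken from the paper.

#include <cmath>
#include <cstdio>
#include <vector>

// Actual processing time of the job in position k (0-based) of a given sequence,
// following Eq. (1): the normal time is scaled by (1 + sum of the normal times of
// the jobs already processed)^a, with learning index a <= 0.
double actualProcessingTime(const std::vector<double>& normalTimesInSeqOrder,
                            std::size_t k, double a) {
    double sumProcessed = 0.0;
    for (std::size_t l = 0; l < k; ++l)
        sumProcessed += normalTimesInSeqOrder[l];
    return normalTimesInSeqOrder[k] * std::pow(1.0 + sumProcessed, a);
}

int main() {
    std::vector<double> p = {10.0, 20.0, 30.0};   // hypothetical normal times, in sequence order
    const double a = -0.2;                        // hypothetical learning index
    for (std::size_t k = 0; k < p.size(); ++k)
        std::printf("position %zu: %.3f\n", k + 1, actualProcessingTime(p, k, a));
    return 0;
}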

A special case of this problem, without learning effects and with the weight of the makespan being 0, is known to be strongly NP-hard [16]. Hence, the problem under consideration is strongly NP-hard as well.

3. MINLP model

The notation used in the mathematical formulation of the problem is introduced below:

r_[k]   release time of the job scheduled in position k
s_[k]   starting time of the job scheduled in position k
C_[k]   completion time of the job scheduled in position k
w       weight of the makespan
1 − w   weight of the total completion time
z_jk    1 if job j is scheduled in position k, and 0 otherwise

The problem can then be formulated as the following MINLP problem:

Minimize \; w\, C_{[n]} + (1 - w) \sum_{k=1}^{n} C_{[k]}

subject to

\sum_{j=1}^{n} z_{jk} = 1, \qquad k = 1, \ldots, n \qquad (2)

\sum_{k=1}^{n} z_{jk} = 1, \qquad j = 1, \ldots, n \qquad (3)

p_{[k]} = \sum_{j=1}^{n} z_{jk}\, p_j, \qquad k = 1, \ldots, n \qquad (4)


r_{[k]} = \sum_{j=1}^{n} z_{jk}\, r_j, \qquad k = 1, \ldots, n \qquad (5)

s_{[k]} \ge r_{[k]}, \qquad k = 1, \ldots, n \qquad (6)

s_{[k]} \ge s_{[k-1]} + p_{[k-1]} \Bigl(1 + \sum_{l=1}^{k-2} p_{[l]}\Bigr)^{a}, \qquad k = 1, \ldots, n \qquad (7)

C_{[k]} \ge s_{[k]} + p_{[k]} \Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}, \qquad k = 1, \ldots, n \qquad (8)

z_{jk} \in \{0, 1\}, \qquad j, k = 1, \ldots, n \qquad (9)

The objective function minimizes the weighted sum of the makespan, i.e., the completion time of the job scheduled in position n, and the total completion time. Constraint (2) ensures that exactly one job is scheduled in each position, and constraint (3) ensures that each job is scheduled exactly once. Constraints (4) and (5) are used to state, respectively, the normal processing time and the release time of the job scheduled in each position k. Constraint (6) ensures that the processing of each job cannot start before its release time. Constraint (7) ensures that the processing of the job scheduled in each position k cannot start before the processing of the job scheduled in position k − 1 is completed. Constraint (8) is used to calculate the completion time of the job scheduled in each position k. Finally, constraint (9) specifies that each decision variable z_jk is binary.
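
For a fixed job order, the model reduces to the recursion implied by constraints (6)–(8) taken as equalities, i.e. each job starts at the maximum of its release time and the previous completion time. The following C++ sketch evaluates the weighted objective of a given sequence under this recursion; the function name and the instance in main are illustrative assumptions, but this is essentially the evaluation any sequence-based method for this problem needs.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Evaluate a given job order: starting times follow constraints (6)-(7) taken as
// equalities, i.e. s_[k] = max(r_[k], C_[k-1]), completion times follow (8), and
// the return value is the objective w*Cmax + (1-w)*(total completion time).
double evaluateSequence(const std::vector<int>& seq,     // job indices in processing order
                        const std::vector<double>& p,    // normal processing times
                        const std::vector<double>& r,    // release times
                        double a, double w) {
    double prevCompletion = 0.0, sumNormal = 0.0, totalCompletion = 0.0, cmax = 0.0;
    for (int j : seq) {
        const double start = std::max(r[j], prevCompletion);
        const double c = start + p[j] * std::pow(1.0 + sumNormal, a);
        totalCompletion += c;
        cmax = c;                  // the last job processed so far defines the makespan
        sumNormal += p[j];         // accumulate normal times of the jobs already processed
        prevCompletion = c;
    }
    return w * cmax + (1.0 - w) * totalCompletion;
}

int main() {
    const std::vector<double> p = {10.0, 20.0, 30.0};    // hypothetical instance
    const std::vector<double> r = {0.0, 5.0, 2.0};
    const std::vector<int> seq = {0, 2, 1};
    std::printf("objective = %.3f\n", evaluateSequence(seq, p, r, -0.2, 0.5));
    return 0;
}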

4. Dominance properties

In this section, several dominance properties providing necessary conditions for any given job sequence to be optimal are derived. These dominance properties give the precedence relationship between any two adjacent jobs in a job sequence.

Suppose that S_1 denotes a sequence with job i in position k and job j in position k + 1. Let π be the set of jobs preceding jobs i and j, and π′ the set of jobs following i and j; π or π′ may be empty. Let T be the completion time of the last job in π, i.e., T = C_{[k−1]}. Assume that S_2 denotes the same sequence in which the adjacent jobs i and j are interchanged. In addition, let C_j(S_1) and C_j(S_2) denote the completion times of job j in S_1 and S_2, respectively. Taking the objective into account, if it is shown that (a) C_j(S_1) < C_i(S_2) and (b) C_i(S_1) + C_j(S_1) < C_j(S_2) + C_i(S_2), then S_1 dominates S_2. Part (a) guarantees that all jobs in set π′ (including the job scheduled in position n) have completion times under S_1 that are less than their completion times under S_2, and part (b) guarantees that the contribution of jobs i and j to the total completion time under S_1 is less than their contribution under S_2.
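
As a hypothetical numerical illustration of this argument (not an instance from the paper), take p_i = 1, p_j = 2, r_i = r_j = 0, an empty preceding set π (so T = 0 and the sum of earlier normal times is 0) and a = −1. Under S_1, C_i(S_1) = 1 and C_j(S_1) = 1 + 2(1 + 1)^{-1} = 2; under S_2, C_j(S_2) = 2 and C_i(S_2) = 2 + 1(1 + 2)^{-1} ≈ 2.33. Both condition (a), 2 < 2.33, and condition (b), 1 + 2 = 3 < 2 + 2.33 ≈ 4.33, hold, so S_1 dominates S_2, in line with Property 1 below.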

Before presenting the dominance properties, which are developed based on a pairwise interchange of two adjacent jobs, the following lemma is introduced.

Lemma 1. If p_i < p_j, then

(p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} > 0.

Proof. The reader is referred to the proof of Theorem 1 by Kuo and Yang [10]. □

Property 1. If \max\{r_i, r_j\} \le T and p_i < p_j, then S_1 dominates S_2.

Proof. Since \max\{r_i, r_j\} \le T, we have

C_i(S_1) = T + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}, \qquad (10)

C_j(S_1) = T + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}, \qquad (11)


C_j(S_2) = T + p_j\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} \qquad (12)

and

C_i(S_2) = T + p_j\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}. \qquad (13)

Subtracting (11) from (13), we obtain

C_i(S_2) - C_j(S_1) = (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}.

Since p_i < p_j, Lemma 1 gives C_i(S_2) − C_j(S_1) > 0, which completes the proof of part (a). For part (b), considering (10)–(13), we have

C_j(S_2) + C_i(S_2) - C_i(S_1) - C_j(S_1) = (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}.

Since p_i < p_j, the first term on the right-hand side is clearly positive, and by Lemma 1 the sum of the last three terms is also positive. Hence C_j(S_2) + C_i(S_2) − C_i(S_1) − C_j(S_1) > 0, which completes the proof. □

Property 2. If r_i \le T \le r_j \le T + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a} and p_i < p_j, then S_1 dominates S_2.

Proof. Since r_i \le T and T \le r_j \le T + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a}, the completion times of jobs i and j in S_1 are given by (10) and (11), respectively. Since T \le r_j \le T + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a}, clearly

C_j(S_2) = r_j + p_j\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} \qquad (14)

and from r_i \le T \le r_j, we have

C_i(S_2) = r_j + p_j\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}. \qquad (15)

Subtracting (11) from (15), we obtain

C_i(S_2) - C_j(S_1) = (r_j - T) + (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}.

Since T \le r_j, the first term on the right-hand side is non-negative, and by Lemma 1 the sum of the last three terms is positive. Hence C_i(S_2) − C_j(S_1) > 0, which completes the proof of part (a).

For part (b), considering (10), (11), (14) and (15), we obtain

C_j(S_2) + C_i(S_2) - C_i(S_1) - C_j(S_1) = 2(r_j - T) + (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + (p_j - p_i)\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} + p_i\Bigl(1 + p_j + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - p_j\Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}.

The first term on the right-hand side is non-negative since T \le r_j, the second term is clearly positive since p_i < p_j, and by Lemma 1 the sum of the last three terms is also positive. Hence C_j(S_2) + C_i(S_2) − C_i(S_1) − C_j(S_1) > 0, which completes the proof. □

The proofs of Properties 3–8 are omitted since they are similar to those of Properties 1 and 2.

Property 3. If r_i \le T \le r_j \le T + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a}, p_i > p_j and

T + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} < r_j + p_j\Bigl(\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - \Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}\Bigr),

then S_1 dominates S_2.

Property 4. If r_j \le T \le r_i \le T + p_j\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a} and

r_i + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} < T + p_j\Bigl(\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - \Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}\Bigr),

then S_1 dominates S_2.

Property 5. If T \le r_i \le r_j \le r_i + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a} and p_i < p_j, then S_1 dominates S_2.

Property 6. If T \le r_i \le r_j \le r_i + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a}, p_i > p_j and

r_i + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} < r_j + p_j\Bigl(\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - \Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}\Bigr),

then S_1 dominates S_2.

Property 7. If T \le r_j \le r_i \le r_j + p_j\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a} and

r_i + p_i\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} < r_j + p_j\Bigl(\Bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a} - \Bigl(1 + p_i + \sum_{l=1}^{k-1} p_{[l]}\Bigr)^{a}\Bigr),

then S_1 dominates S_2.

Property 8. If \max\{T, r_i\} + p_i\bigl(1 + \sum_{l=1}^{k-1} p_{[l]}\bigr)^{a} < r_j, then S_1 dominates S_2.
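
The properties above translate directly into a boolean test on two adjacent jobs. The C++ sketch below checks only a subset of them (Properties 1, 2, 5 and 8); the function name and interface are assumptions made for illustration, and the remaining properties would be added in the same way.

#include <algorithm>
#include <cmath>

// Sketch of a dominance test for two adjacent jobs, with i in position k and j in
// position k+1. T is the completion time of the job in position k-1 and sumP the
// sum of the normal processing times of the jobs in the first k-1 positions.
// Only Properties 1, 2, 5 and 8 are checked here; returns true if keeping i
// immediately before j is known to dominate the interchanged order.
bool dominates(double pi, double pj, double ri, double rj,
               double T, double sumP, double a) {
    const double piActual = pi * std::pow(1.0 + sumP, a);  // actual time of i in position k
    if (std::max(ri, rj) <= T && pi < pj) return true;                         // Property 1
    if (ri <= T && T <= rj && rj <= T + piActual && pi < pj) return true;      // Property 2
    if (T <= ri && ri <= rj && rj <= ri + piActual && pi < pj) return true;    // Property 5
    if (std::max(T, ri) + piActual < rj) return true;                          // Property 8
    return false;                                   // Properties 3, 4, 6 and 7 omitted here
}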

5. Proposed algorithm

To solve the problem efficiently, a hybrid algorithm is proposed that integrates the dominance properties with an ACO algorithm. ACO algorithms are constructive metaheuristics that have been applied successfully to hard combinatorial optimization problems. These algorithms are inspired by the foraging behavior of real ants in finding shortest paths from their nest to food sources. Real ants are social insects that live in colonies. They do not rely on visual cues but communicate through a chemical substance, called pheromone, deposited on their paths. Ants that select shorter paths reach the food and return more quickly than ants that select longer paths. As a greater amount of pheromone is deposited on shorter paths, such paths are chosen by following ants with higher probability.

ACO algorithms exploit simple agents, called artificial ants, which search for good solutions to a given combinatorial optimization problem. An artificial ant constructs a complete solution by starting with a null one and iteratively adding solution components. The solution construction process is guided by artificial pheromone trails and problem-specific heuristic information.

In the proposed algorithm, each artificial ant in the colony probabilistically constructs a solution, step by step, as an order of jobs by extending a partial solution at each step. When constructing the solution, both the heuristic information and the pheromone trails are used. The dominance properties are then applied to the solution to obtain a near-optimal solution. Furthermore, the pheromone trails are updated at run-time through local and global updating rules, and are kept between lower and upper bounds. This loop is repeated until the stopping condition is met.

5.1. Solution construction

Each ant constructs a sequence of jobs by starting with an empty solution and then appending unscheduled jobs one after another to the partial solution until all jobs are scheduled. At each step, an unscheduled job is chosen by applying a transition rule known as the pseudo-random proportional rule [17]. Let τ_h(k, j) be the pheromone trail denoting the desirability of placing job j in the kth position of a sequence at iteration h of the algorithm, and let η(k, j) be the heuristic desirability of placing job j in the kth position of a sequence. The pseudo-random proportional rule is then as follows. With probability q_0, a parameter between 0 and 1 determining the relative importance of exploitation versus exploration, an ant at position k selects the unscheduled job j for which


j = \arg\max_{u \in U} \bigl[ (\tau_h(k, u))^{\alpha} (\eta(k, u))^{\beta} \bigr], \qquad (16)

where \alpha and \beta are two positive parameters that determine the relative importance of the pheromone trail versus the heuristic information. With probability 1 − q_0, the ant instead selects an unscheduled job j according to the following probability distribution:

p_h(k, j) = \frac{(\tau_h(k, j))^{\alpha} (\eta(k, j))^{\beta}}{\sum_{u \in U} (\tau_h(k, u))^{\alpha} (\eta(k, u))^{\beta}}, \qquad (17)

where U is the set of all unscheduled jobs. In this study, to compute the heuristic information, which is often needed to achieve a high algorithm performance [18], two dispatching rules, namely the earliest possible starting time rule and the shortest processing time rule, are aggregated in the following way:

\eta(k, j) = \frac{1}{\bigl(1 + \max\{r_j, C_{[k-1]}\}\bigr)\, p_j} \qquad (18)

To avoid division by zero, 1 is added to \max\{r_j, C_{[k-1]}\}. Note that the heuristic information assigned to each unscheduled job j depends on the partial solution, that is, dynamic information is used.
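
A possible C++ rendering of the pseudo-random proportional rule of Eqs. (16) and (17), with the heuristic information of Eq. (18), is sketched below. The data layout (a pheromone matrix tau[k][j]) and all names are assumptions made for the sketch, not the authors' code.

#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

// Select the next job for position k of the partial solution, following
// Eqs. (16)-(17); tau[k][j] is the pheromone trail for placing job j in position k.
int selectJob(const std::vector<std::vector<double>>& tau,
              const std::vector<int>& unscheduled,        // the set U
              const std::vector<double>& p,               // normal processing times
              const std::vector<double>& r,               // release times
              int k,                                      // position to fill
              double prevCompletion,                      // C_[k-1] of the partial solution
              double alpha, double beta, double q0,
              std::mt19937& rng) {
    auto eta = [&](int j) {                               // heuristic information, Eq. (18)
        return 1.0 / ((1.0 + std::max(r[j], prevCompletion)) * p[j]);
    };
    auto score = [&](int j) {
        return std::pow(tau[k][j], alpha) * std::pow(eta(j), beta);
    };
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) < q0) {                                  // exploitation, Eq. (16)
        return *std::max_element(unscheduled.begin(), unscheduled.end(),
                                 [&](int u, int v) { return score(u) < score(v); });
    }
    double total = 0.0;                                   // exploration: roulette wheel, Eq. (17)
    for (int u : unscheduled) total += score(u);
    double pick = uni(rng) * total, cum = 0.0;
    for (int u : unscheduled) {
        cum += score(u);
        if (cum >= pick) return u;
    }
    return unscheduled.back();                            // numerical safeguard
}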

5.2. Implementation of dominance properties

As mentioned, the dominance properties enhance the quality of a solution generated by an artificial ant. Acting as an adjacent pairwise interchange technique, they test whether the solution satisfies the necessary optimality conditions. In a given sequence, two adjacent jobs j (in position k) and i (in position k + 1) are considered and temporarily interchanged, i.e., it is assumed that jobs i and j are in positions k and k + 1, respectively. The property these adjacent jobs have to satisfy is then determined, depending on the values of r_i, r_j, T (= C_{[k−1]}), p_i and p_j (the values of p_i and p_j only make a distinction between Properties 2 and 3 as well as between Properties 5 and 6). If the property is satisfied, the adjacent jobs i and j are actually interchanged. Continuing in this manner, a solution is obtained that is at least as good as the initial one, even though it may not be optimal. The following pseudo code describes the procedure, with a time complexity of O(n^2), proposed to improve a sequence (the job scheduled in position k of the sequence is denoted by Seq[k]).

For a certain number of iterations do
    For k = 1 to n − 1
        i = Seq[k + 1]
        l = k
        j = Seq[l]
        While (l > 0 and one dominance property is satisfied when placing i before j)
            Seq[l + 1] = j
            l = l − 1
            If (l > 0)
                j = Seq[l]
            End if
        End while
        Seq[l + 1] = i
    End for
End for

It is noteworthy that the above procedure can act as a standalone heuristic when the seed of each execution is a randomly generated sequence rather than an ant-sequence. However, following preliminary experiments, the number of iterations of the outer loop is set to 2 for both ant-sequences and randomly generated seed sequences.
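
The pseudo code above can be implemented as follows in C++ (0-based positions). The dominance test is supplied as a callable, e.g. one built from the sketch in Section 4; for clarity the prefix completion time T and the accumulated normal processing time are recomputed from scratch here, whereas maintaining them incrementally recovers the O(n^2) bound stated above. Names and interface are illustrative assumptions.

#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

// Callable testing whether job i should be placed immediately before job j, given
// T = completion time of the prefix and sumP = sum of the prefix's normal times.
using DominanceTest = std::function<bool(int i, int j, double T, double sumP)>;

void improveByDominance(std::vector<int>& seq,
                        const std::vector<double>& p,   // normal processing times
                        const std::vector<double>& r,   // release times
                        double a,                       // learning index
                        const DominanceTest& dominates,
                        int passes = 2) {
    const int n = static_cast<int>(seq.size());
    // completion time T and accumulated normal time of the prefix seq[0..len-1]
    auto prefixState = [&](int len, double& T, double& sumP) {
        T = 0.0; sumP = 0.0;
        for (int pos = 0; pos < len; ++pos) {
            const int job = seq[pos];
            T = std::max(r[job], T) + p[job] * std::pow(1.0 + sumP, a);
            sumP += p[job];
        }
    };
    for (int pass = 0; pass < passes; ++pass) {
        for (int k = 0; k + 1 < n; ++k) {
            const int i = seq[k + 1];
            int l = k;
            double T = 0.0, sumP = 0.0;
            prefixState(l, T, sumP);
            // shift job i towards the front while some property says it should precede seq[l]
            while (l >= 0 && dominates(i, seq[l], T, sumP)) {
                seq[l + 1] = seq[l];
                --l;
                if (l >= 0) prefixState(l, T, sumP);
            }
            seq[l + 1] = i;
        }
    }
}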

5.3. Updating of pheromone trails

At the beginning of the proposed algorithm, as well as at each resetting of the pheromone trails, a fixed value τ_0 is assigned to all pheromone trails. The trail intensities are then modified at run-time by applying local and global updating rules. Moreover, as in the max–min ant system [19], to prevent the algorithm from converging to local optima, the pheromone trails are always kept between a lower bound τ_min and an upper bound τ_max, calculated as follows:

\tau_{\max} = 1/(\rho Z^{*}), \qquad \tau_{\min} = G_1 \tau_{\max}, \qquad (19)


Table 1. Results for all algorithms on small size instances. Columns: n, w, λ, a; for EM and MINLP, the objective function value (Obj) and CPU time t (in seconds); for DP, ACO and ACODP, the best and average (Avg) objective values over five runs, plus the CPU time t (in seconds) for ACODP.

5 0.25 0.2 �0.3 88.45 0.02 88.45 2 88.45 88.45 88.45 88.45 88.45 88.45 0�0.2 137.32 0.03 137.32 6 137.32 137.32 137.32 137.32 137.32 137.32 0�0.1 241.34 0.03 241.34 4 241.34 241.34 241.34 241.34 241.34 241.34 0

1 �0.3 346.13 0.03 346.13 4 346.13 346.13 346.13 346.13 346.13 346.13 0�0.2 489.84 0.02 489.84 8 489.84 489.84 489.84 489.84 489.84 489.84 0�0.1 380.85 0.03 380.85 9 380.85 380.85 380.85 380.85 380.85 380.85 0

3 �0.3 1334.19 0.02 1334.19 8 1334.19 1334.19 1334.19 1334.19 1334.19 1334.19 0�0.2 838.90 0.02 838.90 6 838.90 838.90 838.90 838.90 838.90 838.90 0�0.1 905.03 0.04 905.03 6 905.03 905.03 905.03 905.03 905.03 905.03 0

0.5 0.2 �0.3 248.88 0.03 248.88 5 248.88 248.88 248.88 248.88 248.88 248.88 0.01�0.2 303.65 0.04 303.65 4 303.65 303.65 303.65 303.65 303.65 303.65 0.01�0.1 322.16 0.03 322.16 9 322.16 322.16 322.16 322.16 322.16 322.16 0.01

1 �0.3 675.73 0.03 675.73 5 675.73 675.73 675.73 675.73 675.73 675.73 0.01�0.2 835.33 0.02 835.33 9 835.33 835.33 835.33 835.33 835.33 835.33 0.01�0.1 511.85 0.02 511.85 5 511.85 511.85 511.85 511.85 511.85 511.85 0.01

3 �0.3 1349.81 0.03 1349.81 6 1349.81 1349.81 1349.81 1349.81 1349.81 1349.81 0.01�0.2 998.92 0.04 998.92 3 998.92 998.92 998.92 998.92 998.92 998.92 0.01�0.1 1534.14 0.03 1534.14 5 1534.14 1534.14 1534.14 1534.14 1534.14 1534.14 0.01

0.75 0.2 �0.3 221.23 0.04 221.23 9 221.23 221.23 221.23 221.23 221.23 221.23 0.01�0.2 337.82 0.03 337.82 7 337.82 337.82 337.82 337.82 337.82 337.82 0.01�0.1 219.42 0.04 219.42 5 219.42 219.42 219.42 219.42 219.42 219.42 0.01

1 �0.3 670.75 0.02 670.75 3 670.75 670.75 670.75 670.75 670.75 670.75 0.02�0.2 512.24 0.02 512.61 1 512.24 512.24 512.24 512.24 512.24 512.24 0.01�0.1 594.46 0.03 594.46 2 594.46 594.46 594.46 594.46 594.46 594.46 0.01

3 �0.3 2232.82 0.02 2232.82 2 2232.82 2232.82 2232.82 2232.82 2232.82 2232.82 0.01�0.2 1631.48 0.04 1631.48 6 1631.48 1631.48 1631.48 1631.48 1631.48 1631.48 0.01�0.1 1331.80 0.02 1331.80 9 1331.80 1331.80 1331.80 1331.80 1331.80 1331.80 0.01

7 0.25 0.2 �0.3 180.98 0.31 180.98 47 180.98 180.98 180.98 180.98 180.98 180.98 0.06�0.2 297.26 0.39 297.26 23 297.26 297.26 297.26 297.26 297.26 297.26 0.05�0.1 367.58 0.06 367.58 19 367.58 367.58 367.58 367.58 367.58 367.58 0.02

1 �0.3 564.71 0.95 564.71 44 564.71 564.71 564.71 564.71 564.71 564.71 0.02�0.2 699.99 0.33 699.99 29 699.99 699.99 699.99 699.99 699.99 699.99 0.03�0.1 815.45 0.88 815.45 14 815.45 815.45 815.45 815.45 815.45 815.45 0.05

3 �0.3 1679.48 0.83 1679.48 37 1679.48 1679.48 1679.48 1679.48 1679.48 1679.48 0.02�0.2 1613.73 0.05 1613.73 32 1613.73 1613.73 1613.73 1613.73 1613.73 1613.73 0.05�0.1 1709.94 0.53 1709.94 9 1709.94 1709.94 1709.94 1709.94 1709.94 1709.94 0.04

0.5 0.2 �0.3 183.40 0.96 183.40 30 183.40 183.40 183.40 183.40 183.40 183.40 0.05�0.2 276.49 0.39 276.49 46 276.49 276.49 276.49 276.49 276.49 276.49 0.05�0.1 349.42 0.50 349.42 23 349.42 349.42 349.42 349.42 349.42 349.42 0.05

1 �0.3 1005.65 0.90 1005.65 12 1005.65 1005.65 1005.65 1005.65 1005.65 1005.65 0.04�0.2 925.50 0.61 925.50 29 925.50 925.50 925.50 925.50 925.50 925.50 0.02�0.1 810.27 0.04 810.27 27 810.27 810.27 810.27 810.27 810.27 810.27 0.03

3 �0.3 2230.59 0.35 2230.59 18 2230.59 2230.59 2230.59 2230.59 2230.59 2230.59 0.02�0.2 2135.45 0.45 2135.45 27 2135.45 2135.45 2135.45 2135.45 2135.45 2135.45 0.05�0.1 1966.85 0.58 1966.85 31 1966.85 1966.85 1966.85 1966.85 1966.85 1966.85 0.02

0.75 0.2 �0.3 296.32 0.26 296.32 30 296.32 296.32 296.32 296.32 296.32 296.32 0.04�0.2 291.52 1.20 291.52 40 291.52 291.52 291.52 291.52 291.52 291.52 0.06�0.1 266.59 0.99 266.59 67 266.59 266.59 266.59 266.59 266.59 266.59 0.06

1 �0.3 1199.99 0.58 1199.99 51 1199.99 1199.99 1199.99 1199.99 1199.99 1199.99 0.07�0.2 926.23 0.59 926.23 25 926.23 926.23 926.23 926.23 926.23 926.23 0.03�0.1 1324.77 0.90 1324.77 20 1324.77 1324.77 1324.77 1324.77 1324.77 1324.77 0.03

3 �0.3 4503.20 0.42 4503.49 45 4503.20 4503.20 4503.20 4503.20 4503.20 4503.20 0.06�0.2 2307.03 0.36 2307.03 40 2307.03 2307.03 2307.03 2307.03 2307.03 2307.03 0.03�0.1 2821.77 0.81 2821.77 31 2821.77 2821.77 2821.77 2821.77 2821.77 2821.77 0.02

10 0.25 0.2 �0.3 313.46 145,200 313.46 540 313.46 313.46 313.46 313.46 313.46 313.46 0.05�0.2 324.61 152,400 324.61 480 324.61 324.61 324.61 324.61 324.61 324.61 0.12�0.1 235.65 156,000 235.65 120 235.65 235.65 235.65 235.65 235.65 235.65 0.11

1 �0.3 952.03 176,400 952.03 480 952.03 952.03 952.03 952.03 952.03 952.03 0.07�0.2 1052.66 180,000 1052.66 480 1052.66 1052.66 1052.66 1052.66 1052.66 1052.66 0.08


�0.1 1235.65 180,000 1235.65 540 1235.65 1235.65 1235.65 1235.65 1235.65 1235.65 0.08

3 �0.3 1985.51 144,000 1985.61 720 1985.51 1985.51 1985.51 1985.51 1985.51 1985.51 0.10�0.2 2217.23 158,400 2217.23 240 2217.23 2217.23 2217.23 2217.23 2217.23 2217.23 0.10�0.1 2634.32 172,800 2634.32 180 2634.32 2634.32 2634.32 2634.32 2634.32 2634.32 0.09

0.5 0.2 �0.3 450.65 172,800 450.65 720 450.65 450.65 450.65 450.65 450.65 450.65 0.11�0.2 680.32 154,800 680.32 120 680.32 680.32 680.32 680.32 680.32 680.32 0.12�0.1 923.01 180,000 923.01 540 923.01 923.01 923.01 923.01 923.01 923.01 0.08

1 �0.3 1327.04 154,800 1327.04 180 1327.04 1327.04 1327.04 1327.04 1327.04 1327.04 0.10�0.2 1402.47 158,400 1402.47 540 1402.47 1402.47 1402.47 1402.47 1402.47 1402.47 0.09�0.1 2392.39 180,000 2392.39 360 2392.39 2392.39 2392.39 2392.39 2392.39 2392.39 0.12

3 �0.3 3606.01 172,800 3606.01 540 3606.01 3606.01 3606.01 3606.01 3606.01 3606.01 0.09�0.2 4182.55 177,600 4182.55 180 4182.55 4182.55 4182.55 4182.55 4182.55 4182.55 0.12�0.1 4247.22 169,200 4247.22 300 4247.22 4247.22 4247.22 4247.22 4247.22 4247.22 0.10

0.75 0.2 �0.3 791.20 152,400 791.20 600 791.20 791.20 791.20 791.20 791.20 791.20 0.08�0.2 942.69 146,400 942.69 120 942.69 942.69 942.69 942.69 942.69 942.69 0.11�0.1 1670.30 156,000 1670.37 300 1670.30 1670.30 1670.30 1670.30 1670.30 1670.30 0.09

1 �0.3 2468.58 172,800 2468.58 180 2468.58 2468.58 2468.58 2468.58 2468.58 2468.58 0.10�0.2 2198.87 180,000 2198.87 540 2198.87 2198.87 2198.87 2198.87 2198.87 2198.87 0.10�0.1 2861.93 172,800 2861.93 540 2861.93 2861.93 2861.93 2861.93 2861.93 2861.93 0.09

3 �0.3 7033.30 150,000 7033.30 600 7033.30 7033.30 7033.30 7033.30 7033.30 7033.30 0.12�0.2 5717.42 175,200 5717.42 360 5717.42 5717.42 5717.42 5717.42 5717.42 5717.42 0.12�0.1 6627.26 172,800 6627.26 300 6627.26 6627.26 6627.26 6627.26 6627.26 6627.26 0.09

15 0.25 0.2 �0.3 – – 456.52 7200 453.31 453.31 453.31 453.39 453.31 453.31 0.20�0.2 – – 489.56 7200 485.24 485.24 485.24 485.24 485.24 485.24 0.20�0.1 – – 585.13 7200 582.25 582.25 582.25 582.91 582.25 582.25 0.20

1 �0.3 – – 2192.09 7200 2107.15 2107.15 2107.15 2110.82 2107.15 2107.15 0.20�0.2 – – 2409.35 7200 2378.05 2378.05 2378.05 2382.25 2378.05 2378.05 0.20�0.1 – – 1821.17 7200 1798.34 1798.34 1798.34 1801.04 1798.34 1798.34 0.20

3 �0.3 – – 6191.02 7200 6124.20 6124.20 6124.20 6132.18 6124.20 6124.20 0.20�0.2 – – 6631.24 7200 6523.18 6523.18 6523.18 6524.02 6523.18 6523.18 0.20�0.1 – – 7890.21 7200 6878.75 6878.75 6878.75 6879.11 6878.75 6878.75 0.20

0.5 0.2 �0.3 – – 1164.16 7200 1145.43 1145.43 1145.43 1148.22 1145.43 1145.43 0.20�0.2 – – 1231.61 7200 1207.20 1207.20 1207.20 1209.76 1207.20 1207.20 0.21�0.1 – – 796.95 7200 730.78 730.78 730.78 735.09 730.78 730.78 0.20

1 �0.3 – – 2620.28 7200 2520.18 2520.18 2520.18 2520.18 2520.18 2520.18 0.20�0.2 – – 3779.38 7200 3732.83 3732.83 3732.83 3734.04 3732.83 3732.83 0.20�0.1 – – 3506.62 7200 3471.29 3471.29 3471.29 3475.11 3471.29 3471.29 0.21

3 �0.3 – – 11923.02 7200 11687.80 11687.80 11687.80 11688.16 11687.80 11687.80 0.20�0.2 – – 12445.46 7200 11328.16 11328.16 11328.16 11330.56 11328.16 11328.16 0.20�0.1 – – 10256.45 7200 10240.05 10240.05 10240.05 10254.12 10240.05 10240.05 0.20

0.75 0.2 �0.3 – – 1132.06 7200 1125.16 1125.16 1125.16 1127.24 1125.16 1125.16 0.20�0.2 – – 1105.28 7200 1034.68 1034.68 1034.68 1122.03 1034.68 1034.68 0.20�0.1 – – 1088.32 7200 998.89 998.89 998.89 1000.14 998.89 998.89 0.20

1 �0.3 – – 5098.43 7200 5012.66 5012.66 5012.66 5097.35 5012.66 5012.66 0.20�0.2 – – 4582.03 7200 4445.23 4445.23 4445.23 4561.19 4445.23 4445.23 0.20�0.1 – – 4569.24 7200 4544.19 4544.19 4544.19 4551.32 4544.19 4544.19 0.20

3 �0.3 – – 13890.12 7200 13864.70 13864.70 13864.70 13872.74 13864.70 13864.70 0.20�0.2 – – 14815.26 7200 14752.33 14752.33 14752.33 14802.46 14752.33 14752.33 0.20�0.1 – – 14124.02 7200 14095.11 14095.11 14095.11 14119.78 14095.11 14095.11 0.22

Notes: Obj = objective function value; t = CPU time (in seconds); Avg = average.


where ρ is the global pheromone trail evaporation rate (a parameter between 0 and 1), G_1 a positive parameter less than 1, and Z* the objective value of the global best solution, i.e., the best solution found so far. Whenever an improvement is found, τ_max and τ_min are updated, and all trail intensities are then adjusted; that is, if a pheromone trail is smaller than the new τ_min or greater than the new τ_max, it is set to τ_min or τ_max, respectively.

After enhancing the quality of a solution generated by an ant, the following local updating rule is applied to each pheromone trail that corresponds to the improved solution:


Table 2. Results for DP, ACO and ACODP on large size instances. Columns: n, w, λ, a; for each algorithm, the best and average (Avg) percentage errors over five runs, plus the CPU time t (in seconds) for ACODP.

40 0.25 0.2 �0.3 0.0007 0.0011 4.6050 5.4219 0 0 1.15�0.2 0.0006 0.0546 2.4089 3.3925 0 0 2.53�0.1 0 0 4.2067 5.7920 0 0 2.63

1 �0.3 0 0 4.4250 5.8342 0 0 2.46�0.2 0 0 4.8034 5.0127 0 0 1.12�0.1 0 0 3.6650 4.5436 0 0 2.83

3 �0.3 0 0 3.8970 4.2675 0 0 1.78�0.2 0 0 6.2672 6.3268 0 0 2.38�0.1 0 0 4.8128 5.4951 0 0 1.31

0.5 0.2 �0.3 0 0 2.8510 3.8360 0 0 1.58�0.2 0 0 3.4712 4.1286 0 0 2.24�0.1 0 0 4.4529 5.5670 0 0 1.82

1 �0.3 0 0 4.4019 4.6541 0 0 1.44�0.2 0 0 3.0015 3.9126 0 0 1.17�0.1 0.0312 0.0320 2.6021 3.3325 0 0 1.82

3 �0.3 0 0 4.6109 5.5455 0 0 1.92�0.2 0 0 4.6871 4.9567 0 0 1.15�0.1 0 0 4.2327 4.3677 0 0 2.84

0.75 0.2 �0.3 0 0 4.8901 5.3789 0 0 2.68�0.2 0 0 4.8007 5.3891 0 0 1.95�0.1 0 0 4.0234 4.2904 0 0 1.72

1 �0.3 0 0 3.9652 4.6122 0 0 2.89�0.2 0 0.0014 5.4012 5.8236 0 0 2.52�0.1 0 0 2.6015 2.8347 0 0 1.61

3 �0.3 0 0 4.2761 5.1456 0 0 2.73�0.2 0 0 5.8372 6.8565 0 0 1.74�0.1 0 0 5.0751 5.1679 0 0.0033 1.63

60 0.25 0.2 �0.3 0 0 5.0128 6.5785 0 0 3.54�0.2 0 0 12.6021 13.6894 0 0 4.48�0.1 0 0 9.0752 9.7902 0 0 4.17

1 �0.3 0 0 7.6012 7.8140 0 0 5.29�0.2 0 0 9.6128 9.7126 0 0 5.17�0.1 0 0.0004 9.4621 9.5237 0 0 6.69

3 �0.3 0 0 7.7219 8.4343 0 0 4.82�0.2 0 0 7.4109 8.5451 0 0 3.82�0.1 0 0 8.4762 9.3567 0 0 5.35

0.5 0.2 �0.3 0 0 7.2431 8.8671 0 0 3.46�0.2 0 0 8.5908 9.0782 0 0 6.59�0.1 0.0101 0.0145 11.4023 11.8894 0 0.0022 5.11

1 �0.3 0 0 7.0729 8.0909 0 0 4.79�0.2 0 0 10.2872 10.3120 0 0 3.33�0.1 0 0 9.0042 9.3233 0 0 5.54

3 �0.3 0 0 5.6421 6.5347 0 0 5.75�0.2 0 0 9.6231 10.5450 0 0 3.41�0.1 0 0 8.8341 9.7453 0 0 4.53

0.75 0.2 �0.3 0 0 11.8345 12.4566 0 0 6.29�0.2 0.0820 1.2278 8.2964 9.8678 0 0 6.99�0.1 0.0545 0.0712 9.6974 10.4789 0 0.0425 3.90

1 �0.3 0 0 6.0453 7.8897 0 0 4.78�0.2 0 0 9.6067 10.0902 0 0 5.43�0.1 0 0 7.7631 9.1013 0 0 3.84

3 �0.3 0 0 9.6234 10.4120 0 0 5.43�0.2 0.1217 0.3467 11.6021 12.0136 0 0 5.17�0.1 0 0 7.8092 8.2141 0 0 4.24

80 0.25 0.2 �0.3 0 0 11.4291 12.0157 0 0 8.75�0.2 0 0 13.2098 14.7160 0 0 8.16�0.1 0.0231 1.1290 13.9863 14.9177 0 0.0143 9.93

1 �0.3 0 0 16.8024 17.1180 0 0 14.91�0.2 0 0 13.6078 14.1219 0 0 12.97�0.1 0 0 15.4893 15.6223 0 0 10.72

3 �0.3 0 0 10.6069 10.9230 0 0.0917 7.43�0.2 0 0 15.2098 15.8247 0 0 15.89�0.1 0 0 17.2432 18.1254 0 0 14.99


0.5 0.2 �0.3 0 0 14.0765 15.0261 0 0 14.11�0.2 0.0123 0.0251 17.4963 18.7271 0 0 13.92�0.1 0 0.0124 16.2076 17.8342 0 0.0019 10.57

1 �0.3 0 0 14.4056 15.7453 0 0 14.10�0.2 0 0 16.0783 17.3564 0 0 12.08�0.1 0 0 15.8013 16.7676 0 0 7.49

3 �0.3 0 0 17.4084 18.1785 0 0 11.58�0.2 0 0 16.6124 17.9898 0 0 12.44�0.1 0 0 16.2094 17.6906 0 0 15.63

0.75 0.2 �0.3 0 0 13.0975 14.5116 0 0 9.06�0.2 0.0009 0.0034 14.4234 15.9128 0 0 9.97�0.1 0 0 10.8342 11.8135 0 0 15.93

1 �0.3 0 0 13.8241 14.7140 0 0 8.76�0.2 0 0 12.4784 14.1153 0 0 7.55�0.1 0 0 9.8087 11.9167 0 0 8.14

3 �0.3 0 0 11.8124 12.6233 0 0 11.26�0.2 1.1280 2.1892 12.7632 13.5345 0 1.7514 10.04�0.1 0 0 13.2011 14.6450 0 0 8.07

100 0.25 0.2 �0.3 0 0 12.2782 13.8565 0 0 14.04�0.2 1.2219 1.5698 16.6034 17.1671 0 0.1256 20.35�0.1 0 0 13.6125 14.0780 0 0 22.13

1 �0.3 0 0 13.6076 15.8897 0 0 12.95�0.2 0 0 18.4098 20.3902 0 0 12.14�0.1 0 0 17.4067 19.0015 0 0 16.42

3 �0.3 0 0 12.6431 14.6215 0 0 18.50�0.2 0 0 15.4765 16.5238 0 0 29.51�0.1 0.4351 0.5879 12.6964 14.7341 0 0 16.34

0.5 0.2 �0.3 0 0 11.6087 13.2457 0 0 14.83�0.2 0 0.0009 16.4076 16.5231 0 0 14.46�0.1 0.6812 0.9148 16.4045 17.3450 0 0.3218 12.82

1 �0.3 0 0 13.8054 14.4566 0 0 21.27�0.2 0 0 15.6321 16.3789 0 0 19.15�0.1 0 0 16.2568 17.5258 0 0 12.05

3 �0.3 0 0 16.8762 17.8367 0 0 17.31�0.2 0 0.0078 15.7621 16.2373 0 0.0078 20.58�0.1 0 0 14.4892 16.7389 0 0 21.46

0.75 0.2 �0.3 0 0.0041 15.6762 16.7987 0 0.0011 13.39�0.2 0.1509 0.3773 16.8021 17.8970 0 0.0749 12.75�0.1 0 0 17.6753 17.8148 0 0 13.24

1 �0.3 0 0 16.6521 16.9260 0 0 12.50�0.2 0 0 14.7821 15.7292 0 0 14.19�0.1 0.0397 0.1265 19.6321 20.4085 0 0 15.19

3 �0.3 0 0 17.2013 18.9045 0 0 16.15�0.2 0 0 16.9213 17.9374 0 0 24.63�0.1 0 0 14.2426 15.1678 0 0 13.79

Avg 0.0369 0.0805 10.6936 11.6006 0 0.0226

Notes: Avg = average; t = CPU time (in seconds).


\tau_h(k, j) = (1 - \rho')\,\tau_h(k, j) + \rho'\,\tau_{\min}, \qquad (20)

where ρ′ is the local pheromone trail evaporation rate (a parameter between 0 and 1). The effect of this updating rule is to make the choice of placing job j in position k less desirable for the other ants in the colony, thereby promoting diversification.

Furthermore, at the end of each iteration h, to make the search more directed, the following global updating rule is applied to the pheromone trails:

\tau_{h+1}(k, j) = (1 - \rho)\,\tau_h(k, j) + \rho\, G_2 / Z^{*}, \qquad (21)

where G_2 is a non-negative parameter employed to manage the change of the pheromone intensities.
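
The trail management of Eqs. (19)–(21) can be collected in a small helper, sketched below in C++; the structure and names are illustrative, and the default parameter values follow those reported in Section 6.

#include <algorithm>
#include <vector>

// Sketch of the pheromone-trail management of Eqs. (19)-(21). tau[k][j] is the
// trail for placing job j in position k, initialised to tau_0 elsewhere.
struct PheromoneTrails {
    std::vector<std::vector<double>> tau;         // tau[k][j]
    double tauMin = 0.0, tauMax = 0.0;
    double rho = 0.1;                             // global evaporation rate, Eq. (21)
    double rhoLocal = 0.1;                        // local evaporation rate, Eq. (20)
    double G1 = 0.01, G2 = 10.0;

    // Eq. (19): recompute the bounds from the objective Z* of the global best
    // solution, then clamp every trail into [tauMin, tauMax].
    void updateBounds(double zBest) {
        tauMax = 1.0 / (rho * zBest);
        tauMin = G1 * tauMax;
        for (auto& row : tau)
            for (double& t : row) t = std::clamp(t, tauMin, tauMax);
    }

    // Eq. (20): local rule applied to the trails used by an (improved) ant solution.
    void localUpdate(const std::vector<int>& seq) {
        for (std::size_t k = 0; k < seq.size(); ++k) {
            double& t = tau[k][static_cast<std::size_t>(seq[k])];
            t = (1.0 - rhoLocal) * t + rhoLocal * tauMin;
        }
    }

    // Eq. (21): global rule at the end of an iteration; only the trails of the
    // global best solution receive a deposit (G2 there, 0 elsewhere).
    void globalUpdate(const std::vector<int>& bestSeq, double zBest) {
        for (std::size_t k = 0; k < tau.size(); ++k)
            for (std::size_t j = 0; j < tau[k].size(); ++j) {
                const double deposit = (j == static_cast<std::size_t>(bestSeq[k])) ? G2 : 0.0;
                tau[k][j] = std::clamp((1.0 - rho) * tau[k][j] + rho * deposit / zBest,
                                       tauMin, tauMax);
            }
    }
};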

6. Computational results

The ACO algorithm hybridized with the dominance properties, called ACODP, has been coded in Visual C++ and run on a Pentium 4, 2 GHz PC with 2 GB of memory. A computational experiment has been conducted to evaluate the performance of ACODP.


The experimental design follows that of Lee et al. [13]. The job processing times are drawn from a uniform distribution between 1 and 100. The release times are drawn from a uniform distribution over the integers on (0, 50.5nλ), where λ is a control variable taking the values 0.2, 1.0 and 3.0. Moreover, the weight w takes the values 0.25, 0.5 and 0.75. However, because the actual processing times approach zero quickly when the learning index a is smaller than −0.3, the learning index is set to take the values −0.1, −0.2 and −0.3.

To set the numeric parameters of ACODP, various combinations of parameter values were tested in preliminary experiments; the following values proved superior and are used in all further studies: 30 artificial ants in the colony, q_0 = 0.9, α = 1, β = 0.1, ρ = ρ′ = 0.1 and G_1 = 0.01. The parameter G_2 is set to 10 for the pheromone trails corresponding to the global best solution and to 0 for the others; accordingly, the pheromone trails not corresponding to the global best solution are only evaporated. The global updating rule thus intensifies the search in the neighborhood of the global best solution during subsequent iterations. In addition, τ_0 is set to 1. If no improvement is made for 20 successive iterations, the pheromone trails are reset, and the algorithm terminates when the total number of iterations reaches 120 or when no improvement is made for 30 successive iterations, whichever occurs first.

The performance of ACODP is then compared with some alternative methods, including:

• EM: an enumeration method implemented in MATLAB (which finds the best of all possible solutions).
• MINLP: the proposed MINLP model solved by the LINGO 8 software.
• DP: the dominance properties used as a standalone heuristic.
• ACO: the proposed algorithm without the dominance properties.

To make a fair comparison, DP and ACO are terminated when they reach the CPU time spent by ACODP. Moreover, each problem instance has been solved by ACODP, DP and ACO in five independent runs, and the best and average objective function values achieved are reported.

The computational results for some small size instances are shown in Table 1. Note that for n = 15, EM has not been implemented due to the limitations of MATLAB, and the time limit for MINLP has been set to 2 h.

Considering the results of EM shown in Table 1, it is seen that MINLP is able to find optimal solutions to all instances with 5, 7 and 10 jobs except for four test problems (where, due to the limitations of LINGO, the reported values are slightly above the optimum); this demonstrates the validity of the proposed MINLP model. Moreover, ACODP, DP and ACO obtain optimal solutions in all runs. As the number of jobs increases, however, both EM and MINLP (EM in particular) are unable to solve problem instances within a reasonable amount of time (as for those with 10 and 15 jobs). In contrast, as shown in Table 1, the other three algorithms obtain optimal or near-optimal solutions to such instances in a very short time.

Furthermore, Table 2 gives a comparison between DP, ACO and ACODP for a number of large size instances, where the number of jobs takes the values 40, 60, 80 and 100. In Table 2, for each problem instance the solution quality is measured by the percentage error (Z − Z′)/Z′ × 100, where Z is the objective value of the solution and Z′ is the objective value of the best solution among all 15 solutions yielded by the three algorithms.

Considering the results shown in Table 2, it is seen that ACODP performs better than ACO and DP (ACO in particular) for all instances. The computational results suggest that ACODP is very fast and robust, and provides an efficient approach to obtaining (near) optimal solutions with small computational requirements.

7. Conclusions

In this paper, a bi-criteria single machine scheduling problem with a time-dependent learning effect and release times is considered. The objective is to minimize the weighted sum of the makespan and the total completion time. A mixed integer non-linear programming formulation is presented, and several dominance properties providing necessary conditions for any given job sequence to be optimal are derived. By incorporating the dominance properties into an ant colony optimization algorithm, an efficient hybrid algorithm is then proposed to solve the problem. In the algorithm, the quality of each solution constructed by an artificial ant is enhanced by means of the dominance properties acting as an adjacent pairwise interchange technique. It is demonstrated, through computational experimentation, that the proposed hybrid algorithm performs very well when compared with the dominance properties or the ant colony optimization algorithm alone.

References

[1] R.W. Conway, A. Schultz, The manufacturing progress function, J. Ind. Eng. 10 (1959) 39–54.
[2] E.B. Cochran, New concepts of the learning curve, J. Ind. Eng. 11 (1960) 317–327.
[3] G.S. Day, D.B. Montgomery, Diagnosing the experience curve, J. Mark. 47 (1983) 44–58.
[4] P. Ghemawat, Building strategy on the experience curve – a venerable management tool remains valuable – in the right circumstances, Harvard Bus. Rev. 63 (1985) 143–149.
[5] G.K. Webb, Integrated circuit (IC) pricing, J. High Technol. Manag. Res. 5 (1994) 247–260.
[6] T.P. Wright, Factors affecting the cost of airplanes, J. Aeronaut. Sci. 3 (1936) 122–128.
[7] D. Biskup, Single-machine scheduling with learning considerations, Eur. J. Oper. Res. 115 (1999) 173–178.
[8] G. Mosheiov, J.B. Sidney, Scheduling with general job-dependent learning curves, Eur. J. Oper. Res. 147 (2003) 665–670.


[9] W.H. Kuo, D.L. Yang, Minimizing the makespan in a single machine scheduling problem with a time-based learning effect, Inform. Process. Lett. 97 (2006) 64–67.
[10] W.H. Kuo, D.L. Yang, Minimizing the total completion time in a single-machine scheduling problem with a time-dependent learning effect, Eur. J. Oper. Res. 174 (2006) 1184–1190.
[11] W.H. Kuo, D.L. Yang, Single-machine group scheduling with a time-dependent learning effect, Comput. Oper. Res. 33 (2006) 2099–2112.
[12] C.C. Wu, W.C. Lee, Single-machine scheduling problems with a learning effect, Appl. Math. Model. 32 (2008) 1191–1197.
[13] W.H. Lee, C.C. Wu, M.F. Liu, A single-machine bi-criterion learning scheduling problem with release times, Expert Syst. Appl. 36 (2009) 10295–10303.
[14] W.C. Lee, C.C. Wu, P.H. Hsu, A single-machine learning effect scheduling problem with release times, Omega – Int. J. Manage. Sci. 38 (2010) 3–11.
[15] C.C. Wu, C.L. Liu, Minimizing the makespan on a single machine with learning and unequal release times, Comput. Ind. Eng. 59 (2010) 419–424.
[16] P. Brucker, J.K. Lenstra, A.H.G. Rinnooy Kan, Complexity of machine scheduling problems, Ann. Discrete Math. 1 (1977) 343–362.
[17] M. Dorigo, L.M. Gambardella, Ant colony system: a cooperative learning approach to the traveling salesman problem, IEEE Trans. Evol. Comput. 1 (1997) 53–66.
[18] M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theor. Comput. Sci. 344 (2005) 243–278.
[19] T. Stützle, H.H. Hoos, Max–min ant system, Future Gener. Comput. Syst. 16 (2000) 889–914.