


European Journal of Operational Research 193 (2009) 836–848

A scheduling problem with job values given as a power function of their completion times

Adam Janiak a,*, Tomasz Krysiak a, Costas P. Pappis b, Theodore G. Voutsinas b

a Institute of Computer Engineering, Control and Robotics, Wroclaw University of Technology, Janiszewskiego 11/17, 50-372 Wroclaw, Poland
b Department of Industrial Engineering, University of Piraeus, 80 Karaoli and Dimitriou, 185 34 Piraeus, Greece

Available online 7 November 2007

Abstract

This paper deals with a problem of scheduling jobs on identical parallel machines, where job values are given as a power function of the job completion times. Minimization of the total loss of job values is considered as a criterion. We establish the computational complexity of the problem – strong NP-hardness of its general version and NP-hardness of its single machine case. Moreover, we solve some special cases of the problem in polynomial time. Finally, we construct and experimentally test a branch and bound algorithm (along with some elimination properties improving its efficiency) and several heuristic algorithms for the general case of the problem.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Computational complexity; Job value; Branch and bound; Heuristic; Experimental analysis

1. Introduction

In order to solve significant real-life problems, a lot of new scheduling problems have been formulated recently. The problems with resource allocation and the problems with deteriorating jobs are two examples of such kinds of scheduling problems, which have been considered most often by scientists during the last few years. In the problems with resource allocation, the processing times, the release dates, the starting times and other parameters may depend on some resources (see e.g., Janiak and Li, 1994; Cheng and Kovalyov, 1995; Kovalyov and Shafransky, 1998; Ng et al., 2003). In the problems of scheduling deteriorating jobs, the processing times of the jobs increase over time (see e.g., Kovalyov and Kubiak, 1998; Bachman and Janiak, 2000; Mosheiov, 1994, 1995, 2002).

Another new kind of scheduling problem is studied in this paper, namely one in which the processing times of all the jobs are fixed in advance and constant during the optimization process, but the job values deteriorate over time. The precise description of the problem can be illustrated by an application example, which characterizes the utilization process of components from some used up computers. Therefore, assume that there is given a set of used up computers which cannot be used any more, because their further utilization is connected with a high risk of breakdown, there are some applications which require faster processors, or simply some of their components are already broken. However, some of their components (e.g., monitors, floppy disk drives, network devices, power supplies, etc.) can be utilized as spare parts in some other computers. Thus, the problem of disassembling computers into their components appears here. It can be shown that the values of the computer components decrease over time as a power function and the speed of this decrease is different for the particular components (see Ferrer, 1997; Voutsinas and Pappis, 2002). Obviously, in the disassembling process we are interested in the component values which are determined at the moments when


doi:10.1016/j.ejor.2007.11.006

* Corresponding author. Tel.: +48 71 320 2107; fax: +48 71 321 2677.
E-mail addresses: [email protected] (A. Janiak), [email protected] (T. Krysiak), [email protected] (C.P. Pappis), [email protected] (T.G. Voutsinas).



they are available for utilization. The component is ready to be reused after it is completely removed from the computer and its proper functionality is confirmed. Thus, the order in which the computers will be disassembled has a significant influence on the total profit, i.e., the sum of the component values. Therefore, maximization of the total component values is considered as an optimization criterion. The process of disassembling other highly technologically advanced products (such as cars, airplanes, etc.) into their components can be analyzed similarly.

Another application example describes a renovation process of some objects. Assume that some institution (e.g., an office or a department) has to determine the order in which some objects (e.g., buildings, roads, bridges, historic monuments, etc.) will be renovated. Renovation of an object is a job which has to be executed. The cost required to renovate each object increases with respect to the time from its last renovation, since such objects become ruined without renovation and they lose value. Thus, the cost represents the loss of the object value. The financial resources which the institution has at its disposal are constrained, both in amount and in time. Therefore, there appears the problem of how to find the order in which the objects (jobs) will be renovated (executed) so that the sum of the costs (the total loss of job values) is minimized.

The aim of this paper is to describe and solve a scheduling problem where job values (or losses of job values) change during their execution and are described as a power function of the job completion times. This work was motivated by the results presented by Ferrer (1997) and Voutsinas and Pappis (2002). Ferrer presented many aspects of PC remanufacturing. Based on them, Voutsinas and Pappis introduced a non-increasing power model of job value and formulated a scheduling problem with maximization of the total job value. They constructed some heuristic algorithms solving the single machine problem; however, the computational complexity status of this problem is still open. In this paper, we introduce a new non-decreasing power model of loss of job value and formulate a problem of minimization of the total job value loss, which can be used to solve the problem of establishing an order of object renovation. Moreover, based on the above model, we introduce a new model of job value, defined as a difference between the initial job value and a non-decreasing power function of loss. Then, we formulate a problem of maximization of the total job value, which can be used to model the process of computer remanufacturing. Additionally, we assume a set of identical parallel machines is available to execute the jobs. The precise formulation of our problem is given in the next section. In Section 3, we show that the general version of our problem is strongly NP-hard. Moreover, unlike the problem of Voutsinas and Pappis, we prove there that the single machine version of our problem is NP-hard. Additionally, some polynomially solvable cases of the general version of the problem are given there. In Sections 4 and 5 we present and experimentally test a branch and bound algorithm and some heuristic algorithms, respectively, which have been constructed to solve the general version of the problem. Some concluding remarks are given in Section 6.

2. Problem formulation

There are given: a set of m identical parallel machines M = {M_1, ..., M_m} and a set of n independent and non-preemptive jobs J = {J_1, ..., J_n}, immediately available for processing at time 0. Each job J_j ∈ J is characterized by its processing time p_j > 0 and a loss function l_j(t) characterizing the loss of its value at time t. The loss function is defined by a non-decreasing power function of time:

l_j(t) = w_j t^{a_j},    (1)

where w_j > 0 and a_j > 0 denote the proportional and the exponential loss rate, respectively. Moreover, the loss function is calculated only at the job completion time C_j.

A solution of the problem is represented by a set of permutations Π = {π_1, ..., π_m}, where π_i is the sequence of jobs assigned for execution on machine M_i, i = 1, ..., m. Let n_i denote the number of jobs processed on M_i (then \sum_{i=1}^{m} n_i = n). The objective is to find a solution Π for which the sum of losses of job values, calculated at their completion times C_{π_i(j)} (if a job is processed in the jth position of π_i), is minimal, i.e.,

TLV(\Pi) = \sum_{i=1}^{m} \sum_{j=1}^{n_i} l_{\pi_i(j)}(C_{\pi_i(j)}) = \sum_{i=1}^{m} \sum_{j=1}^{n_i} w_{\pi_i(j)} C_{\pi_i(j)}^{a_{\pi_i(j)}} \to \min.    (2)
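
For concreteness, criterion (2) can be evaluated directly from the machine sequences. The following Python sketch (our own illustration; the function name tlv and the tuple representation of jobs are not from the paper) computes the total loss of job values for a given solution.

# A job is represented as a tuple (p, w, a); a solution is a list of per-machine sequences.
def tlv(schedule):
    """Total loss of job values: the sum over all machines of w_j * C_j**a_j,
    with completion times accumulated along each machine sequence from time 0."""
    total = 0.0
    for sequence in schedule:            # one sequence pi_i per machine
        completion = 0.0
        for (p, w, a) in sequence:
            completion += p              # C_{pi_i(j)}
            total += w * completion ** a
    return total

# Example with two identical machines (data chosen arbitrarily):
# tlv([[(2.0, 1.0, 1.5), (3.0, 0.5, 2.0)], [(4.0, 2.0, 0.5)]])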

Ferrer (1997) showed that the decrease of the computer component values can be modelled by the following time function: v(t) = x t^{-a}, where the values of x and a were established experimentally for different kinds of components (processors, RAM modules, hard disks, monitors, etc.). Therefore, the values of the computer components can be described by non-increasing convex power functions of time (i.e., the values of the computer components decrease fast at the beginning, when the components are quite new, and decrease slowly when the components are quite old). Thus, such a situation can also be modelled by the scheduling problem considered in the present paper, as follows. The job is to recover a given component from some used up computer. The value v_j(t) of job J_j decreases over time and can be described by the following expression:



v_j(t) = v_j^0 - l_j(t) = v_j^0 - w_j t^{a_j},    (3)

where v_j^0 > 0 is its initial value (since l_j(0) = 0) and a_j ∈ (0,1) (the function v_j(t) is convex for the concave function l_j(t) given by Expression (1)). The specific values of v_j^0, w_j, and a_j for particular components can be established experimentally, taking into account the appropriate values of x and a. In such a case, we can also assume that the job value v_j(t) is positive at any moment t of the optimization process. Thus, the following condition should be satisfied: v_j(t) = v_j^0 - w_j t^{a_j} > 0 for t ∈ [0, \sum_{j=1}^{n} p_j]. The problem is to find a solution Π such that the sum of job values calculated at their completion times is maximal, i.e.,

\sum_{i=1}^{m} \sum_{j=1}^{n_i} v_{\pi_i(j)}(C_{\pi_i(j)}) = \sum_{i=1}^{m} \sum_{j=1}^{n_i} v^0_{\pi_i(j)} - \sum_{i=1}^{m} \sum_{j=1}^{n_i} l_{\pi_i(j)}(C_{\pi_i(j)}) \to \max.

Since \sum_{i=1}^{m} \sum_{j=1}^{n_i} v^0_{\pi_i(j)} = \sum_{j=1}^{n} v_j^0 is constant, maximization of the sum of job values is equivalent to minimization of the total loss of job values (see Expression (2)).

Consider now the process of renovation of buildings and other objects. The cost required to renovate a given object increases over time (if the object is not renovated). What is more, this cost increases slowly at the beginning and faster and faster as time passes. Thus, it should be modelled by a non-decreasing and convex power function of time t, given by Expression (1) with a_j ≥ 1.

Therefore, it is enough (from the point of view of the presented practical applications) to analyze the problem of minimization of the total loss of job values with the non-decreasing power models of job value losses (formulated at the beginning of this section) for both a_j ∈ (0,1) and a_j ≥ 1. Using the three-field notation α|β|γ for scheduling problems (Graham et al., 1979), the considered problem is denoted by P || Σ w_j C_j^{a_j}.

For the single machine case, i.e., 1 || Σ w_j C_j^{a_j}, the solution is described by a single permutation π, and the objective function value can be calculated as follows:

TLV(\pi) = \sum_{j=1}^{n} l_{\pi(j)}(C_{\pi(j)}) = \sum_{j=1}^{n} w_{\pi(j)} C_{\pi(j)}^{a_{\pi(j)}} \to \min.

Note that certain classical scheduling problems are in fact special cases of the problem formulated above, although they have never been considered as problems of minimization of the total loss of job values. Namely, for a_j = 1 for all the jobs, our problem becomes the problem of total weighted completion time minimization, P || Σ w_j C_j, which is NP-hard even for two machines (Bruno et al., 1974; Lenstra et al., 1977). Its single machine version is polynomially solvable by the SWPT rule (Smith, 1956). For a_j = 2 we obtain the strongly NP-hard problem P || Σ w_j C_j^2 (Cheng and Liu, 2004). However, the computational complexity of its single machine version remains open, despite extensive research (see e.g., Townsend, 1978; Szwarc et al., 1988; Della Croce et al., 1995).

3. Computational complexity

The computational complexity status of the general version of our problem, P || Σ w_j C_j^{a_j}, may be deduced directly from the considerations given above.

Corollary 1. The problem P || Σ w_j C_j^{a_j} is strongly NP-hard, since it is more general than the strongly NP-hard problem P || Σ w_j C_j^2 (Cheng and Liu, 2004).

It can also be proved that the single machine version of our problem remains NP-hard, even for the case with only two different values of the exponential loss rates. We derive this result using the NP-complete PARTITION problem (Garey and Johnson, 1979).

PARTITION: Given a set X = {x_1, x_2, ..., x_q} of q positive integers for which \sum_{j=1}^{q} x_j = 2B, does there exist a partition of the set X into two disjoint subsets X_1 and X_2 such that \sum_{x_j \in X_1} x_j = \sum_{x_j \in X_2} x_j = B?

Theorem 1. The problem 1 || Σ w_j C_j^{a_j} is NP-hard, even for two different values of the exponential loss rates.

Proof. Given an instance of PARTITION, construct the following instance of the scheduling problem. There are n = q + 1 jobs. Among them, there are q partition jobs and a single extra job J_e, with the following parameters:

p_j = x_j, w_j = x_j, a_j = 1, j = 1, ..., q;  p_e = B, w_e = 1/4, a_e = 2.

We show that PARTITION has a solution if and only if there exists a solution to the constructed instance of 1 || Σ w_j C_j^{a_j} with value Σ w_j C_j^{a_j} ≤ y = A + 2B^2, where A = \sum_{1 \le i \le j \le q} x_i x_j.



"Only if". Assume that there exists such a partition of the set X into two disjoint subsets X_1 and X_2 for which \sum_{x_j \in X_1} x_j = \sum_{x_j \in X_2} x_j = B. Let J_1 and J_2 denote the subsets containing the jobs constructed on the basis of the elements from the subsets X_1 and X_2, respectively. Let the subsets J_1 and J_2 contain k and q − k jobs (0 ≤ k ≤ q), respectively. The solution for our scheduling problem is given as follows. The machine executes first the jobs from the subset J_1 in an arbitrary order, then the extra job J_e, and finally the jobs from the subset J_2, also in an arbitrary order. The value TLV(π) for any schedule π described above is equal to

TLV(\pi) = \sum_{i=1}^{k} p_{\pi(i)} \sum_{j=i}^{k} w_{\pi(j)} + \sum_{i=k+2}^{n} p_{\pi(i)} \sum_{j=i}^{n} w_{\pi(j)} + w_e \left( p_e + \sum_{i=1}^{k} p_{\pi(i)} \right)^{a_e} + \left( p_e + \sum_{i=1}^{k} p_{\pi(i)} \right) \sum_{i=k+2}^{n} w_{\pi(i)}.

Since p_j = x_j and w_j = x_j, the expression obtained above can be rewritten as:

TLV(\pi) = A + w_e \left( p_e + \sum_{x_j \in X_1} x_j \right)^{a_e} + p_e \sum_{x_j \in X_2} x_j.    (4)

Since \sum_{x_j \in X_1} x_j = \sum_{x_j \in X_2} x_j = B, we obtain TLV(π) = A + (1/4)(2B)^2 + B·B = A + 2B^2 = y, as required.

"If". Assume now that any partition of the set X into two disjoint subsets X_1 and X_2 gives \sum_{x_j \in X_1} x_j ≠ \sum_{x_j \in X_2} x_j. Without loss of generality we can assume that \sum_{x_j \in X_1} x_j = B − λ and \sum_{x_j \in X_2} x_j = B + λ, where λ > 0. Note that expression (4) holds for any partition of the set X into X_1 and X_2 (even if one of them is empty). By (4), we have

TLV(\pi) = A + \frac{1}{4}(B + B - \lambda)^2 + B(B + \lambda) = y + \frac{1}{4}\lambda^2 > y.

Thus, it is shown that the decision version of our scheduling problem has a solution if and only if PARTITION has one. □
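
The reduction in Theorem 1 can be checked numerically on small PARTITION instances. The sketch below (our own naming; not part of the original proof) builds the schedule "jobs of X_1, then J_e, then jobs of X_2" and evaluates its total loss, which equals A + 2B^2 exactly when X_1 and X_2 form a perfect partition.

def reduction_tlv(x1, x2):
    """TLV of the schedule (X1 jobs, then J_e, then X2 jobs) for the instance of
    Theorem 1: partition jobs with p_j = w_j = x_j, a_j = 1, and the extra job
    with p_e = B, w_e = 1/4, a_e = 2, where 2B is the sum of all elements."""
    B = (sum(x1) + sum(x2)) // 2
    t = total = 0.0
    for x in x1:                 # partition jobs from X1
        t += x
        total += x * t
    t += B                       # extra job J_e
    total += 0.25 * t ** 2
    for x in x2:                 # partition jobs from X2
        t += x
        total += x * t
    return total

# For X = {1, 2, 3, 4} (B = 5, A = 65) and the partition X1 = {1, 4}, X2 = {2, 3},
# reduction_tlv([1, 4], [2, 3]) returns 115.0 = A + 2*B**2 = y.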

Although the considered problem is NP-hard, there are some special cases which can be solved optimally in polynomial time. Two of them can be solved as follows.

Corollary 2. The problems 1 || Σ C_j^a and 1 | p_j = p | Σ C_j^{a_j} can be solved in O(n log n) time by sequencing the jobs according to the SPT rule and in non-increasing order of a_j, respectively, since p_j and a_j are the only parameters distinguishing the jobs.

The characteristic of the next cases is that there is only one job which has a different value of some parameter (the remaining jobs have the same value of this parameter). Such situations may occur in practice if there are some batches of similar jobs (e.g., recovering mostly only one type of computer component).

Property 1. For the problem 1 | a_j = a for j ∈ {1, ..., n}\{e}, a_j = a_e for j = e | Σ C_j^{a_j}, in an optimal solution, which can be obtained in O(n log n) time, only job J_e can be deferred from its position in the SPT sequence.

Proof. It follows from Corollary 2 that the jobs with the common loss rate a are scheduled according to the SPT rule. We will show that this holds if there is one job J_e with a_e ≠ a. Assume that J_e is placed in the jth position in some optimal permutation π and, contrary to the thesis of the property, p_{π(i)} > p_{π(k)} (i < j < k) for some jobs π(i) and π(k). Let a permutation π' be obtained from π by interchanging the jobs π(i) and π(k). The difference between the objective function values obtained for π and π' is given below:

TLV(\pi) - TLV(\pi') = \sum_{l=i}^{k-1} C_{\pi(l)}^{a_{\pi(l)}} - \sum_{l=i}^{k-1} C_{\pi'(l)}^{a_{\pi'(l)}}
  = \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(i)} \right)^{a}
  + \sum_{g=i+1}^{j-1} \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(i)} + \sum_{l=i+1}^{g} p_{\pi(l)} \right)^{a}
  + \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(i)} + \sum_{l=i+1}^{j-1} p_{\pi(l)} + p_e \right)^{a_e}
  + \sum_{g=j+1}^{k-1} \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(i)} + \sum_{l=i+1}^{j-1} p_{\pi(l)} + p_e + \sum_{l=j+1}^{g} p_{\pi(l)} \right)^{a}
  - \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(k)} \right)^{a}
  - \sum_{g=i+1}^{j-1} \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(k)} + \sum_{l=i+1}^{g} p_{\pi(l)} \right)^{a}
  - \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(k)} + \sum_{l=i+1}^{j-1} p_{\pi(l)} + p_e \right)^{a_e}
  - \sum_{g=j+1}^{k-1} \left( \sum_{l=1}^{i-1} p_{\pi(l)} + p_{\pi(k)} + \sum_{l=i+1}^{j-1} p_{\pi(l)} + p_e + \sum_{l=j+1}^{g} p_{\pi(l)} \right)^{a}.

Since p_{π(i)} > p_{π(k)}, the difference TLV(π) − TLV(π') is positive. Thus, the global SPT rule is optimal for the jobs with the common exponential loss rate. The position of job J_e can be found in O(n) steps by checking all possible positions in the schedule. □
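
Property 1 suggests a simple exact procedure for this special case: sort the jobs with the common rate a by SPT and try every insertion position for J_e. A sketch is given below (our own naming; for clarity it re-evaluates each candidate sequence from scratch, which costs O(n^2) rather than the O(n log n) achievable with incremental updates).

def best_schedule_property1(common_p, a, p_e, a_e):
    """Schedule the jobs with common loss rate a by SPT and try every insertion
    position for job J_e, returning the cheapest sequence of (p, exponent) pairs."""
    spt = sorted(common_p)
    best_cost, best_seq = float("inf"), None
    for pos in range(len(spt) + 1):
        seq = [(p, a) for p in spt[:pos]] + [(p_e, a_e)] + [(p, a) for p in spt[pos:]]
        t = cost = 0.0
        for p, exponent in seq:
            t += p                        # completion time
            cost += t ** exponent         # loss C**a_j (unit weights)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_cost, best_seq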



In the next property, we prove that we know in advance the position of job J_e in an optimal solution of the problem analyzed in the previous property, if some additional specific conditions are satisfied (i.e., p_j > 1 for all the jobs and a = 1).

Property 2. For the problem 1 | p_j > 1; a_j = 1 for j ∈ {1, ..., n}\{e}, a_j = a_e ≥ 2 for j = e | Σ C_j^{a_j}, in an optimal permutation, determinable in O(n log n) time, job J_e precedes all the remaining jobs, which are ordered according to the SPT rule.

Proof. Note that for a_e = 1 we obtain the problem considered in Corollary 2. Assume there is given an optimal permutation π in which job J_e is placed, contrary to the thesis of Property 2, in the jth position (j ∈ {2, ..., n}). Let π' denote a permutation obtained from π by putting job J_e in the first position. The difference between the objective function values obtained for both permutations is given below:

TLV(\pi) - TLV(\pi') = \left( \sum_{i=1}^{j-1} p_{\pi(i)} + p_e \right)^{a_e} - p_e^{a_e} - (j-1) p_e.

It is easy to check that the difference TLV(π) − TLV(π') is positive for a_e = 2. Thus, this result holds also for a_e > 2. Hence, in the optimal solution, job J_e occupies the first position and the remaining jobs are ordered according to the SPT rule (from Corollary 2). □
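
The check for a_e = 2 can be written out explicitly. Denoting S = \sum_{i=1}^{j-1} p_{\pi(i)} (our shorthand) and using p_j > 1, so that S > j − 1:

(S + p_e)^2 - p_e^2 - (j-1) p_e = S^2 + 2 S p_e - (j-1) p_e > S^2 + 2 S p_e - S p_e = S^2 + S p_e > 0.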

Property 3. For the problem 1 | p_j = p for j ∈ {1, ..., n}\{e}, p_j = p_e for j = e | Σ C_j^{a_j}, in an optimal solution, which can be obtained in O(n log n) time, only job J_e can be deferred from its position in the job sequence scheduled in non-increasing order of the loss rates a_j.

Proof. Similarly as in Property 1, we will show that the jobs with a common processing time are scheduled in non-increasing order of their loss rates a_j (according to Corollary 2) if there is one job J_e with a different processing time. Assume there is given an optimal permutation π in which job J_e occupies the jth position and (contrary to the thesis of Property 3) the remaining jobs are not scheduled in non-increasing order of their loss rates a_j. Thus, a_{π(i)} < a_{π(i)} + λ = a_{π(k)} (i < j < k, λ > 0) for some jobs π(i) and π(k). Let a permutation π', obtained from π by interchanging the jobs in positions i and k, also be given. The difference between the objective function values equals

TLV(\pi) - TLV(\pi') = (ip)^{a_{\pi(i)}} + (kp + p_e)^{a_{\pi(k)}} - (ip)^{a_{\pi(k)}} - (kp + p_e)^{a_{\pi(i)}} = (kp + p_e)^{a_{\pi(i)}} \left[ (kp + p_e)^{\lambda} - 1 \right] - (ip)^{a_{\pi(i)}} \left[ (ip)^{\lambda} - 1 \right].    (5)

Since λ > 0 and k > i, we have (kp + p_e)^{a_{\pi(i)}} > (ip)^{a_{\pi(i)}} and [(kp + p_e)^{\lambda} − 1] > [(ip)^{\lambda} − 1]. Thus, the difference (5) is positive and scheduling the jobs with the common processing time p_j = p globally in non-increasing order of their loss rates a_j is optimal. The position of job J_e can be determined in O(n) steps, similarly as in Property 1. □

Corollary 3. The problem Q | p_j = 1 | Σ w_j C_j^{a_j} can be solved optimally in polynomial time O(n^3), since it is a special case of the polynomially solvable problem Q | p_j = 1 | Σ f_j (Dessouky et al., 1990), where f_j is an arbitrary function of the job completion time.

4. Branch and bound algorithm

Since our problem P || Σ w_j C_j^{a_j} is strongly NP-hard, in order to obtain its optimal solutions we construct an algorithm based on the branch and bound approach. In order to simplify further considerations, let us introduce the following symbols.

S_j – the starting time of job J_j in solution Π. Without loss of generality we can assume that J_j is processed as the kth job on a machine M_i in Π, i.e., π_i(k) = j. Thus, S_j = C_{π_i(k−1)} = \sum_{x=1}^{k-1} p_{π_i(x)}.

L(Π) = ⟨l_1, ..., l_j, l_{j+1}, ..., l_n⟩ – a list of jobs, i.e., the job processing sequence in solution Π. An element l_j ∈ L(Π) denotes the index of the job which starts processing as the jth one in Π; thus, S_{l_j} ≤ S_{l_{j+1}} for j = 1, ..., n − 1.

LST rule – denotes a rule constructing a solution Π_L, based on a given list L(Π_L), by putting the jobs, taken from L(Π_L) in sequence, as the last one on the earliest available machine.

Note that all the solutions can be obtained if all possible lists of jobs, i.e., job processing sequences, are constructed. Moreover, note that several different solutions may be described by the same job processing sequence (i.e., L(Π') = L(Π''), where Π' and Π'' are some different solutions). In such cases, the LST rule allows us to construct the solution Π_L with the best



criterion value for a given list of jobs (i.e., if L(Π_L) = L(Π) then TLV(Π_L) ≤ TLV(Π) for any Π). Therefore, the set of solutions searched by the branch and bound algorithm can be limited to the set of LST solutions, which are constructed on the basis of all possible lists of jobs (there are n! different lists for n jobs). Note also that lists in which the same jobs occupy the first m positions but in a different order deliver in fact the same solution (i.e., we obtain the same permutations π_1, ..., π_m but assigned to different machines, which are identical). Thus, only one of them can be taken for further consideration and the remaining ones can be rejected.
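
The LST rule itself is a standard list-scheduling dispatch. The Python sketch below (our own naming, using a heap of machine completion times; an assumption, not the authors' implementation) builds the solution Π_L from a given list of jobs.

import heapq

def lst_schedule(job_list, m):
    """LST rule sketch: jobs taken from the list in order are put, as the last job,
    on the machine that becomes available earliest. job_list holds (p, w, a) tuples;
    the function returns the per-machine sequences pi_1, ..., pi_m."""
    available = [(0.0, i) for i in range(m)]    # (current completion time, machine index)
    heapq.heapify(available)
    sequences = [[] for _ in range(m)]
    for job in job_list:
        t, i = heapq.heappop(available)         # earliest available machine
        sequences[i].append(job)
        heapq.heappush(available, (t + job[0], i))
    return sequences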

Hence, in the considered algorithm, a subset of solutions s is represented by a list L, the number of already scheduled jobs k (they occupy the first k positions in L and they are scheduled on the machines according to the LST rule) and a set of non-scheduled jobs A = {x_1, x_2, ..., x_{n−k}}. The first stage of dividing the whole set of solutions is to find all lists with different jobs in the first m positions (i.e., finding all possible combinations of m jobs from among n). In such a way we obtain the subsets of solutions for further dividing. In the next stage, a lower bound of the objective function (2) is calculated for a given subset s, as follows:

LB(s) = \sum_{i=1}^{m} \sum_{j=1}^{n_i} l_{\pi_i(j)}(C_{\pi_i(j)}) + \sum_{j \in A} w_j (T + p_j)^{a_j},    (6)

where T = min_{1 ≤ i ≤ m} C_{π_i(n_i)} (recall that n_i denotes the number of jobs scheduled on machine M_i). If the lower bound value (6) of the examined subset is larger than the value of the best solution found so far, then this subset is excluded from further searching. Otherwise, the subset is divided into n − k subsets, such that the jobs x_1, x_2, ..., x_{n−k} are placed, in sequence, in the (k + 1)th position of the list L. The first feasible solution is obtained by use of the heuristic algorithm A3LST described in the next section. The subsets of solutions are searched according to the depth-first strategy, because it requires a small amount of memory (contrary to the breadth-first strategy).
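
A direct way to compute the bound (6) for a node of the search tree is sketched below (our own naming; the partial solution is given as the per-machine sequences of already scheduled jobs and the set A of unscheduled jobs).

def lower_bound(machine_sequences, unscheduled):
    """Lower bound (6): the loss already incurred by the scheduled jobs plus, for
    every unscheduled job, its loss if it started at T, the earliest moment at
    which some machine becomes free. Jobs are (p, w, a) tuples."""
    scheduled_loss = 0.0
    completions = []
    for sequence in machine_sequences:
        t = 0.0
        for (p, w, a) in sequence:
            t += p
            scheduled_loss += w * t ** a
        completions.append(t)
    T = min(completions)
    return scheduled_loss + sum(w * (T + p) ** a for (p, w, a) in unscheduled)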

Additionally, three properties which allow us to eliminate the subsets which do not contain optimal solutions are proved, in order to improve the algorithm efficiency. Note that only one permutation is taken into account in these properties – π_i (i.e., the permutation of the jobs processed on machine M_i) – and the completion time of the last job in π_i is not modified there. Thus, in order to simplify the notation, we use π instead of π_i in the proofs. The first two properties use the obvious facts that a given permutation is not optimal if, by changing the order of the first k jobs (k ≤ n) or by swapping only two jobs (such that the second one has a smaller processing time), we obtain a new subpermutation with a smaller sum of the losses of their values. In other words, if we have two partial sequences which contain the same jobs and the sum of job value losses of one of them is larger than for the other, then it cannot be extended to the optimal permutation (similarly for the case with only two jobs placed in the opposite positions). The last property uses the fact that if, in a given permutation, the loss of value of some job increases faster than the loss of some preceding job with a larger processing time, then we can improve the objective function value by exchanging these two jobs.

Observe that Properties 4–6 are applicable to the problem in which the loss of the job value is described by an arbitrary function l_j(C_j), dependent on the job completion time C_j. The only condition is that the function has to be differentiable on the interval (0, \sum_{j=1}^{n} p_j) and non-decreasing. It is obvious that our problem P || Σ w_j C_j^{a_j} is a special case of the one described above, so the properties are also applicable.

Property 4. For two permutations π and π', in which the same jobs are scheduled in the first k positions but in a different order, if \sum_{j=1}^{k} l_{π(j)}(C_{π(j)}) > \sum_{j=1}^{k} l_{π'(j)}(C_{π'(j)}), then the permutation π is not the optimal one.

Proof. Suppose there are given two permutations π and π', in which the same jobs are scheduled in the first k positions but in a different order, such that \sum_{j=1}^{k} l_{π(j)}(C_{π(j)}) > \sum_{j=1}^{k} l_{π'(j)}(C_{π'(j)}). Contrary to the thesis of Property 4, assume that π is optimal. Without loss of generality, we can assume that the jobs scheduled in positions k + 1 to n are executed in the same order in both permutations, such that they contribute as small a value as possible to the objective function. Then, C_{π(j)} = C_{π'(j)} for j = k + 1, ..., n, and consequently l_{π(j)}(C_{π(j)}) = l_{π'(j)}(C_{π'(j)}). Thus we have

\sum_{j=1}^{n} l_{\pi(j)}(C_{\pi(j)}) = \sum_{j=1}^{k} l_{\pi(j)}(C_{\pi(j)}) + \sum_{j=k+1}^{n} l_{\pi(j)}(C_{\pi(j)}) > \sum_{j=1}^{k} l_{\pi'(j)}(C_{\pi'(j)}) + \sum_{j=k+1}^{n} l_{\pi(j)}(C_{\pi(j)}) = \sum_{j=1}^{n} l_{\pi'(j)}(C_{\pi'(j)}),

which contradicts the assumption of optimality of the permutation π. □
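
Used inside the branch and bound algorithm, Property 4 becomes a dominance test between partial sequences. The following sketch (our own naming; evaluation from scratch, not the authors' implementation) discards a prefix whenever another prefix over the same job set accumulates a strictly smaller partial loss.

def dominated_by_property4(prefix_a, prefix_b):
    """Return True if prefix_a contains the same jobs as prefix_b but accumulates a
    strictly larger partial loss, so that no extension of prefix_a can be optimal.
    Prefixes are lists of (p, w, a) tuples scheduled on one machine from time 0."""
    if sorted(prefix_a) != sorted(prefix_b):
        return False                      # the test applies only to the same job set
    def partial_loss(prefix):
        t = loss = 0.0
        for (p, w, a) in prefix:
            t += p
            loss += w * t ** a
        return loss
    return partial_loss(prefix_a) > partial_loss(prefix_b)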

Property 5. If in a permutation π, for j < k (j, k ∈ {1, ..., n}), the inequalities l_{π(j)}(C_{π(j)}) + l_{π(k)}(C_{π(k)}) > l_{π(k)}(C_{π(j−1)} + p_{π(k)}) + l_{π(j)}(C_{π(k)}) and p_{π(j)} ≥ p_{π(k)} hold, then the permutation π is not the optimal one.



Proof. Assume that a permutation π is given which is optimal and for which the two following inequalities, based on the thesis of Property 5, hold:

p_{\pi(j)} \ge p_{\pi(k)},    (7)
l_{\pi(j)}(C_{\pi(j)}) + l_{\pi(k)}(C_{\pi(k)}) > l_{\pi(k)}(C_{\pi(j-1)} + p_{\pi(k)}) + l_{\pi(j)}(C_{\pi(k)}),    (8)

where j < k (j, k ∈ {1, ..., n}). Based on π, let us construct a permutation π' by interchanging the jobs in positions j and k. Observe that only the completion times of the jobs in positions j to k − 1 are different in the two permutations. From assumption (7) it follows that C_{π(i)} ≥ C_{π'(i)} and l_{π(i)}(C_{π(i)}) ≥ l_{π(i)}(C_{π'(i)}) = l_{π'(i)}(C_{π'(i)}) for i = j + 1, ..., k − 1. Thus, \sum_{i=j+1}^{k-1} l_{π(i)}(C_{π(i)}) ≥ \sum_{i=j+1}^{k-1} l_{π'(i)}(C_{π'(i)}). Moreover, from assumption (8) it follows that l_{π(j)}(C_{π(j)}) + l_{π(k)}(C_{π(k)}) > l_{π'(j)}(C_{π'(j)}) + l_{π'(k)}(C_{π'(k)}). Therefore, it is true that:

\sum_{i=1}^{n} l_{\pi(i)}(C_{\pi(i)}) - \sum_{i=1}^{n} l_{\pi'(i)}(C_{\pi'(i)}) = l_{\pi(j)}(C_{\pi(j)}) + \sum_{i=j+1}^{k-1} l_{\pi(i)}(C_{\pi(i)}) + l_{\pi(k)}(C_{\pi(k)}) - l_{\pi'(j)}(C_{\pi'(j)}) - \sum_{i=j+1}^{k-1} l_{\pi'(i)}(C_{\pi'(i)}) - l_{\pi'(k)}(C_{\pi'(k)}) > 0.

The above result contradicts the assumption of optimality of the permutation π. □

Property 6. If in a permutation π we have p_{π(j)} > p_{π(k)} for j < k (j, k ∈ {1, ..., n}) and the inequality d(l_{π(j)}(t))/dt < d(l_{π(k)}(t))/dt holds for each t ∈ (C_{π(j−1)} + p_{π(k)}, C_{π(k)}), then the permutation π is not the optimal one.

Proof. Assume that there is given an optimal permutation π for which the following inequalities, based on the thesis of Property 6, hold: (i) p_{π(j)} > p_{π(k)} and (ii) d(l_{π(j)}(t))/dt < d(l_{π(k)}(t))/dt for each t ∈ (C_{π(j−1)} + p_{π(k)}, C_{π(k)}), where j < k (j, k ∈ {1, ..., n}). In order to simplify notation, let l'_{π(i)} = d(l_{π(i)}(t))/dt, where i ∈ {j, k}. Suppose also that π' is the permutation obtained from π by interchanging the jobs in positions j and k.

Based on the Newton–Leibniz theorem, the following connection holds between the loss of value of the job scheduled in the jth position in the permutation π and the loss of value of the same job scheduled in the kth position in the permutation π':

l_{\pi'(k)}(C_{\pi'(k)}) = l_{\pi(j)}(C_{\pi(j)}) + \int_{C_{\pi(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) \, dt.    (9)

And similarly for the job placed in the kth position in π and in the jth position in π':

l_{\pi(k)}(C_{\pi(k)}) = l_{\pi'(j)}(C_{\pi'(j)}) + \int_{C_{\pi'(j)}}^{C_{\pi(k)}} l'_{\pi(k)}(t) \, dt.    (10)

Since it is obvious that C_{π(k)} = C_{π'(k)}, based on Eqs. (9) and (10) we have

l_{\pi(j)}(C_{\pi(j)}) + l_{\pi(k)}(C_{\pi(k)}) = l_{\pi'(j)}(C_{\pi'(j)}) + l_{\pi'(k)}(C_{\pi'(k)}) + \int_{C_{\pi'(j)}}^{C_{\pi'(k)}} l'_{\pi(k)}(t) \, dt - \int_{C_{\pi(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) \, dt.

From the assumption l'_{π(k)} > l'_{π(j)} it follows that \int_{C_{\pi'(j)}}^{C_{\pi'(k)}} l'_{\pi(k)}(t) dt − \int_{C_{\pi'(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) dt > 0, and from the assumption p_{π(j)} > p_{π(k)} it follows that C_{π(j)} > C_{π'(j)}. Thus, since l'_{π(j)} > 0 (l_{π(j)} is non-decreasing), we have \int_{C_{\pi(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) dt < \int_{C_{\pi'(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) dt. Hence, the following inequality holds: \int_{C_{\pi'(j)}}^{C_{\pi'(k)}} l'_{\pi(k)}(t) dt − \int_{C_{\pi(j)}}^{C_{\pi'(k)}} l'_{\pi(j)}(t) dt > 0, which means that l_{π(j)}(C_{π(j)}) + l_{π(k)}(C_{π(k)}) > l_{π'(j)}(C_{π'(j)}) + l_{π'(k)}(C_{π'(k)}). Thus, the permutation π is not the optimal one. □

The efficiency of the described branch and bound algorithm has been verified experimentally. The algorithm has been tested for n = 10, 15, 20, and 25 jobs. The numbers of machines have been chosen in proportion to the number of jobs: m = 1, ⌊0.1n⌋, ⌊0.25n⌋, ⌊0.5n⌋, and ⌊0.75n⌋. The problem parameters have been randomly generated according to the uniform distribution from the following intervals: p_j ∈ (0, 10] and (0, 100] (small and large differences between the job processing times, respectively); a_j ∈ (0, 1) and (2, 8] (concave and convex function l_j(t) – see Section 2); w_j ∈ (0, 10] and (0, 100] (insignificant and significant proportional loss rates). Thus, 144 different tests have been performed in total (since for n = 10 and 15, and m = 1 and ⌊0.1n⌋ we obtain the same number of machines), and 100 different instances have been generated for each of them. The tests have been made on a PC with Windows XP, AMD Athlon 64 3000+, and 512 MB RAM.


Table 2
Dependency of the running times of the branch and bound algorithm on the intervals of the problem parameters (for all values of the number of jobs and machines)

p_j      a_j     w_j       tavg (s)   tmin (s)   tmax (s)    tdev (s)   cavg          cmin     cmax             cdev
(0,10]   (0,1)   (0,10]    41.1       0.00       5128.8      241.2      11,140,368    123      1,365,760,284    62,503,217
(0,10]   (0,1)   (0,100]   38.4       0.00       7959.8      379.0      9,028,408     149      1,956,646,437    93,351,155
(0,10]   (2,8]   (0,10]    12.0       0.00       5952.4      222.1      2,122,766     67       1,080,584,326    34,334,825
(0,10]   (2,8]   (0,100]   3.0        0.00       265.7       21.4       784,702       103      69,954,734       5,775,793
(0,100]  (0,1)   (0,10]    22.1       0.00       15,325.9    427.4      3,549,350     175      2,153,874,079    60,982,653
(0,100]  (0,1)   (0,100]   15.8       0.00       3054.6      133.3      3,433,857     94       833,094,797      32,779,642
(0,100]  (2,8)   (0,10]    7.5        0.00       3932.1      109.2      1,448,843     55       595,149,759      17,226,156
(0,100]  (2,8)   (0,100]   3.4        0.00       366.1       25.7       785,846       65       67,737,706       5,762,676

tavg, tmin, tmax, tdev – average, minimal, maximal value, and standard deviation of the running times of the algorithm, respectively; cavg, cmin, cmax, cdev – average, minimal, maximal value, and standard deviation of the numbers of subsets checked by the algorithm, respectively.

Table 1
Dependency of the running times of the branch and bound algorithm on the instance sizes (for all combinations of the problem parameter intervals)

n    m    tavg (s)   tmin (s)   tmax (s)    tdev (s)   cavg           cmin          cmax             cdev
10   1    0.0007     0.00       0.016       0.003      239            55            1852             215
10   2    0.0020     0.00       0.016       0.005      789            388           5852             646
10   5    0.0038     0.00       0.016       0.007      1345           1269          2918             135
10   7    0.0013     0.00       0.016       0.004      371            362           408              9
15   1    0.015      0.00       0.28        0.029      2895           236           54,692           5290
15   3    0.261      0.00       21.19       1.072      87,402         5535          8,521,775        422,234
15   7    0.185      0.13       3.42        0.129      58,876         51,480        897,675          34,956
15   11   0.022      0.00       0.06        0.010      5045           1368          9294             1410
20   1    0.48       0.00       12.00       1.27       66,409         539           1,617,195        174,280
20   2    34.12      0.02       4045.61     190.86     9,859,586      3802          1,365,760,284    62,705,363
20   5    28.36      0.72       1158.08     110.93     8,310,640      232,811       476,024,639      34,910,361
20   10   7.62       3.13       398.52      15.28      2,074,876      832,342       102,583,086      3,956,739
20   15   0.37       0.30       0.80        0.04       78,209         77,520        117,840          2455
25   1    10.88      0.00       315.08      31.54      1,195,809      843           37,994,042       3,534,354
25   2    560.65     0.17       15,325.91   1890.73    89,675,329     29,571        2,153,874,079    272,578,426
25   6    874.25     12.56      7959.80     1787.51    195,614,676    3,364,900     1,956,646,437    402,526,097
25   12   375.98     234.50     4228.63     448.78     92,666,400     67,603,912    1,268,460,476    135,721,920
25   18   16.48      13.09      21.14       1.68       3,399,191      3,364,900     5,007,940        191,763

tavg, tmin, tmax, tdev – average, minimal, maximal value, and standard deviation of the running times of the algorithm, respectively; cavg, cmin, cmax, cdev – average, minimal, maximal value, and standard deviation of the numbers of subsets checked by the algorithm, respectively.


We have analyzed the running times of the algorithm and the numbers of checked subsets. The results are summarized in Tables 1 and 2, where tavg, tmin, tmax, and tdev denote the average, minimal, maximal value, and standard deviation of the running times of the algorithm, respectively, and cavg, cmin, cmax, and cdev the average, minimal, maximal value, and standard deviation of the numbers of subsets checked by the algorithm, respectively. Table 1 presents the dependency of the running times of the algorithm on the number of jobs and machines, where the results cover all analyzed combinations of the problem parameter intervals. Conversely, Table 2 presents the dependency of the running times of the algorithm on the values of the problem parameters, where the results cover all considered values of the number of jobs and machines.

From the results given in Table 1 it follows that the running times of the branch and bound algorithm strongly depend on the number of machines relative to the number of jobs – the algorithm is the fastest for one machine, while it is the slowest for a few machines, i.e., if the number of machines is about 10–25% of the number of jobs (even for m = 2). Next, the algorithm becomes faster as the number of machines increases, and finally, if the number of machines is 75% of the number of jobs, the algorithm is almost as fast as for m = 1. What is more, the standard deviation and the maximum running time are smaller for m = ⌊0.75n⌋ than for m = 1. Thus, we can deduce that in the latter case the running times are usually small, but sometimes an instance may appear which takes definitely longer to solve, whereas in the former case typical running times are somewhat larger but there is a smaller probability of encountering a time-consuming instance. This can be justified by two facts: (i) the lower bound (6) and the elimination properties are more efficient if there are more jobs in an analyzed job sequence (i.e., there is a larger probability of eliminating some subset), which occurs if fewer machines are given (particularly for m = 1); (ii) the algorithm is more efficient for a large number of machines since in its first stage it then eliminates more solutions



(recall that at this stage it eliminates the subsets in which the same jobs are in the first m positions of an input list but in a different order). Finally, it is worth noticing that the considered algorithm solves the problem in a few seconds on average for the instances of 20 jobs, and the instances of 25 jobs are solved in a few minutes on average.

Based on the results given in Table 2, we can state that the algorithm is faster if the problem parameters are generated from larger intervals (this concerns p_j, a_j and w_j to the same extent). Consequently, the algorithm is the fastest for p_j ∈ (0, 100], a_j ∈ (2, 8), and w_j ∈ (0, 100], and in this case it solves the problem in a few seconds on average and in a few minutes at most, whereas for p_j ∈ (0, 10], a_j ∈ (0, 1), and w_j ∈ (0, 10] the algorithm is the slowest (more than 1 hour at most). In the former case the differences between the parameters of particular jobs are larger, and thus there is a larger probability of rejecting some subsets of non-optimal solutions with the aid of Properties 4–6. Hence, the branch and bound algorithm is more useful for instances in which the values of the job parameters differ more.

The analysis of the numbers of checked subsets in both tables confirms all the above conclusions, which means that there are relatively few situations in which the algorithm descends the search tree without reaching a new complete solution.

5. Heuristic algorithms

Since the branch and bound algorithm is inefficient for large instances of our problem, we construct and experimentally test a number of heuristic algorithms which deliver approximate solutions in a very short time. In this section we describe the most efficient algorithms and the results of the performed experimental analysis. All the algorithms A1LST–A3LST are based on the list strategy, but they use three different priority dispatching rules to obtain the order of jobs in an input list. They can be described as follows.

Algorithm AiLST

Step 1. Construct the list of jobs, L, according to algorithm Ai, i ∈ {1, 2, 3}.
Step 2. Assign the jobs from the list L to the machines according to the LST rule.

Algorithms A1 and A2 construct solutions by sequencing all the jobs in non-increasing order of a_j and of (a_j · w_j / p_j), respectively.

Algorithm A3

Step 1. Set π = ∅, U = {1, ..., n}, and C := \sum_{j=1}^{n} p_j.
Step 2. Find a job j* ∈ U with the minimal value of w_j C^{a_j} and move it from U to the beginning of π. Set C := C − p_{j*}.
Step 3. If U ≠ ∅ then go to Step 2; otherwise STOP – the job order is given by π.
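
Algorithm A3 builds the list from the back: at the current horizon C it selects the job whose loss at C is the smallest and, by prepending each subsequent selection, effectively places that job last. A Python sketch (our own naming, assuming jobs are (p, w, a) tuples) is given below; Step 2 of AiLST then dispatches the resulting list with the LST rule.

def a3_list(jobs):
    """Algorithm A3 sketch: starting from C = sum of all processing times,
    repeatedly pick the unscheduled job with the minimal value w_j * C**a_j,
    move it to the beginning of the partial order pi, and set C := C - p_j*."""
    remaining = list(jobs)                # (p, w, a) tuples
    C = sum(p for (p, _, _) in remaining)
    order = []
    while remaining:
        j_star = min(remaining, key=lambda job: job[1] * C ** job[2])
        remaining.remove(j_star)
        order.insert(0, j_star)           # moved to the beginning of pi
        C -= j_star[0]
    return order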

The computational complexity of algorithms A1 and A2 is equal to O(n log n), while the complexity of algorithm A3 is equal to O(n^2). Finally, in order to improve the job sequences on the machines, the following procedure is launched once the jobs are scheduled (for each sequence π_i, i = 1, ..., m).

Algorithm DoubleSwap

Step 1. For j = 1, ..., n − 1 swap the jobs in positions j and j + 1 if
l_{π_i(j)}(C_{π_i(j)}) + l_{π_i(j+1)}(C_{π_i(j+1)}) > l_{π_i(j+1)}(C_{π_i(j−1)} + p_{π_i(j+1)}) + l_{π_i(j)}(C_{π_i(j+1)}).

Step 2. For j = n, ..., 2 swap the jobs in positions j and j − 1 if
l_{π_i(j)}(C_{π_i(j)}) + l_{π_i(j−1)}(C_{π_i(j−1)}) > l_{π_i(j−1)}(C_{π_i(j)}) + l_{π_i(j)}(C_{π_i(j−2)} + p_{π_i(j)}).
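
A sketch of DoubleSwap applied to one machine sequence is given below (our own naming; for readability the completion times are recomputed from prefix sums, so this version runs in O(n^2) per pass, whereas an implementation that maintains them incrementally matches the O(n) cost mentioned in the complexity discussion that follows).

def double_swap(sequence):
    """DoubleSwap sketch on one machine sequence of (p, w, a) tuples: a forward and
    then a backward pass over adjacent pairs, swapping whenever the swap lowers the
    combined loss of the two jobs involved (conditions of Steps 1 and 2)."""
    def loss(job, completion):
        p, w, a = job
        return w * completion ** a

    def swap_improves(seq, j):
        start = sum(p for (p, _, _) in seq[:j])     # completion time before position j
        c_first = start + seq[j][0]
        c_second = c_first + seq[j + 1][0]
        current = loss(seq[j], c_first) + loss(seq[j + 1], c_second)
        swapped = loss(seq[j + 1], start + seq[j + 1][0]) + loss(seq[j], c_second)
        return current > swapped

    seq = list(sequence)
    for j in range(len(seq) - 1):                   # Step 1: forward pass
        if swap_improves(seq, j):
            seq[j], seq[j + 1] = seq[j + 1], seq[j]
    for j in range(len(seq) - 1, 0, -1):            # Step 2: backward pass
        if swap_improves(seq, j - 1):
            seq[j - 1], seq[j] = seq[j], seq[j - 1]
    return seq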

Preliminary tests showed that the DoubleSwap procedure does not improve the schedules in all cases. Thus, in the sequel we will add the suffix "+DS" to the name of an algorithm which uses DoubleSwap. On the other hand, algorithm A2 without DoubleSwap is completely inefficient and we will not analyze it. Hence, we will consider five heuristic algorithms: A1LST, A1LST+DS, A2LST+DS with computational complexity equal to O(n log n) and A3LST, A3LST+DS with computational complexity O(n^2). These computational complexities are directly determined by Step 1 of the procedure AiLST, since the LST rule in Step 2 takes O(n log m) time and DoubleSwap O(n).

The algorithms have been tested for n = 10, 15, 20, 25, 50, 100, and 500 jobs with the same experimental settings as in Section 4. Hence, 264 tests were performed in total.


Table 3
Dependency of the efficiency of the heuristic algorithms on the instance sizes, given in the form avg(#b, #w), where avg is the average performance ratio ρ_A [%] of a given algorithm A, and #b and #w are the numbers of times the algorithm delivered the best and the worst solution, respectively

n    m    A1LST               A1LST+DS             A2LST+DS                   A3LST               A3LST+DS
10   1    46.704 (36,463)     1.078 (622,10)       2706.365 (379,237)         5.224 (194,124)     0.099 (741,10)
10   2    14.050 (335,12)     102.498 (236,34)     7.624 (421,44)             39.500 (144,277)    8006.310 (52,491)
10   5    0.989 (322,160)     16.536 (294,186)     0.359 (592,85)             1.438 (261,359)     79.477 (221,468)
10   7    0.280 (405,403)     1.538 (497,363)      0.065 (637,279)            0.361 (387,505)     5.057 (463,462)
15   1    50.166 (4,353)      1.799 (422,0)        6561.610 (142,382)         7.512 (99,66)       0.294 (624,0)
15   3    23.594 (234,3)      174.724 (189,9)      6.754 (440,23)             90.859 (78,295)     7702.882 (23,482)
15   7    1.394 (259,93)      19.133 (207,134)     0.436 (587,42)             1.986 (212,346)     121.843 (146,473)
15   11   0.161 (390,433)     1.981 (514,368)      0.036 (608,269)            0.208 (375,540)     5.301 (483,454)
20   1    67.708 (0,338)      3.554 (295,0)        63,744.219 (95,399)        8.723 (38,63)       0.584 (624,0)
20   2    28.717 (233,0)      2800.841 (235,0)     8.962 (346,22)             90.245 (7,286)      116,723.687 (0,492)
20   5    11.761 (185,3)      152.611 (138,8)      4.334 (481,10)             17.067 (79,243)     1688.125 (23,506)
20   10   0.791 (261,88)      13.786 (210,132)     0.260 (614,51)             1.136 (219,344)     69.673 (137,486)
20   15   0.412 (388,378)     1.124 (476,367)      0.111 (612,266)            1.022 (375,550)     2.635 (391,461)
25   1    78.476 (0,312)      6.477 (254,0)        46,937.941 (93,415)        8.311 (13,52)       0.900 (617,0)
25   2    45.896 (288,0)      1726.128 (197,0)     18.887 (271,0)             228.123 (0,276)     61,938.660 (0,486)
25   6    10.118 (131,0)      304.372 (33,20)      3.610 (475,0)              13.206 (49,115)     2722.839 (0,480)
25   12   0.644 (292,89)      14.293 (222,109)     0.288 (577,40)             0.987 (241,294)     138.186 (129,532)
25   18   0.155 (374,270)     0.876 (430,340)      0.033 (583,173)            0.196 (314,457)     2.533 (369,470)
50   1    85.374 (0,346)      15.109 (38,0)        1,184,492.770 (32,442)     10.678 (0,12)       0.142 (730,0)
50   5    26.392 (164,0)      2125.401 (209,0)     2.481 (427,0)              54.101 (6,318)      54,821.574 (0,482)
50   12   3.718 (68,0)        136.882 (154,0)      0.321 (591,3)              6.499 (29,288)      1278.706 (5,509)
50   25   0.470 (230,14)      7.212 (146,44)       0.036 (684,6)              0.690 (210,289)     34.757 (74,517)
50   37   0.100 (378,302)     0.332 (396,305)      0.009 (649,194)            0.122 (365,529)     0.833 (344,459)
100  1    116.594 (0,324)     39.858 (6,0)         10,092,128.366 (10,476)    11.083 (0,0)        0.066 (784,0)
100  10   27.837 (89,0)       2453.552 (255,0)     1.311 (456,0)              54.756 (0,333)      49,884.074 (0,467)
100  25   2.338 (41,0)        122.307 (132,0)      0.172 (635,0)              4.342 (18,285)      810.012 (0,515)
100  50   0.475 (204,0)       5.088 (122,8)        0.028 (693,0)              0.706 (204,282)     21.571 (48,518)
100  75   0.094 (397,250)     0.199 (375,280)      0.005 (639,182)            0.109 (395,509)     0.551 (338,469)
500  1    145.406 (0,20)      103.230 (0,0)        31,763,726.507 (0,780)     6.644 (0,0)         0.000 (800,0)
500  50   24.779 (57,0)       2937.544 (239,0)     0.921 (504,0)              41.013 (0,333)      43,120.085 (0,467)
500  125  1.699 (1,0)         113.007 (198,0)      0.239 (601,0)              3.488 (0,277)       615.347 (0,523)
500  250  0.430 (200,0)       4.005 (152,0)        0.034 (648,0)              0.645 (200,279)     15.521 (9,521)
500  375  0.086 (400,147)     0.089 (312,204)      0.003 (657,146)            0.100 (400,470)     0.220 (290,476)


For each algorithm A (A ∈ {A1LST, A1LST+DS, A2LST+DS, A3LST, A3LST+DS}), the following performance ratio was calculated: ρ_A = (F_A / F_best − 1) · 100%, where F_A is the objective function value delivered by algorithm A and F_best is the best known criterion value (optimal for n = 10, 15, 20, 25, obtained by the branch and bound algorithm).
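
For reference, the ratio used in Tables 3 and 4 is a one-line computation; a small sketch (our own naming) is shown below.

def performance_ratio(f_algorithm, f_best):
    """Performance ratio rho_A = (F_A / F_best - 1) * 100 [%] used in Section 5."""
    return (f_algorithm / f_best - 1.0) * 100.0

# e.g. an algorithm value of 115.0 against a best known value of 100.0 gives 15.0 (%).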

The results of our experiments are given in Tables 3 and 4 in the form avg(#b, #w), where avg is the average performance ratio ρ_A [%] of a given algorithm A, and #b and #w are the numbers of times the algorithm delivered the best and the worst solution, respectively. Table 3 presents the dependency of the efficiency of the algorithms on the instance size (first for small instances, i.e., for n = 10, 15, 20, 25, for which the solutions of the algorithms were compared to the optimal ones, and then for large instances, i.e., for n = 50, 100, 500, for which the solutions were compared to the best known ones). Table 4 describes the dependency of the efficiency of the algorithms on the problem parameter values (for the small and the large instances separately).

Table 4
Dependency of the efficiency of the heuristic algorithms on the intervals of the problem parameters, given in the form avg(#b, #w), where avg is the average performance ratio ρ_A [%] of a given algorithm A, and #b and #w are the numbers of times the algorithm delivered the best and the worst solution, respectively

p_j      a_j     w_j       A1LST               A1LST+DS              A2LST+DS                    A3LST               A3LST+DS

For small instances (comparison to optimal solutions)
(0,10]   (0,1)   (0,10]    5.738 (118,407)     1.109 (731,39)        1.684 (510,58)              7.930 (19,936)      1.355 (525,70)
(0,10]   (0,1)   (0,100]   5.945 (185,451)     1.346 (677,79)        1.729 (478,137)             6.625 (77,826)      1.447 (544,133)
(0,10]   (2,8)   (0,10]    28.861 (363,264)    95.927 (458,255)      1608.909 (955,388)          46.843 (325,293)    5762.536 (522,892)
(0,10]   (2,8)   (0,100]   29.506 (502,202)    223.217 (390,272)     7698.693 (930,374)          15.395 (465,168)    6201.036 (437,989)
(0,100]  (0,1)   (0,10]    6.086 (198,380)     1.704 (497,103)       1.761 (702,62)              6.877 (40,708)      3.350 (429,353)
(0,100]  (0,1)   (0,100]   6.371 (283,393)     2.178 (388,160)       1.933 (641,70)              5.926 (82,549)      3.578 (380,491)
(0,100]  (2,8)   (0,10]    34.590 (717,391)    288.763 (693,408)     17,135.144 (972,588)        63.777 (714,370)    16,805.513 (649,1014)
(0,100]  (2,8)   (0,100]   36.753 (795,307)    1442.813 (615,342)    19,616.477 (966,496)        17.646 (808,267)    54,427.084 (552,1039)

For large instances (comparison to the best known solutions)
(0,10]   (0,1)   (0,10]    4.882 (14,228)      1.786 (833,0)         3.063 (366,103)             6.100 (0,1197)      0.785 (524,3)
(0,10]   (0,1)   (0,100]   5.040 (23,300)      2.174 (860,6)         3.098 (338,97)              5.547 (0,1181)      0.905 (505,16)
(0,10]   (2,8)   (0,10]    44.832 (295,185)    552.663 (198,269)     2,510,826.415 (1194,481)    28.570 (284,189)    8656.726 (480,1192)
(0,10]   (2,8)   (0,100]   49.187 (326,5)      888.071 (15,79)       2,893,584.750 (1174,305)    15.268 (307,5)      11,029.835 (301,1200)
(0,100]  (0,1)   (0,10]    4.654 (105,155)     2.693 (224,18)        4.653 (915,163)             5.350 (0,884)       3.281 (375,317)
(0,100]  (0,1)   (0,100]   4.837 (188,193)     3.064 (118,86)        4.594 (906,143)             4.470 (2,411)       3.357 (309,795)
(0,100]  (2,8)   (0,10]    52.689 (625,274)    1247.853 (333,303)    8,826,330.290 (1187,574)    21.649 (613,274)    21,415.703 (569,1200)
(0,100]  (2,8)   (0,100]   66.300 (653,63)     1602.399 (153,80)     8,724,098.180 (1146,363)    17.033 (621,63)     39,211.253 (359,1200)



The best average performance ratio for each case is the smallest value in each row of both tables.

Based on the results given in Tables 3 and 4, the average behavior of the heuristics on the generated instances can be summarized as follows. Algorithm A2LST+DS gave solutions which were very far from the optimum for m = 1; however, it always had the best average performance ratio for m > 1 (a few percent worse than the optimum, and even below 1% for the cases with the number of machines larger than half the number of jobs). It was also the algorithm most frequently indicated as the best for the multi-machine cases. A1LST and A3LST also gave solutions close to the best ones for the multi-machine cases (especially for a large number of machines). For the single-machine cases, the best average performance ratio was achieved by A3LST+DS and it was always below 1%. What is more, A3LST+DS was the algorithm most frequently indicated as the best for these cases. For the remaining cases its average performance ratio was always the worst. Algorithm A1LST+DS also turned out to be good for m = 1 (especially for a small number of jobs). Therefore, we can divide the analyzed algorithms into two groups: the ones which are good for the single-machine case (A1LST+DS and A3LST+DS) and the ones which are preferred for the multi-machine case (A1LST, A2LST+DS, and A3LST).

Taking into account the results presented in Table 4, we can notice that all the algorithms gave worse solutions if the problem parameters were generated from larger intervals (this concerned p_j, a_j and w_j to the same extent). The only exception was A3LST, which delivered better results if the values of w_j were generated from larger intervals. Moreover, the accuracies of the algorithms which use the DoubleSwap procedure strongly depended on the problem parameters, contrary to those which do not use DoubleSwap.

Therefore, in order to choose the most appropriate algorithm to solve the considered problem, the number of jobs and machines as well as the values of the problem parameters (p_j, a_j, and w_j) should be taken into account. The running times of all the analyzed algorithms did not exceed 20–30 milliseconds in the performed tests.

6. Conclusions

In this paper, the parallel machine scheduling problem with time-dependent losses of job values has been investigated. The losses of job values have been given by a power function dependent on the job completion time. The total loss of job values has been minimized as the objective function. It has been shown that the general version of the problem is strongly NP-hard and its single machine case is NP-hard. Therefore, in order to solve the problem to optimality, we have constructed a branch and bound algorithm, supported by some elimination properties which improve its efficiency. It delivers optimal solutions of the instances of 25 jobs in a few minutes on average (and even in a few seconds for m = 1 machine). Next, in order to solve larger problem instances in a sensible time (e.g., the instances of 100 or 500 jobs in less than 1 second), we have constructed some heuristic algorithms with low computational complexity – O(n log n) and O(n^2). The experimental analysis has revealed that their efficiencies strongly depend on the instance size and the values of the problem parameters. However, the analysis has also delivered a manner of choosing an appropriate algorithm in order to obtain near optimal solutions (i.e., a few percent worse than the optimum).

Future research on this problem focuses on two main directions. For the single machine case, a final statement whether it is NP-hard in the ordinary sense or in the strong sense would allow us to choose the most suitable methods to solve the problem. For the general case, better elimination properties would make the branch and bound algorithm more efficient (i.e., faster), and the construction of new kinds of heuristic algorithms would deliver solutions even closer to the optimum than the ones obtained.

References

Bachman, A., Janiak, A., 2000. Minimizing maximum lateness under linear deterioration. European Journal of Operational Research 126, 557–566.
Bruno, J.L., Coffman, E.G., Sethi, R., 1974. Scheduling independent tasks to reduce mean finishing time. Communication of the ACM 17, 382–387.
Cheng, T.C.E., Kovalyov, M.Y., 1995. Single machine batch scheduling with deadlines and resource dependent processing times. Operations Research Letters 17, 243–249.
Cheng, T.C.E., Liu, Z., 2004. Parallel machine scheduling to minimize the sum of quadratic completion times. IIE Transactions 36, 11–17.
Della Croce, F., Szwarc, W., Tadei, R., Baracco, P., Di Tullio, R., 1995. Minimizing the weighted sum of quadratic completion times on a single machine. Naval Research Logistics 42, 1263–1270.
Dessouky, M.I., Lageweg, B.J., van de Velde, S.L., 1990. Scheduling identical jobs on uniform parallel machines. Statistica Neerlandica 44, 115–123.
Ferrer, G., 1997. The economics of personal computer remanufacturing. Resources Conservation and Recycling 21, 79–108.
Garey, M.R., Johnson, D.S., 1979. Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, San Francisco.
Graham, R.L., Lawler, E.L., Lenstra, J.K., Rinnooy Kan, A.H.G., 1979. Optimization and approximation in deterministic sequencing and scheduling: A survey. Annals of Discrete Mathematics 3, 287–326.
Janiak, A., Li, C.-L., 1994. Scheduling to minimize the total weighted completion time with a constraint on the release time resource consumption. Mathematical and Computer Modelling 20, 53–58.
Kovalyov, M.Y., Kubiak, W., 1998. A fully polynomial approximation scheme for minimizing makespan of deteriorating jobs. Journal of Heuristics 3, 287–297.
Kovalyov, M.Y., Shafransky, Y.M., 1998. Uniform machine scheduling of unit-time jobs subject to resource constraints. Discrete Applied Mathematics 84, 253–257.
Lenstra, J.K., Rinnooy Kan, A.H.G., Brucker, P., 1977. Complexity of machine scheduling problems. Annals of Discrete Mathematics 1, 343–362.
Mosheiov, G., 1994. Scheduling jobs under simple linear deterioration. Computers and Operations Research 21, 653–659.
Mosheiov, G., 1995. Scheduling jobs with step-deterioration: Minimizing makespan on a single and multi-machine. Computers and Industrial Engineering 28, 869–879.
Mosheiov, G., 2002. Complexity analysis of job-shop scheduling with deteriorating jobs. Discrete Applied Mathematics 117, 195–209.
Ng, C.T.D., Cheng, T.C.E., Kovalyov, M.Y., Lam, S.S., 2003. Single machine scheduling with a variable common due date and resource-dependent processing times. Computers and Operations Research 30, 1173–1185.
Smith, W.E., 1956. Various optimizers for single-stage production. Naval Research Logistics Quarterly 3, 59–66.
Szwarc, W., Posner, M.E., Liu, J.J., 1988. The single machine problem with a quadratic cost function of completion times. Management Science 34, 1480–1488.
Townsend, W., 1978. The single machine problem with quadratic penalty function of completion times: A branch and bound solution. Management Science 24, 530–534.
Voutsinas, T.G., Pappis, C.P., 2002. Scheduling jobs with values exponentially deteriorating over time. International Journal of Production Economics 79, 163–169.

