

European Journal of Operational Research 172 (2006) 64–85

www.elsevier.com/locate/ejor

Decision Aiding

Sequencing games with controllable processing times

Bas van Velzen

CentER and Department of Econometrics and Operations Research, Tilburg University, P.O. Box 90153,

5000 LE Tilburg, The Netherlands

Received 9 March 2004; accepted 29 September 2004. Available online 26 November 2004

Abstract

In this paper we study a class of cooperative sequencing games that arise from sequencing situations in which the processing times are not fixed. We obtain two core elements that depend only on the optimal schedule for the grand coalition. Furthermore we show that, although these games are not convex in general, many marginal vectors are core elements. We also consider convexity for special instances of the sequencing situation.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Game theory; Sequencing situations; Controllable processing times

1. Introduction

In one-machine sequencing situations each agent has one job that has to be processed on a single machine. Each job is specified by a processing time, and costs are assumed to be linear in the completion time. Furthermore, an initial order on the jobs is given. The objective is to find a processing order that minimizes total cost. Once this optimal order is obtained, the question arises how to allocate the total cost savings to the agents. Curiel et al. [4] consider this problem by introducing cooperative sequencing games. They show that these games are convex, and hence possess core allocations. Furthermore they characterize an allocation rule that divides the cost savings obtained by complete cooperation.

Nowadays, a variety of sequencing games exist in cooperative game theory. Many of these sequencing games arise from extended classes of one-machine sequencing situations. Hamers et al. [7] study convexity of sequencing games that arise from situations in which ready times are imposed on the jobs. Similarly, Borm et al. [2] investigate convexity in case due dates are included, and Hamers et al. [9] show that if chain

0377-2217/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2004.09.037

E-mail address: [email protected]


precedences are imposed on the jobs, and the initial order is a concatenation of these chains, then the corresponding sequencing game is convex.

A different approach in extending the class of sequencing situations of Curiel et al. [4] is taken by Van den Nouweland et al. [14], Hamers et al. [8] and Calleja et al. [3]. They investigate nonemptiness of the core of sequencing games arising from situations with multiple machines.

In this paper we treat sequencing situations in which the processing times are not fixed, so-called sequencing situations with controllable processing times. In reality, processing jobs does not only require a machine, but also additional resources such as manpower, funds, etc. This implies that jobs can be processed in shorter or longer durations by increasing or decreasing these additional resources. Of course, deploying these additional resources entails extra costs, but these extra costs might be compensated by the gains from job completion at an earlier time. Sequencing situations with controllable processing times are investigated in, among others, Vickson [16,17] and Alidaee and Ahmadian [1]. An overview of the literature on sequencing situations with controllable processing times is given in Nowicki and Zdrzalka [11].

In this paper we consider cooperative games arising from sequencing situations with controllable processing times. We obtain two core elements that depend only on the optimal schedule for the grand coalition. Furthermore we show that many marginal vectors are core elements, in spite of the fact that these games are not convex in general. We also consider convexity for some special cases.

In Section 2 we recall notions from cooperative game theory. Furthermore we formally introduce sequencing situations with controllable processing times and the cooperative games that arise from these situations. In Section 3 we focus on the core of these sequencing games, and in Section 4 we consider convexity for some special cases.

2. Preliminaries

In this section we will recall some basic notions of cooperative game theory. Then we formally introduce sequencing situations and games with controllable processing times.

2.1. Cooperative game theory

A cooperative game is a pair (N, v) where N is a finite (player) set and v, the characteristic function, is a map $v : 2^N \to \mathbb{R}$ with v(∅) = 0. The map v assigns to each subset S ⊆ N, called a coalition, a real number v(S) called the worth of S. The core of a game (N, v) is the set

$$C(v) = \{ x \in \mathbb{R}^N \mid x(S) \ge v(S) \text{ for every } S \subseteq N,\ x(N) = v(N) \},$$

where x(S) = Σ_{i∈S} x_i. Intuitively the core is the set of payoff vectors for which no coalition has an incentive to split off from the grand coalition. The core can be empty. A cooperative game (N, v) is called superadditive if

$$v(S) + v(T) \le v(S \cup T) \quad \text{for all } S, T \subseteq N \text{ with } S \cap T = \emptyset,$$

and convex if for all i, j ∈ N and S ⊆ N \ {i, j} it holds that

$$v(S \cup \{i\}) + v(S \cup \{j\}) \le v(S \cup \{i, j\}) + v(S). \tag{1}$$

For convex games the marginal contribution of a player to a coalition exceeds its marginal contribution to a smaller coalition. Note that convex games have a nonempty core ([12]). Now we will introduce marginal vectors. Consider σ : {1, . . ., n} → N, which describes an order on the player set. Define [σ(i), σ] = {σ(j) : j ≤ i} for each i ∈ {1, . . ., n}. That is, [σ(i), σ] is the set consisting of player σ(i) and the players preceding him with respect to σ. The marginal vector m^σ(v) is defined by


$$m^{\sigma}_{\sigma(i)}(v) = v([\sigma(i), \sigma]) - v([\sigma(i), \sigma] \setminus \{\sigma(i)\}) \quad \text{for each } i \in \{1, \ldots, n\}.$$

That is, each player receives his marginal contribution to the coalition this player joins. We note that a game is convex if and only if all marginal vectors are core elements (cf. [12] and [10]).

A concept that is closely related to convexity and to marginal vectors is permutational convexity (cf. [6]). An order σ : {1, . . ., n} → N is a permutationally convex order if for all 0 ≤ i ≤ k and S ⊆ N \ [σ(k), σ] it holds that

$$v([\sigma(i), \sigma] \cup S) + v([\sigma(k), \sigma]) \le v([\sigma(k), \sigma] \cup S) + v([\sigma(i), \sigma]). \tag{2}$$

Here we abuse our notation by defining [σ(0), σ] = ∅. Granot and Huberman [6] show that if σ : {1, . . ., n} → N is a permutationally convex order, then the corresponding marginal vector m^σ(v) is a core element. The converse of this statement is not true in general.
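Condition (2) is finite and can be checked mechanically for a given order. The following sketch (our own helper, applied to a hypothetical convex game, for which every order satisfies (2)) enumerates all 0 ≤ i ≤ k and all S ⊆ N \ [σ(k), σ]:

```python
from itertools import combinations

def is_permutationally_convex(v, order):
    """Check inequality (2) for all 0 <= i <= k and S a subset of
    N \\ [sigma(k), sigma], with [sigma(0), sigma] the empty set."""
    n = len(order)
    prefix = [frozenset(order[:m]) for m in range(n + 1)]
    for k in range(n + 1):
        rest = [q for q in order if q not in prefix[k]]
        for i in range(k + 1):
            for r in range(len(rest) + 1):
                for S in combinations(rest, r):
                    S = frozenset(S)
                    if v[prefix[i] | S] + v[prefix[k]] > v[prefix[k] | S] + v[prefix[i]]:
                        return False
    return True

# Hypothetical game v(S) = |S|^2: convex, hence every order is
# permutationally convex.
N = (1, 2, 3)
v = {frozenset(S): len(S) ** 2
     for r in range(len(N) + 1) for S in combinations(N, r)}
print(is_permutationally_convex(v, N))  # True
```

The enumeration is exponential in n, which is acceptable for the small illustrative games used here.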

We end this section with the definition of connected coalitions. Let σ : {1, . . ., n} → N be an order on the player set. A coalition S ⊆ N is called connected with respect to σ if there are i, j ∈ {1, . . ., n} with i ≤ j such that S = {σ(i), . . ., σ(j)}. If a coalition is not connected, then it consists of several connected components.

2.2. Sequencing situations with controllable processing times

In a sequencing situation with controllable processing times, or cps situation for short, there is a queue of agents, each with one job, in front of the machine. Each agent has to process his job on the machine. The set of agents is denoted by N = {1, . . ., n}. We assume there is an initial processing order on the jobs, denoted by σ0 : {1, . . ., n} → N. For notational simplicity we assume throughout the paper that σ0(i) = i for all i ∈ {1, . . ., n}. The job of agent i ∈ N has an initial processing time p_i ≥ 0. This initial processing time can be reduced to at most p̄_i, the crashed processing time of job i. The amount of time by which the initial processing time is reduced is the crash time. We assume that 0 ≤ p̄_i ≤ p_i for all i ∈ N. The cost of each job is linear in the completion time as well as in the crash time. That is, if t is the completion time and y the crash time of job i, then

$$c_i(t, y) = a_i t + b_i y,$$

where a_i and b_i are positive constants expressing the cost of one time unit waiting in the queue and of one time unit crashing job i, respectively. Since crashing a job requires additional resources, we assume that a_i ≤ b_i for all i ∈ N. That is, reducing the processing time of a job by one time unit costs at least as much as one time unit of completion time. A cps situation is now formally defined by the 6-tuple (N, σ0, a, b, p, p̄).

Since the processing times are not fixed, a processing schedule consists of a pair (σ, x) where σ : {1, . . ., n} → N denotes the processing order, and x the vector of processing times. The cost of agent i ∈ N at processing schedule (σ, x) is now equal to

$$C_i((\sigma, x)) = a_i \Bigl( \sum_{j \in \{1, \ldots, n\} : j \le \sigma^{-1}(i)} x_{\sigma(j)} \Bigr) + b_i (p_i - x_i),$$

since $\sum_{j \in \{1, \ldots, n\} : j \le \sigma^{-1}(i)} x_{\sigma(j)}$ is the completion time of job i with respect to the schedule (σ, x), and p_i − x_i equals the crash time of job i. A processing schedule (σ, x) is called optimal if it minimizes the sum of the costs of the agents, i.e., if

$$\sum_{i \in N} C_i((\sigma, x)) \le \sum_{i \in N} C_i((\tau, y)) \quad \text{for any processing schedule } (\tau, y).$$

Finding an optimal schedule for a cps situation falls into the class of NP-hard problems. The difficulty of this problem lies in finding the optimal processing times. Once the optimal processing times are known, it is straightforward to find the optimal processing order by using the Smith rule [13]. The Smith rule


states that the jobs should be processed in order of decreasing urgency index, where the urgency index u_i of a job i ∈ N is the quotient of the completion time cost coefficient and the processing time x_i, i.e. u_i = a_i / x_i.
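For fixed processing times the Smith rule is a single sort. The sketch below is our own, on a small illustrative instance (four jobs with equal waiting cost coefficients):

```python
def smith_order(a, x):
    """Return the jobs (1-indexed) sorted by decreasing urgency a_i / x_i."""
    jobs = range(1, len(a) - 1 + 1)
    return sorted(jobs, key=lambda i: a[i] / x[i], reverse=True)

# Illustrative instance; position 0 is a dummy so that jobs are 1-indexed.
a = [None, 1, 1, 1, 1]      # completion time cost coefficients
x = [None, 10, 3, 2, 15]    # processing times
print(smith_order(a, x))    # [3, 2, 1, 4]
```

Ties in the urgency index may be broken arbitrarily without affecting the total cost.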

Although finding the optimal processing schedule is difficult, the following lemma, due to Vickson [16], makes the problem a little easier. This lemma states that there is an optimal schedule such that the processing time of each job is either equal to its initial processing time, or to its crashed processing time. We note that this result easily follows from the linearity of the cost function.

Lemma 2.1 [16]. Let (N, σ0, a, b, p, p̄) be a cps situation. There exists an optimal schedule (σ, x) such that x_i ∈ {p_i, p̄_i} for all i ∈ N.

From Lemma 2.1 it follows that an optimal processing schedule can be found by considering all 2^n possibilities for the processing times and applying the Smith rule for each of these possibilities. Without loss of generality we will assume throughout the paper that optimal schedules satisfy the property of Lemma 2.1, i.e. if (σ, x) is an optimal schedule, then it holds that x_i ∈ {p_i, p̄_i} for all i ∈ N.
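This 2^n enumeration can be sketched directly (our own brute force, not from the paper; it uses as data the instance that appears later as Example 2.1, and by Lemma 2.1 restricting each x_i to the two extreme values loses no optimality):

```python
from itertools import product

def optimal_grand_coalition(a, b, p, pbar):
    """Enumerate all 2^n crash choices; for each, order the jobs by the
    Smith rule and evaluate the total cost sum_i (a_i C_i + b_i (p_i - x_i))."""
    n = len(a)
    best = None
    for x in product(*zip(p, pbar)):      # x_i in {p_i, pbar_i}
        order = sorted(range(n), key=lambda i: a[i] / x[i], reverse=True)
        t, cost = 0, 0
        for i in order:
            t += x[i]                     # completion time of job i
            cost += a[i] * t + b[i] * (p[i] - x[i])
        if best is None or cost < best[0]:
            best = (cost, order, x)
    return best

# Data of Example 2.1 below (jobs 0-indexed).
cost, order, x = optimal_grand_coalition((1, 1, 1, 1), (2, 2, 2, 2),
                                         (10, 4, 3, 15), (4, 3, 2, 5))
print(cost)   # 56; the initial schedule costs 73, so v(N) = 17
```

The enumeration is exponential, in line with the NP-hardness of the problem; it is only practical for small n.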

2.3. Sequencing games with controllable processing times

In this section we define sequencing games arising from cps situations, or cps games for short. Let (N, σ0, a, b, p, p̄) be a cps situation. The characteristic function of our game will express the cost savings that each coalition can obtain. For this we have to agree upon which schedules are admissible for a coalition. We will call a processing schedule admissible for a coalition if it satisfies three properties. First, the processing times of players outside the coalition should remain unchanged. Second, the processing times of the players belonging to the coalition should be feasible, i.e. between the crashed processing time and the initial processing time. And finally, the schedule should be such that the jobs outside the coalition remain in their initial position and no jumps take place over players outside the coalition. Let A_S denote the set of admissible schedules for coalition S ⊆ N. Mathematically, (σ, x) ∈ A_S if it holds that

(P1) x_i = p_i for all i ∈ N \ S,
(P2) x_i ∈ [p̄_i, p_i] for all i ∈ S,
(P3) {j ∈ N : σ^(-1)(j) ≤ σ^(-1)(i)} = {j ∈ N : σ0^(-1)(j) ≤ σ0^(-1)(i)} for all i ∈ N \ S.

Note that (P3) is the admissibility requirement from Curiel et al. [4]. This admissibility requirement is used for almost all sequencing games. Now we define the cps game (N, v) by

$$v(S) = \sum_{i \in S} C_i((\sigma_0, p)) - \min_{(\sigma, x) \in A_S} \sum_{i \in S} C_i((\sigma, x)).$$

That is, the worth of each coalition is the maximum cost savings it can obtain by means of an admissible processing schedule. It can easily be seen that cps games are superadditive.

Similarly to Lemma 2.1, it is straightforward to see that for each coalition S ⊆ N there is an optimal schedule such that the processing time of each job is either equal to its initial processing time, or to its crashed processing time. Therefore we assume throughout the paper that optimal schedules satisfy this property. We now give an example of a cps game.

Example 2.1. Let (N, σ0, a, b, p, p̄) be given by N = {1, 2, 3, 4}, a = (1, 1, 1, 1), b = (2, 2, 2, 2), p = (10, 4, 3, 15), p̄ = (4, 3, 2, 5). Now consider for instance coalition {1, 2, 3}. The optimal schedule for this


Table 1
A cps game

S     {1,2}  {1,3}  {1,4}  {2,3}  {2,4}  {3,4}  {1,2,3}  {1,2,4}  {1,3,4}  {2,3,4}  N
v(S)    6      0      0      1      0      0      15       7        6        2     17


coalition is given by (σ, x) with σ = 3214 and x = (10, 4, 2, 15). This yields cost savings v({1, 2, 3}) = 41 − 26 = 15. The cps game (N, v) is displayed in Table 1, where we note that v({i}) = 0 for all i ∈ N.
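The worth v({1, 2, 3}) can be verified by brute force over the admissible schedules: since {1, 2, 3} is connected and job 4 keeps its position and processing time by (P1) and (P3), the admissible schedules permute jobs 1, 2, 3 among the first three positions. A sketch with our own code, restricting each member's processing time to its two extreme values:

```python
from itertools import permutations, product

def worth_123(a, b, p, pbar):
    """Worth of the connected coalition {1,2,3} (0-indexed {0,1,2}): its
    members may permute within the first three positions and crash, while
    job 4 keeps its position and its initial processing time."""
    S = (0, 1, 2)

    def cost(order, x):
        # Sum of the coalition members' costs a_i * C_i + b_i * (p_i - x_i).
        t, total = 0, 0
        for i in order:
            t += x[i]
            if i in S:
                total += a[i] * t + b[i] * (p[i] - x[i])
        return total

    initial = cost((0, 1, 2, 3), p)
    best = min(cost(perm + (3,), xs + (p[3],))
               for perm in permutations(S)
               for xs in product(*[(p[i], pbar[i]) for i in S]))
    return initial - best

print(worth_123((1, 1, 1, 1), (2, 2, 2, 2), (10, 4, 3, 15), (4, 3, 2, 5)))  # 15
```

The initial coalition cost is 41 and the best admissible schedule costs 26, matching the value 15 in Table 1.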

From Example 2.1 it immediately follows that cps games are in general not σ0-component additive games (cf. Curiel et al. [5]). A game is called σ0-component additive if it is superadditive, and if the worth of each disconnected coalition with respect to σ0 is equal to the sum of the worths of its connected components. In the cps game of Example 2.1, {1, 3, 4} is a disconnected coalition with respect to σ0. It consists of connected components {1} and {3, 4}. Since it holds that v({1}) + v({3, 4}) < v({1, 3, 4}), we conclude that the worth of {1, 3, 4} is larger than the sum of the worths of its connected components. Therefore the game is not σ0-component additive. So in spite of the fact that we use a similar admissibility requirement as Curiel et al. [4], we lose σ0-component additivity. Note that many sequencing games are proven to have a nonempty core by means of σ0-component additivity (e.g. Borm et al. [2] and Hamers et al. [7]). Example 2.1 also shows that cps games need not be convex, contrary to the sequencing games of Curiel et al. [4].

Example 2.2. Consider the cps situation from Example 2.1, and the corresponding game. Consider coalition S = {1, 3}, i = 2 and j = 4. Since it holds that v({1, 2, 3}) + v({1, 3, 4}) = 21 > 17 = v(N) + v({1, 3}), we conclude that (N, v) is not convex.

3. The core of cps games

In this section we prove nonemptiness of the core of cps games. In particular, we provide two core elements that depend only on the optimal schedule for the grand coalition. Furthermore we show that many marginal vectors are core elements.

In the first part of this section we will provide two core elements that depend on the optimal schedule for the grand coalition. For the perception of these two core elements it is important to note that the optimal processing schedule for the grand coalition can be reached from the initial processing schedule in several ways. For example, one could first reduce the initial processing times of the jobs to the optimal processing times, and then rearrange the jobs. But one could also first rearrange the jobs and then reduce the initial processing times to the optimal processing times. We emphasize these two possibilities since the construction of our two core elements depends on them.

Let (N, σ0, a, b, p, p̄) be a cps situation. Let (σ, x) ∈ A_N be an optimal processing schedule for the grand coalition. For our first core element, denoted by c(x), we will reach this optimal processing schedule by first crashing the jobs, and then rearranging them. Let c(x) be obtained as follows. First give the cost savings (or costs) obtained by the crashing of a job to the job that crashes, and secondly give the cost savings obtained by interchanging two jobs to the back job. Or to put it in a formula:

$$c_i(x) = \Bigl( \Bigl( \sum_{j \in N : j \ge i} a_j \Bigr) - b_i \Bigr)(p_i - x_i) + \sum_{j \in N : j < i} (a_i x_j - a_j x_i)^+ \quad \text{for all } i \in N,$$

where $((\sum_{j \in N : j \ge i} a_j) - b_i)(p_i - x_i)$ are the cost savings obtained by crashing job i by (p_i − x_i) time. Also observe that if agent j is directly in front of agent i, then the cost savings obtainable for j and i by switching


their order equal (a_i x_j − a_j x_i)^+ = max{a_i x_j − a_j x_i, 0}.¹ Therefore $\sum_{j \in N : j < i} (a_i x_j - a_j x_i)^+$ expresses the cost savings that are obtained by moving job i up in the queue.

For our second core element, denoted by d((σ, x)), we reach the optimal schedule (σ, x) by first interchanging jobs to the optimal order, and then crashing them. Let d((σ, x)) be the following allocation. First give the possibly negative profit of each neighbour switch to the back job, and secondly give the profit of each crashing to the job that crashes. That is,


$$d_i((\sigma, x)) = \sum_{\substack{j \in N : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} (a_i p_j - a_j p_i) + \Bigl( \Bigl( \sum_{k \in N : \sigma^{-1}(k) \ge \sigma^{-1}(i)} a_k \Bigr) - b_i \Bigr)(p_i - x_i) \quad \text{for all } i \in N.$$

Because of notational convenience we will write c and d instead of c(x) and d((σ, x)) if there can be no confusion about the optimal order that is used.
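For the cps situation of Example 2.1 both allocations can be evaluated directly from the two formulas. A sketch with our own code; we use the optimal grand coalition schedule σ = 3214 with x = (10, 3, 2, 15), in which jobs 2 and 3 are crashed:

```python
def allocations_c_d(a, b, p, x, sigma):
    """c_i(x) and d_i((sigma, x)) from the two formulas above.
    Jobs are 0-indexed; sigma lists the jobs in processing order."""
    n = len(a)
    pos = {i: sigma.index(i) for i in range(n)}   # sigma^{-1}
    plus = lambda z: max(z, 0)
    c = [(sum(a[j] for j in range(i, n)) - b[i]) * (p[i] - x[i])
         + sum(plus(a[i] * x[j] - a[j] * x[i]) for j in range(i))
         for i in range(n)]
    d = [sum(a[i] * p[j] - a[j] * p[i] for j in range(i) if pos[j] > pos[i])
         + (sum(a[k] for k in range(n) if pos[k] >= pos[i]) - b[i]) * (p[i] - x[i])
         for i in range(n)]
    return c, d

a, b = (1, 1, 1, 1), (2, 2, 2, 2)
p, x = (10, 4, 3, 15), (10, 3, 2, 15)
c, d = allocations_c_d(a, b, p, x, [2, 1, 0, 3])   # order 3214, 0-indexed
print(c, d)   # [0, 8, 9, 0] [0, 7, 10, 0]; both sum to v(N) = 17
```

One can check against Table 1 that both vectors satisfy all core inequalities, as Theorem 3.1 below asserts.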

Before we show that c and d are core elements, we need two lemmas. In the first we obtain another expression for d, which we will use in the proof of Theorem 3.1. The proof of this lemma can be found in Appendix A. The second lemma is a technical but straightforward lemma which we will use throughout the paper.

Lemma 3.1. Let (N, σ0, a, b, p, p̄) be a cps situation and let (σ, x) ∈ A_N be an optimal processing schedule. For all i ∈ N it holds that

$$d_i = c_i - \sum_{\substack{j \in N : j > i \\ \sigma^{-1}(j) < \sigma^{-1}(i)}} a_j (p_i - x_i) + \sum_{\substack{j \in N : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} a_i (p_j - x_j).$$

Lemma 3.2. Let a_1, a_2, q_1, q̄_1, q_2 ≥ 0 with q_1 ≥ q̄_1. It holds that

$$a_2 (q_1 - \bar{q}_1) + (a_2 \bar{q}_1 - a_1 q_2)^+ \ge (a_2 q_1 - a_1 q_2)^+.$$

Proof. If (a_2 q_1 − a_1 q_2)^+ = 0 then the inequality is trivially satisfied, so suppose that (a_2 q_1 − a_1 q_2)^+ > 0. This implies that a_2 q_1 − a_1 q_2 > 0. Straightforwardly it follows that

$$a_2 (q_1 - \bar{q}_1) + (a_2 \bar{q}_1 - a_1 q_2)^+ \ge a_2 (q_1 - \bar{q}_1) + (a_2 \bar{q}_1 - a_1 q_2) = a_2 q_1 - a_1 q_2 = (a_2 q_1 - a_1 q_2)^+. \qquad \square$$

Now we are ready to prove that c ∈ C(v) and d ∈ C(v).

Theorem 3.1. Let (N, σ0, a, b, p, p̄) be a cps situation and let (N, v) be the corresponding game. Let (σ, x) ∈ A_N be an optimal processing schedule. It holds that c ∈ C(v) and d ∈ C(v).

Proof. First we show efficiency of c and d. Note that

$$\sum_{i \in N} c_i = \sum_{i \in N} \Bigl[ \Bigl( \Bigl( \sum_{j \in N : j \ge i} a_j \Bigr) - b_i \Bigr)(p_i - x_i) + \sum_{j \in N : j < i} (a_i x_j - a_j x_i)^+ \Bigr]$$
$$= \sum_{i \in N} \Bigl( \Bigl( \sum_{j \in N : j \ge i} a_j \Bigr) - b_i \Bigr)(p_i - x_i) \tag{3}$$
$$\quad + \sum_{i \in N} \sum_{j \in N : j < i} (a_i x_j - a_j x_i)^+. \tag{4}$$

¹ Note that a_i x_j − a_j x_i > 0 if and only if u_i = a_i / x_i > a_j / x_j = u_j. That is, a_i x_j − a_j x_i > 0 if and only if in the optimal order player i is positioned in front of player j.


Obviously, (3) corresponds to the cost savings that are obtained by reducing the initial processing times to the optimal processing times, while the jobs remain at their initial positions. Expression (4) expresses the total cost savings that are obtained by interchanging the jobs, after the initial processing times are reduced to the optimal processing times. We conclude that the sum of (3) and (4) coincides with the total cost savings v(N). So c is efficient.

For the efficiency of d observe that

$$\sum_{i \in N} d_i = \sum_{i \in N} \Bigl[ c_i - \sum_{\substack{j \in N : j > i \\ \sigma^{-1}(j) < \sigma^{-1}(i)}} a_j (p_i - x_i) + \sum_{\substack{j \in N : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} a_i (p_j - x_j) \Bigr]$$
$$= \sum_{i \in N} c_i - \sum_{i \in N} \sum_{\substack{j \in N : j > i \\ \sigma^{-1}(j) < \sigma^{-1}(i)}} a_j (p_i - x_i) + \sum_{i \in N} \sum_{\substack{j \in N : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} a_i (p_j - x_j) = \sum_{i \in N} c_i = v(N).$$

The first equality holds by virtue of Lemma 3.1. The third equality is satisfied since the two double summations cancel out against each other, and the last equality is satisfied by efficiency of c.

Now we show coalitional rationality. Let T ⊆ N. We need to show that Σ_{i∈T} c_i ≥ v(T) and Σ_{i∈T} d_i ≥ v(T). We will equivalently show that Σ_{i∈N\T} c_i + v(T) ≤ v(N) and Σ_{i∈N\T} d_i + v(T) ≤ v(N) by constructing a suboptimal schedule (σ^subopt, p^subopt) ∈ A_N that depends on an optimal processing schedule of T. We show that the total cost savings obtained in this suboptimal schedule exceed both Σ_{i∈N\T} c_i + v(T) and Σ_{i∈N\T} d_i + v(T). Obviously the cost savings in the suboptimal schedule are at most v(N).

We first find expressions for Σ_{i∈N\T} c_i and Σ_{i∈N\T} d_i. Note that

$$\sum_{i \in N \setminus T} c_i = \sum_{i \in N \setminus T} \Bigl( \Bigl( \sum_{j \in N : j \ge i} a_j \Bigr) - b_i \Bigr)(p_i - x_i) \tag{5}$$
$$\quad + \sum_{i, j \in N \setminus T : j < i} (a_i x_j - a_j x_i)^+ \tag{6}$$
$$\quad + \sum_{j \in T,\, i \in N \setminus T : j < i} (a_i x_j - a_j x_i)^+, \tag{7}$$

and that

$$\sum_{i \in N \setminus T} d_i = \sum_{i \in N \setminus T} c_i - \sum_{i \in N \setminus T} \sum_{\substack{j \in N : j > i \\ \sigma^{-1}(j) < \sigma^{-1}(i)}} a_j (p_i - x_i) \tag{8}$$
$$\quad + \sum_{i \in N \setminus T} \sum_{\substack{j \in N : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} a_i (p_j - x_j) \tag{9}$$
$$= \sum_{i \in N \setminus T} c_i \tag{10}$$
$$\quad - \sum_{i \in N \setminus T} \sum_{\substack{j \in T : j > i \\ \sigma^{-1}(j) < \sigma^{-1}(i)}} a_j (p_i - x_i) \tag{11}$$
$$\quad + \sum_{i \in N \setminus T} \sum_{\substack{j \in T : j < i \\ \sigma^{-1}(j) > \sigma^{-1}(i)}} a_i (p_j - x_j), \tag{12}$$

where the first equality holds because of Lemma 3.1. The second equality is satisfied because for each pair i, j ∈ N \ T with j > i and σ^(-1)(j) < σ^(-1)(i), every term in (8) also appears in (9) but with the opposite sign.


Let (σ^T, p^T) ∈ A_T be an optimal schedule for coalition T. Consider the suboptimal schedule (σ^subopt, p^subopt) ∈ A_N, where p_j^subopt = p_j^T if j ∈ T, p_j^subopt = x_j if j ∈ N \ T, and σ^subopt is the order obtained by applying the Smith rule using the suboptimal processing times. The total cost savings for the grand coalition at this suboptimal schedule equal

$$P = \sum_{j \in T} \Bigl( \Bigl( \sum_{i \in T : i \ge j} a_i \Bigr) - b_j \Bigr)(p_j - p_j^{\mathrm{subopt}}) \tag{13}$$
$$\quad + \sum_{j \in T} \Bigl( \sum_{i \in N \setminus T : i > j} a_i \Bigr)(p_j - p_j^{\mathrm{subopt}}) \tag{14}$$
$$\quad + \sum_{i \in N \setminus T} \Bigl( \Bigl( \sum_{j \in N : j \ge i} a_j \Bigr) - b_i \Bigr)(p_i - p_i^{\mathrm{subopt}}) \tag{15}$$
$$\quad + \sum_{j, i \in N \setminus T : j < i} (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}})^+ \tag{16}$$
$$\quad + \sum_{j \in N \setminus T,\, i \in T : j < i} (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}})^+ \tag{17}$$
$$\quad + \sum_{j \in T,\, i \in N \setminus T : j < i} (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}})^+ \tag{18}$$
$$\quad + \sum_{j, i \in T : j < i} (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}})^+. \tag{19}$$

Expressions (13) and (14) are the cost savings obtained by crashing the jobs of T, and expression (15) the cost savings obtained by crashing the jobs of N \ T. The cost savings obtained by rearranging the jobs are equal to the sum of expressions (16)–(19).

Now we will show that the sum of (5)–(7), (12) and v(T) is exceeded by P. Since (12) is nonnegative, this shows that Σ_{i∈N\T} c_i + v(T) is exceeded by P. Furthermore, since (11) is nonpositive, it also shows that Σ_{i∈N\T} d_i + v(T) is exceeded by P.

First note that the sum of expressions (13) and (19) exceeds v(T) because p_i^subopt = p_i^T for all i ∈ T. It also holds that (15) coincides with (5), and that (16) coincides with (6), because p_j^subopt = x_j for all j ∈ N \ T. Finally, note that expression (17) is nonnegative. Hence, for showing that the sum of expressions (5)–(7), (12) and v(T) is exceeded by P it is sufficient to show that the sum of (7) and (12) is exceeded by the sum of (14) and (18). We will show that this holds by comparing the sums term by term.

So let j ∈ T, i ∈ N \ T such that j < i. Note that it now holds that p_i^subopt = x_i and that p_j^subopt = p_j^T. We distinguish between two cases.

Case 1: σ^(-1)(j) ≤ σ^(-1)(i).
In this case i and j do not have a corresponding term in (12). Therefore we only need to compare the corresponding term in (7) with the corresponding terms in (14) and (18).

If it holds that p_j^subopt ≥ x_j, then we have that (a_i p_j^subopt − a_j p_i^subopt)^+ = (a_i p_j^subopt − a_j x_i)^+ ≥ (a_i x_j − a_j x_i)^+. Hence, the term in (7) is exceeded by the corresponding term in (18). Since the corresponding term in (14) is nonnegative, we conclude that the term in (7) is exceeded by the sum of the corresponding terms in (14) and (18).

So assume that p_j^subopt < x_j. Since optimal processing times only take two values by assumption, it holds that p_j^subopt = p̄_j and x_j = p_j. Since j < i, it follows that there is a term a_i(p_j − p_j^subopt) in expression (14). Now according to Lemma 3.2, using a_1 = a_j, a_2 = a_i, q_1 = p_j, q̄_1 = p_j^subopt and q_2 = p_i^subopt, it holds that a_i(p_j − p_j^subopt) + (a_i p_j^subopt − a_j p_i^subopt)^+ ≥ (a_i p_j − a_j p_i^subopt)^+ = (a_i x_j − a_j x_i)^+, where the equality holds because p_j = x_j and p_i^subopt = x_i.


So in this case it holds that the corresponding terms in (14) and (18) exceed the term in (7).

Case 2: σ^(-1)(j) > σ^(-1)(i).
Since σ^(-1)(j) > σ^(-1)(i), it necessarily holds that a_i / x_i ≥ a_j / x_j, and thus that a_i x_j − a_j x_i ≥ 0. Straightforwardly we obtain that

$$a_i (p_j - p_j^{\mathrm{subopt}}) + (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}})^+ \ge a_i (p_j - p_j^{\mathrm{subopt}}) + (a_i p_j^{\mathrm{subopt}} - a_j p_i^{\mathrm{subopt}}) = a_i p_j - a_j p_i^{\mathrm{subopt}}$$
$$= a_i (p_j - x_j) + (a_i x_j - a_j p_i^{\mathrm{subopt}}) = a_i (p_j - x_j) + (a_i x_j - a_j x_i)^+,$$

where the last equality holds since p_i^subopt = x_i and a_i x_j − a_j x_i ≥ 0. We conclude that the corresponding terms of expressions (14) and (18) exceed the corresponding terms of expressions (7) and (12). This completes the proof. □

Note that if the optimal schedule is not unique, then c and d are not uniquely determined, since c and d clearly depend on the optimal schedule that is used.

In the remaining part of this section we study marginal vectors that provide core elements. In the next theorem we show that many marginal vectors are core elements by showing that the corresponding orders are permutationally convex. In particular we show that the orders that put the jobs in reverse order with respect to σ0, but have job 1 at an arbitrary position, are permutationally convex.

Theorem 3.2. Let (N, σ0, a, b, p, p̄) be a cps situation and let (N, v) be the corresponding game. Let 1 ≤ j ≤ n and let σ : {1, . . ., n} → N be such that σ(i) = n + 1 − i for all 1 ≤ i < j, σ(i) = n + 2 − i for all j < i ≤ n and σ(j) = 1. Then it holds that σ is a permutationally convex order. In particular it holds that m^σ(v) ∈ C(v).

Since the proof is rather cumbersome, we have put it in Appendix A. The orders in Theorem 3.2 are not the only permutationally convex orders of cps games. In Van Velzen [15] it is shown for any TU-game (N, v) that if σ : {1, . . ., n} → N is a permutationally convex order, then σ^(n−1) is a permutationally convex order as well, where σ^(n−1) is obtained by interchanging the players in the (n − 1)st and nth position of σ. In the following we will show for cps games that if σ : {1, . . ., n} → N is a permutationally convex order, then σ¹ is a permutationally convex order as well. That is, the order obtained from σ by interchanging the first and second position is also permutationally convex. We first need the following lemma.

Lemma 3.3. Let (N, σ0, a, b, p, p̄) be a sequencing situation with controllable processing times and let (N, v) be the corresponding game. Let S, T ⊆ N with |S ∩ T| = 1. Then it holds that v(S) + v(T) ≤ v(S ∪ T).

The proof of this lemma is also recorded in Appendix A. Superadditivity of cps games together with Lemma 3.3 implies the following corollary.

Corollary 3.1. Let (N, σ0, a, b, p, p̄) be a cps situation with |N| = 3. Let (N, v) be the corresponding game. Then (N, v) is convex.

As promised, we now show that if σ : {1, . . ., n} → N is a permutationally convex order, then the order obtained from σ by interchanging the players in the first and second position, σ¹, is a permutationally convex order as well.

Theorem 3.3. Let (N, σ0, a, b, p, p̄) be a cps situation and let (N, v) be the corresponding game. If σ : {1, . . ., n} → N is a permutationally convex order, then σ¹ is a permutationally convex order as well. In particular it holds that m^(σ¹)(v) ∈ C(v).

Proof. Suppose that σ : {1, . . ., n} → N is a permutationally convex order. We need to show that for all 0 ≤ i ≤ k and all S ⊆ N \ [σ¹(k), σ¹] it holds that


$$v([\sigma^1(i), \sigma^1] \cup S) + v([\sigma^1(k), \sigma^1]) \le v([\sigma^1(k), \sigma^1] \cup S) + v([\sigma^1(i), \sigma^1]).$$

If i = 0, then the inequality is trivially satisfied because (N, v) satisfies superadditivity. If i ≥ 2, then this inequality is satisfied since in that case [σ¹(i), σ¹] = [σ(i), σ] and [σ¹(k), σ¹] = [σ(k), σ], and σ is a permutationally convex order by assumption. So let i = 1 and k > i. Since ([σ¹(i), σ¹] ∪ S) ∩ [σ¹(k), σ¹] = {σ¹(1)}, the inequality holds by Lemma 3.3. □

The final theorem of this section shows a way to alter a core element slightly, such that the new allocation is still in the core.

Theorem 3.4. Let (N, σ0, a, b, p, p̄) be a cps situation and (N, v) be the corresponding game. Let x ∈ C(v) and let j, k ∈ N. Furthermore let λ ≥ 0 be such that λ ≤ (v({j, k}) − x_k)^+, and let x̄ be such that x̄_i = x_i for all i ∈ N \ {j, k}, x̄_j = x_j − λ and x̄_k = x_k + λ. Then it holds that x̄ ∈ C(v).

Proof. If λ = 0, then it follows that x̄ = x, and trivially we have that x̄ ∈ C(v). So suppose that λ > 0. It now follows by definition of λ that x_k + λ ≤ v({j, k}).

Showing that x̄ ∈ C(v) boils down to showing that for each S ⊆ N \ {k} with j ∈ S it holds that x̄(S) ≥ v(S). Let S ⊆ N \ {k} be such that j ∈ S. Now note that

$$\sum_{i \in S \cup \{k\}} x_i \ge v(S \cup \{k\}) \ge v(S) + v(\{j, k\}), \tag{20}$$

where the first inequality holds because x ∈ C(v) and the second follows by Lemma 3.3. Thus

$$\sum_{i \in S} \bar{x}_i = \sum_{i \in S} x_i - \lambda = \sum_{i \in S \cup \{k\}} x_i - x_k - \lambda \ge v(S) + v(\{j, k\}) - x_k - \lambda \ge v(S),$$

where the first inequality follows by expression (20), and the second because x_k + λ ≤ v({j, k}). □
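The transfer of Theorem 3.4 can be illustrated on the game of Example 2.1 (our own sketch). One can check that x = (0, 8, 9, 0) lies in the core; take j = 2, k = 1 and the largest admissible transfer λ = (v({1, 2}) − x_1)^+ = 6:

```python
from itertools import combinations

# The cps game of Example 2.1 (Table 1); omitted coalitions have worth 0.
N = (1, 2, 3, 4)
worth = {(1, 2): 6, (2, 3): 1, (1, 2, 3): 15, (1, 2, 4): 7,
         (1, 3, 4): 6, (2, 3, 4): 2, (1, 2, 3, 4): 17}
v = {frozenset(S): worth.get(S, 0)
     for r in range(len(N) + 1) for S in combinations(N, r)}

def in_core(x, v, N):
    """x(S) >= v(S) for all S, with x(N) = v(N)."""
    if sum(x[i] for i in N) != v[frozenset(N)]:
        return False
    return all(sum(x[i] for i in S) >= v[frozenset(S)]
               for r in range(1, len(N)) for S in combinations(N, r))

x = {1: 0, 2: 8, 3: 9, 4: 0}                  # a core element of this game
j, k = 2, 1
lam = max(v[frozenset({j, k})] - x[k], 0)     # largest transfer allowed, here 6
xbar = {**x, j: x[j] - lam, k: x[k] + lam}    # shift lam from player j to k
print(in_core(x, v, N), in_core(xbar, v, N))  # True True
```

The shifted allocation x̄ = (6, 2, 9, 0) is again a core element, as the theorem guarantees.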

Theorem 3.4 enables us to show a nice feature of cps games. Let (N, σ0, a, b, p, p̄) be a cps situation and let (N, v) be the corresponding game. Let σ : {1, . . ., n} → N be such that m^σ(v) ∈ C(v). If k and j are the first and second player according to σ, respectively, then it holds that m^σ_k(v) = 0 and m^σ_j(v) = v({j, k}). Let λ = v({j, k}). According to Theorem 3.4 it holds that x̄ ∈ C(v), where x̄ is given by x̄_k = m^σ_k(v) + λ = v({j, k}), x̄_j = m^σ_j(v) − λ = 0 and x̄_i = m^σ_i(v) for all i ∈ N \ {j, k}. Now note that x̄ = m^(σ¹)(v), and thus that m^(σ¹)(v) ∈ C(v). Therefore we have the following proposition.

Proposition 3.1. Let ðN ; r0; a; b; p; �pÞ be a cps situation and let (N,v) be the corresponding game. Ifr : {1, . . ., n} ! N is such that mr(v) 2 C(v), then mr1ðvÞ 2 CðvÞ. In particular, the number of marginal vectors

in the core is even.

4. Convexity of cps games

In this section we investigate convexity of cps games. In particular we show that if all completion time cost coefficients, all crash time cost coefficients and all maximum crash times are equal, then the corresponding game is convex. Furthermore we show that relaxing this condition might lead to nonconvex cps games.

The next theorem shows that a specific class of cps situations has corresponding convex games.

Theorem 4.1. Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a cps situation and let $(N,v)$ be the corresponding game. If $\alpha_i = \alpha_j$, $\beta_i = \beta_j$ and $p_i - \bar p_i = p_j - \bar p_j$ for all $i,j \in N$, then $(N,v)$ is convex.


Proof. Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a cps situation with $\alpha_i = \alpha_j$, $\beta_i = \beta_j$ and $p_i - \bar p_i = p_j - \bar p_j$ for all $i,j \in N$ and let $(N,v)$ be the corresponding game. For notational convenience we write $\alpha_i = \alpha$, $\beta_i = \beta$ and $p_i - \bar p_i = \rho$ for all $i \in N$, where $\alpha$, $\beta$ and $\rho$ here denote scalars and not vectors. We assume that $\rho > 0$: if $\rho = 0$, then no crashing is possible, the cps situation is nothing more than a standard sequencing situation, and the resulting game is convex by Curiel et al. [4].

First we show that the optimal schedule is easy to determine in this case. Let $S \subseteq N$. Obviously $S$ consists of several, say $m \geq 1$, connected components with respect to $\sigma_0$. Denote these connected components by $S_1,\ldots,S_m$. We reach the optimal schedule for $S$ by first rearranging the jobs according to the Smith-rule and then crashing them. Note that, since $\alpha_i = \alpha$ and $\beta_i = \beta$ for all $i \in S$, it is easy to determine the positions of the jobs that are crashed in the optimal schedule. In particular, if $k \in S$ is the $l$th job of $S$ in the optimal processing order, then there are $|S| - l$ jobs of $S$ in the queue behind $k$. Hence, if job $k$ crashes, then this yields cost savings of $((|S| - l + 1)\alpha - \beta)\rho$. This term is nonnegative only if $(|S| - l + 1)\alpha - \beta$ is nonnegative. Furthermore, observe that these cost savings do not depend on $k$. We conclude that the cost savings due to crashing jobs equal $\rho\sum_{k=1}^{|S|}((|S| - k + 1)\alpha - \beta)_+$. These cost savings depend only on the size of $S$, and not on the jobs of $S$ or on the order in which the jobs are processed.

Because the cost savings obtained from optimally crashing jobs are independent of the order in which the jobs are processed, it follows that the total cost savings are maximized if the cost savings obtained from interchanging the jobs are maximized. In particular, the total cost savings are maximized if the jobs are lined up in order of decreasing urgencies. Therefore

$$v(S) = \sum_{l=1}^{m}\sum_{i,j \in S_l : i<j}(\alpha_j p_i - \alpha_i p_j)_+ + \rho\sum_{k=1}^{|S|}((|S| - k + 1)\alpha - \beta)_+.$$

Since $(N,z)$ with $z(S) = \sum_{l=1}^{m}\sum_{i,j \in S_l : i<j}(\alpha_j p_i - \alpha_i p_j)_+$ is convex (Curiel et al. [4]), it is sufficient for convexity of $(N,v)$ to show that $(N,w)$ with $w(S) = \rho\sum_{k=1}^{|S|}((|S| - k + 1)\alpha - \beta)_+$ is convex. So let $i,j \in N$ and $S \subseteq N\setminus\{i,j\}$. We distinguish between three cases in order to show that $w(S \cup \{i\}) + w(S \cup \{j\}) \leq w(S \cup \{i,j\}) + w(S)$.

Case 1: $w(S \cup \{i\}) = w(S \cup \{j\}) = 0$.
Trivially it follows that $w(S \cup \{i\}) + w(S \cup \{j\}) = 0 \leq w(S \cup \{i,j\}) + w(S)$.

Case 2: $w(S \cup \{i\}) = w(S \cup \{j\}) > 0$ and $w(S) = 0$.
Since $w(S) = 0$ and $\rho > 0$, it holds that $|S|\alpha - \beta \leq 0$. Therefore it holds that $w(S \cup \{i\}) = w(S \cup \{j\}) = ((|S|+1)\alpha - \beta)\rho$. Hence,

$$w(S \cup \{i\}) + w(S \cup \{j\}) = 2((|S|+1)\alpha - \beta)\rho \leq ((|S|+1)\alpha - \beta)\rho + ((|S|+2)\alpha - \beta)\rho = w(S \cup \{i,j\}) = w(S \cup \{i,j\}) + w(S).$$

Case 3: $w(S) > 0$.
Because $w(S) > 0$, it holds that $w(S \cup \{i\}) = w(S) + ((|S|+1)\alpha - \beta)\rho$. Furthermore we have that $w(S \cup \{i,j\}) = w(S \cup \{j\}) + ((|S|+2)\alpha - \beta)\rho$. Therefore,

$$w(S \cup \{i\}) + w(S \cup \{j\}) = w(S) + ((|S|+1)\alpha - \beta)\rho + w(S \cup \{j\}) \leq w(S) + ((|S|+2)\alpha - \beta)\rho + w(S \cup \{j\}) = w(S) + w(S \cup \{i,j\}). \qquad \square$$
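Since $w(S)$ depends only on $|S|$, the convexity argument of the proof can be checked by brute force for sample parameter values (chosen arbitrarily here):

```python
from itertools import combinations, chain

def w(size, alpha, beta, rho):
    # w(S) depends only on |S|: crashing the job in position k of S
    # saves ((|S| - k + 1) * alpha - beta) * rho when that is positive
    return rho * sum(max((size - k + 1) * alpha - beta, 0) for k in range(1, size + 1))

n, alpha, beta, rho = 6, 2, 5, 1
players = range(1, n + 1)
subsets = list(chain.from_iterable(combinations(players, r) for r in range(n + 1)))

# supermodularity: w(S+i) + w(S+j) <= w(S+i+j) + w(S) for distinct i, j not in S;
# note w(S+i) = w(S+j) because w depends only on coalition size
ok = all(
    2 * w(len(S) + 1, alpha, beta, rho)
    <= w(len(S) + 2, alpha, beta, rho) + w(len(S), alpha, beta, rho)
    for S in subsets for i, j in combinations(set(players) - set(S), 2)
)
assert ok
```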

Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a cps situation with $\alpha_i = \alpha_j$, $\beta_i = \beta_j$ and $p_i - \bar p_i = p_j - \bar p_j$ for all $i,j \in N$, and let $(N,v)$ be the corresponding game. According to the proof of Theorem 4.1, $(N,v)$ is the sum of a symmetric game and a classical sequencing game à la Curiel et al. [4]. Hence, the Shapley value of $(N,v)$ is equal to the sum of the Shapley value of the symmetric game and the Shapley value of the classical sequencing game. Since both can be computed easily, it follows that the Shapley value of cps games arising from these special cps situations can be computed easily.

The following examples show that by relaxing the condition of Theorem 4.1 convexity might be lost.

Example 4.1 ($\alpha_i = \alpha_j$, $\beta_i = \beta_j$ and $\bar p_i = \bar p_j$ for all $i,j \in N$). Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be given by $N = \{1,2,3,4\}$, $\alpha_i = 2$, $\beta_i = 5$ and $\bar p_i = 2$ for all $i \in N$. Furthermore, let $p = (7,3,3,7)$. Let $(N,v)$ be the corresponding game. Since $2\alpha < \beta$, it follows that for coalition $\{1,3\}$ no cost savings can be obtained by crashing job 1. Since also no cost savings can be obtained by rearranging the jobs, it follows that $v(\{1,3\}) = 0$. Because $2\alpha < \beta$ and $3\alpha > \beta$ it follows that for the grand coalition it is optimal to crash exactly two jobs, namely the jobs in the first and second position of the optimal processing order. By considering all possibilities of crashing two of the four jobs, it is straightforward to see that the schedule $(2314,(7,2,2,7)) \in A^N$ is optimal. As a result we have that $v(N) = 20$. Now observe that $(2314,(7,2,3,7)) \in A^{\{1,2,3\}}$ and that $(1234,(2,3,3,7)) \in A^{\{1,3,4\}}$. These schedules yield cost savings of 17 and 5 for coalitions $\{1,2,3\}$ and $\{1,3,4\}$ respectively. Therefore $v(\{1,2,3\}) \geq 17$ and $v(\{1,3,4\}) \geq 5$. We conclude that $v(\{1,2,3\}) + v(\{1,3,4\}) \geq 22 > 20 = v(N) + v(\{1,3\})$, and thus that $(N,v)$ is not convex.
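The numbers quoted in Example 4.1 can be recomputed directly, reading the cost savings of a coalition at a schedule as the $\alpha$-weighted completion-time reductions of its members minus the crash costs $\beta_i(p_i - x_i)$ of its members. A minimal sketch under that reading (the helper names are ours):

```python
def completion_times(order, x):
    # order lists the jobs front to back; x[job] is its processing time
    times, t = {}, 0
    for job in order:
        t += x[job]
        times[job] = t
    return times

def savings(members, order, x, alpha, beta, p, initial_order):
    before = completion_times(initial_order, p)
    after = completion_times(order, x)
    gain = sum(alpha[i] * (before[i] - after[i]) for i in members)
    crash_cost = sum(beta[i] * (p[i] - x[i]) for i in members)
    return gain - crash_cost

# Example 4.1: alpha_i = 2, beta_i = 5, pbar_i = 2, p = (7,3,3,7)
alpha = {i: 2 for i in range(1, 5)}
beta = {i: 5 for i in range(1, 5)}
p = {1: 7, 2: 3, 3: 3, 4: 7}

v_N = savings([1, 2, 3, 4], [2, 3, 1, 4], {1: 7, 2: 2, 3: 2, 4: 7}, alpha, beta, p, [1, 2, 3, 4])
v_123 = savings([1, 2, 3], [2, 3, 1, 4], {1: 7, 2: 2, 3: 3, 4: 7}, alpha, beta, p, [1, 2, 3, 4])
v_134 = savings([1, 3, 4], [1, 2, 3, 4], {1: 2, 2: 3, 3: 3, 4: 7}, alpha, beta, p, [1, 2, 3, 4])
assert (v_N, v_123, v_134) == (20, 17, 5)
assert v_123 + v_134 > v_N + 0   # v({1,3}) = 0: the convexity inequality fails
```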

Example 4.2 ($\alpha_i = \alpha_j$, $p_i = p_j$ and $\bar p_i = \bar p_j$ for all $i,j \in N$). Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be given by $N = \{1,2,3,4,5\}$, $\alpha_i = 2$, $p_i = 2$ and $\bar p_i = 1$ for all $i \in N$. Furthermore, let $\beta = (6,6,3,3,6)$. Let $(N,v)$ be the corresponding game. For each coalition, the optimal order can be reached by first interchanging jobs, and then crashing them. Since $\alpha_i = 2$ and $p_i = 2$ for all $i \in N$ it follows that $\alpha_i p_j - \alpha_j p_i = 0$ for all $i,j \in N$. That is, first rearranging jobs yields no cost savings and no extra costs. Hence, the cost savings for each coalition consist of cost savings due to crashing only. Because $\alpha_i = 2$ and $p_i - \bar p_i = 1$ for all $i \in N$, it is optimal for each coalition to put the jobs with lowest $\beta_i$ to the front as much as possible. In particular, for coalitions $\{1,3,4\}$ and $N$ the optimal schedules are $(12345,(2,2,1,2,2)) \in A^{\{1,3,4\}}$ and $(34125,(2,2,1,1,2)) \in A^N$ respectively. This yields cost savings of $v(\{1,3,4\}) = 1$ and $v(N) = 12$. For coalitions $\{1,2,3,4\}$ and $\{1,3,4,5\}$ the optimal schedules are given by $(34125,(2,2,1,1,2)) \in A^{\{1,2,3,4\}}$ and $(12345,(1,2,1,1,2)) \in A^{\{1,3,4,5\}}$, respectively, with cost savings $v(\{1,2,3,4\}) = 8$ and $v(\{1,3,4,5\}) = 6$. We conclude that $v(\{1,2,3,4\}) + v(\{1,3,4,5\}) = 14 > 13 = v(N) + v(\{1,3,4\})$, and thus that $(N,v)$ is not convex.

Example 4.3 ($\beta_i = \beta_j$, $p_i = p_j$ and $\bar p_i = \bar p_j$ for all $i,j \in N$). Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be given by $N = \{1,2,3,4\}$, $\beta_i = 5$, $p_i = 2$ and $\bar p_i = 1$ for all $i \in N$. Furthermore, let $\alpha = (1,1,5,1)$. Let $(N,v)$ be the corresponding game. Since coalition $\{1,3\}$ is a disconnected coalition, the only cost savings it can obtain are by crashing job 1. Hence, $v(\{1,3\}) = 1$. For the grand coalition the schedule $(3124,(2,2,1,2)) \in A^N$ is optimal, with cost savings $v(N) = 19$. Furthermore it holds that $(3124,(2,2,1,2)) \in A^{\{1,2,3\}}$ and that $(1234,(1,2,1,2)) \in A^{\{1,3,4\}}$. These schedules lead to cost savings of 18 and 3 for coalitions $\{1,2,3\}$ and $\{1,3,4\}$ respectively. We conclude that $v(\{1,2,3\}) + v(\{1,3,4\}) \geq 21 > 20 = v(N) + v(\{1,3\})$, and thus that $(N,v)$ is not convex.
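Example 4.3 can be checked in the same way, now with job-dependent completion time cost coefficients (again under our reading of the cost model, with hypothetical helper names):

```python
def coalition_savings(members, order, x, alpha, beta, p, initial_order):
    # completion-time costs saved by the members minus their crash costs
    def completion(seq, times):
        out, t = {}, 0
        for job in seq:
            t += times[job]
            out[job] = t
        return out
    before, after = completion(initial_order, p), completion(order, x)
    return sum(alpha[i] * (before[i] - after[i]) - beta[i] * (p[i] - x[i]) for i in members)

alpha = {1: 1, 2: 1, 3: 5, 4: 1}
beta = {i: 5 for i in range(1, 5)}
p = {i: 2 for i in range(1, 5)}

v_N = coalition_savings([1, 2, 3, 4], [3, 1, 2, 4], {1: 2, 2: 2, 3: 1, 4: 2}, alpha, beta, p, [1, 2, 3, 4])
v_123 = coalition_savings([1, 2, 3], [3, 1, 2, 4], {1: 2, 2: 2, 3: 1, 4: 2}, alpha, beta, p, [1, 2, 3, 4])
v_134 = coalition_savings([1, 3, 4], [1, 2, 3, 4], {1: 1, 2: 2, 3: 1, 4: 2}, alpha, beta, p, [1, 2, 3, 4])
v_13 = coalition_savings([1, 3], [1, 2, 3, 4], {1: 1, 2: 2, 3: 2, 4: 2}, alpha, beta, p, [1, 2, 3, 4])
assert (v_N, v_123, v_134, v_13) == (19, 18, 3, 1)
assert v_123 + v_134 > v_N + v_13
```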

As a final remark to this paper we conjecture that for a cps situation $(N,\sigma_0,\alpha,\beta,p,\bar p)$ the corresponding game is convex if it holds that $\alpha_i = \alpha_j$, $\beta_i = \beta_j$ and $p_i = p_j$ for all $i,j \in N$.

Appendix A

Proof of Lemma 3.1. Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a cps situation. Let $(\sigma,x) \in A^N$ be an optimal schedule for the grand coalition. Let $i \in N$. Recall that

$$\delta_i = \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}(\alpha_i p_j - \alpha_j p_i) \qquad (\text{A.1})$$
$$\quad + \Big(\Big(\sum_{k \in N : \sigma^{-1}(k)\geq\sigma^{-1}(i)}\alpha_k\Big) - \beta_i\Big)(p_i - x_i). \qquad (\text{A.2})$$

First note for expression (A.1) that

$$\sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}(\alpha_i p_j - \alpha_j p_i) = \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}(\alpha_i x_j - \alpha_j p_i) \qquad (\text{A.3})$$
$$\quad + \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}\alpha_i(p_j - x_j). \qquad (\text{A.4})$$

Furthermore, note for expression (A.2) that

$$\Big(\Big(\sum_{k \in N : \sigma^{-1}(k)\geq\sigma^{-1}(i)}\alpha_k\Big) - \beta_i\Big)(p_i - x_i) = \Big(\Big(\sum_{k \in N : k\geq i}\alpha_k\Big) - \beta_i\Big)(p_i - x_i) \qquad (\text{A.5})$$
$$\quad - \sum_{j \in N : j>i \text{ and } \sigma^{-1}(j)<\sigma^{-1}(i)}\alpha_j(p_i - x_i) \qquad (\text{A.6})$$
$$\quad + \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}\alpha_j(p_i - x_i). \qquad (\text{A.7})$$

Now adding expressions (A.3) and (A.7) yields

$$\sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}(\alpha_i x_j - \alpha_j p_i) + \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}\alpha_j(p_i - x_i) = \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}(\alpha_i x_j - \alpha_j x_i) = \sum_{j \in N : j<i}(\alpha_i x_j - \alpha_j x_i)_+. \qquad (\text{A.8})$$

The last equality is satisfied because according to the Smith-rule it holds that $\sigma^{-1}(j) > \sigma^{-1}(i)$ if and only if $\alpha_i x_j - \alpha_j x_i \geq 0$. Observe that the sum of expressions (A.5) and (A.8) coincides with $\gamma_i$.

We conclude that

$$\delta_i = \gamma_i - \sum_{j \in N : j>i \text{ and } \sigma^{-1}(j)<\sigma^{-1}(i)}\alpha_j(p_i - x_i) + \sum_{j \in N : j<i \text{ and } \sigma^{-1}(j)>\sigma^{-1}(i)}\alpha_i(p_j - x_j). \qquad \square$$
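The identity just derived can be spot-checked numerically. The sketch below (our own construction, with $\gamma_i$ taken as the sum of expressions (A.5) and (A.8), as stated above) builds a Smith-rule order for random integer data and compares both sides of the identity for every job:

```python
import random

random.seed(1)
n = 6
alpha = [0] + [random.randint(1, 9) for _ in range(n)]   # 1-indexed
p = [0] + [random.randint(5, 12) for _ in range(n)]
x = [0] + [pi - random.randint(0, 3) for pi in p[1:]]    # crashed times x <= p
beta = [0] + [random.randint(10, 20) for _ in range(n)]

# Smith-rule order on the crashed times: decreasing urgency alpha_i / x_i
order = sorted(range(1, n + 1), key=lambda i: (-alpha[i] / x[i], i))
pos = {job: r for r, job in enumerate(order)}            # pos plays the role of sigma^{-1}

for i in range(1, n + 1):
    behind = [j for j in range(1, n + 1) if j < i and pos[j] > pos[i]]
    ahead = [j for j in range(1, n + 1) if j > i and pos[j] < pos[i]]
    # delta_i from (A.1) + (A.2)
    delta = (sum(alpha[i] * p[j] - alpha[j] * p[i] for j in behind)
             + (sum(alpha[k] for k in range(1, n + 1) if pos[k] >= pos[i]) - beta[i]) * (p[i] - x[i]))
    # gamma_i reconstructed as (A.5) + (A.8)
    gamma = ((sum(alpha[k] for k in range(i, n + 1)) - beta[i]) * (p[i] - x[i])
             + sum(max(alpha[i] * x[j] - alpha[j] * x[i], 0) for j in range(1, i)))
    # right-hand side of the concluding identity of Lemma 3.1
    rhs = (gamma - sum(alpha[j] for j in ahead) * (p[i] - x[i])
           + sum(alpha[i] * (p[j] - x[j]) for j in behind))
    assert delta == rhs
```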

The following lemma states that if a coalition $S \subseteq N$ consists of several components, then the optimal schedule of $S$ is also optimal for the last component. This result is natural, since the jobs from other components do not benefit from the crashing of a job of the last component.

Lemma A.1. Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a sequencing situation with controllable processing times. Let $S \subseteq N$ consist of $t \geq 2$ components and let $S_t$ be the last component. Then every optimal schedule for $S$ restricted to $S_t$ is optimal for $S_t$.

The proof is omitted since it is straightforward.

Proof of Theorem 3.2. Let $(N,\sigma_0,\alpha,\beta,p,\bar p)$ be a cps situation and let $(N,v)$ be the corresponding game. Let $1 \leq j \leq n$ and let $\sigma : \{1,\ldots,n\} \rightarrow N$ be such that $\sigma(i) = n+1-i$ for all $1 \leq i < j$, $\sigma(i) = n+2-i$ for all $j < i \leq n$ and $\sigma(j) = 1$. We need to show that for all $0 \leq i \leq k$ and all $S \subseteq N\setminus[\sigma(k),\sigma]$ expression (2) holds.

So let $0 \leq i \leq k$ and let $S \subseteq N\setminus[\sigma(k),\sigma]$. We assume that $1 \leq i < k$ and that $S \neq \emptyset$, since for $i = 0$ expression (2) holds trivially because of superadditivity. Note that since $S \neq \emptyset$, it holds that $k \neq n$.

Because of the structure of $\sigma$, the sets $[\sigma(k),\sigma]$ and $[\sigma(i),\sigma]$ both consist of at most two connected components. In particular, they consist of possibly job 1 and a tail of $\sigma_0$. If $1 \in [\sigma(k),\sigma]$, then we denote the first component by $K_1 = \{1\}$ and the second by $K_2 = [\sigma(k),\sigma]\setminus\{1\}$. If $1 \notin [\sigma(k),\sigma]$, then there is one component, which we denote by $K_2$, and we define $K_1 = \emptyset$. Similarly, if $1 \in [\sigma(i),\sigma]$, then $I_1 = \{1\}$ and $I_2 = [\sigma(i),\sigma]\setminus\{1\}$. If $1 \notin [\sigma(i),\sigma]$, then there is one component, which we denote by $I_2$, and we define $I_1 = \emptyset$.

Denote the optimal processing schedule of $[\sigma(k),\sigma]$ by $(\tau^{[\sigma(k),\sigma]}, p^{[\sigma(k),\sigma]})$, and the optimal processing schedule of $[\sigma(i),\sigma] \cup S$ by $(\tau^{[\sigma(i),\sigma]\cup S}, p^{[\sigma(i),\sigma]\cup S})$. We will create suboptimal schedules for coalitions $[\sigma(k),\sigma] \cup S$ and $[\sigma(i),\sigma]$, depending on the optimal schedules of coalitions $[\sigma(i),\sigma] \cup S$ and $[\sigma(k),\sigma]$. In particular we "allocate" the processing times of coalitions $[\sigma(i),\sigma] \cup S$ and $[\sigma(k),\sigma]$ to coalitions $[\sigma(i),\sigma]$ and $[\sigma(k),\sigma] \cup S$. In this way we obtain suboptimal processing schedules. The profits obtained at these suboptimal processing schedules give lower bounds for $v([\sigma(k),\sigma] \cup S)$ and $v([\sigma(i),\sigma])$. We will show that the sum of these lower bounds exceeds the sum of $v([\sigma(i),\sigma] \cup S)$ and $v([\sigma(k),\sigma])$. We distinguish between two cases.

Case 1: $I_2 \subsetneq K_2$.
Before we obtain our suboptimal schedules, we first derive expressions for $v([\sigma(k),\sigma])$ and $v([\sigma(i),\sigma] \cup S)$. It holds that

$$v([\sigma(k),\sigma]) = \sum_{l \in K_1}\Big(\Big(\sum_{m \in [\sigma(k),\sigma]}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]}\big) \qquad (\text{A.9})$$
$$\quad + v(K_2). \qquad (\text{A.10})$$

The cost savings of $[\sigma(k),\sigma]$ can be divided into two parts. The cost savings obtained by a possible crash of job 1 equal (A.9). Note that if $1 \notin [\sigma(k),\sigma]$, then $K_1 = \emptyset$ and expression (A.9) is zero by definition. The other cost savings, which can be obtained by interchanging and crashing jobs of $K_2$, equal (A.10) according to Lemma A.1.

Since $I_2$ and $K_2$ are both tails of $\sigma_0$, $I_2 \subsetneq K_2$ and $S \cap K_2 = \emptyset$, we conclude that $S$ is not connected to $I_2$. Denote the components of $[\sigma(i),\sigma] \cup S$ by $S_1,\ldots,S_{t+1}$, for $t \geq 1$. So $S_{t+1} = I_2$ and if $1 \in [\sigma(i),\sigma]$, then $1 \in S_1$. Therefore

$$v([\sigma(i),\sigma] \cup S) = \sum_{l \in I_1}\Big(\Big(\sum_{m \in [\sigma(i),\sigma]\cup S}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(i),\sigma]\cup S}\big) \qquad (\text{A.11})$$
$$\quad + \sum_{l \in S}\Big(\Big(\sum_{m \in S\cup I_2 : m\geq l}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(i),\sigma]\cup S}\big) \qquad (\text{A.12})$$
$$\quad + \sum_{l,m \in S_1 : l<m}\big(\alpha_m p_l^{[\sigma(i),\sigma]\cup S} - \alpha_l p_m^{[\sigma(i),\sigma]\cup S}\big)_+ \qquad (\text{A.13})$$
$$\quad + \sum_{a=2}^{t}\Big(\sum_{l,m \in S_a : l<m}\big(\alpha_m p_l^{[\sigma(i),\sigma]\cup S} - \alpha_l p_m^{[\sigma(i),\sigma]\cup S}\big)_+\Big) \qquad (\text{A.14})$$
$$\quad + v(I_2). \qquad (\text{A.15})$$

Expression (A.11) denotes the cost savings obtained by a possible crash of job 1, if $1 \in [\sigma(i),\sigma]$. The cost savings obtained by crashing the jobs in $S$ equal (A.12). Expressions (A.13) and (A.14) are the cost savings obtained by switching the jobs in $S \cup I_1$. Because of Lemma A.1 the cost savings obtained by crashing and rearranging jobs in $I_2$ can be expressed as (A.15).


Now we create suboptimal schedules $(\pi^{[\sigma(i),\sigma]}, p^{[\sigma(i),\sigma]})$ and $(\pi^{[\sigma(k),\sigma]\cup S}, p^{[\sigma(k),\sigma]\cup S})$ for coalitions $[\sigma(i),\sigma]$ and $[\sigma(k),\sigma] \cup S$ respectively. Let

$$p_l^{[\sigma(i),\sigma]} = \begin{cases} p_l^{[\sigma(i),\sigma]\cup S}, & \text{if } l \in I_2;\\ \max\{p_l^{[\sigma(i),\sigma]\cup S},\, p_l^{[\sigma(k),\sigma]}\}, & \text{if } l = 1;\\ p_l, & \text{if } l \in N\setminus[\sigma(i),\sigma],\ l \neq 1. \end{cases}$$

Note that if $1 \notin [\sigma(i),\sigma]$, then it necessarily holds that $1 \notin [\sigma(k),\sigma]$ or that $1 \notin [\sigma(i),\sigma] \cup S$, because $[\sigma(k),\sigma] \cap S = \emptyset$. This implies that if $1 \notin [\sigma(i),\sigma]$, then $p_1^{[\sigma(k),\sigma]} = p_1$ or $p_1^{[\sigma(i),\sigma]\cup S} = p_1$, and therefore that $p_1^{[\sigma(i),\sigma]} = p_1$. We conclude that the processing times $p^{[\sigma(i),\sigma]}$ satisfy the admissibility constraints (P1) and (P2). Furthermore let $\pi^{[\sigma(i),\sigma]}$ be obtained from $\sigma_0$ by rearranging the jobs of $[\sigma(i),\sigma]$ according to the Smith-rule using processing times $p^{[\sigma(i),\sigma]}$, taking of course into account the admissibility constraint (P3). Note that this schedule restricted to $I_2$ is an optimal schedule for $I_2$ according to Lemma A.1. This yields

$$v([\sigma(i),\sigma]) \geq \sum_{l \in I_1}\Big(\Big(\sum_{m \in [\sigma(i),\sigma]}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(i),\sigma]}\big) \qquad (\text{A.16})$$
$$\quad + v(I_2). \qquad (\text{A.17})$$

Similarly, let

$$p_l^{[\sigma(k),\sigma]\cup S} = \begin{cases} p_l^{[\sigma(k),\sigma]}, & \text{if } l \in K_2;\\ p_l^{[\sigma(i),\sigma]\cup S}, & \text{if } l \in S,\ l \neq 1;\\ \min\{p_l^{[\sigma(i),\sigma]\cup S},\, p_l^{[\sigma(k),\sigma]}\}, & \text{if } l = 1;\\ p_l, & \text{if } l \in N\setminus([\sigma(k),\sigma]\cup S),\ l \neq 1. \end{cases}$$

Observe that if $1 \notin [\sigma(k),\sigma] \cup S$, then $1 \notin [\sigma(i),\sigma] \cup S$ and $1 \notin [\sigma(k),\sigma]$. Hence, if $1 \notin [\sigma(k),\sigma] \cup S$, then it holds that $p_1^{[\sigma(k),\sigma]} = p_1$ and $p_1^{[\sigma(i),\sigma]\cup S} = p_1$, and therefore that $p_1^{[\sigma(k),\sigma]\cup S} = p_1$. We conclude that the processing times $p^{[\sigma(k),\sigma]\cup S}$ satisfy the admissibility constraints (P1) and (P2).

Now, using these processing times, let $\pi^{[\sigma(k),\sigma]\cup S}$ be the order obtained from $\sigma_0$ by interchanging the jobs according to the Smith-rule, while of course taking into account the admissibility constraint (P3). However, only interchange two jobs if both jobs are in $K_2$, or both jobs are in $S \cup I_1$. This last condition is a technical detail that keeps the number of terms of our lower bound for the cost savings of coalition $[\sigma(k),\sigma] \cup S$ manageable. Observe that this restriction only lowers our lower bound of $v([\sigma(k),\sigma] \cup S)$. Again note, by Lemma A.1, that this processing schedule restricted to $K_2$ is optimal for $K_2$. This yields

$$v([\sigma(k),\sigma] \cup S) \geq \sum_{l \in K_1}\Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]\cup S}\big) \qquad (\text{A.18})$$
$$\quad + \sum_{l \in S}\Big(\Big(\sum_{m \in S\cup K_2 : m\geq l}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]\cup S}\big) \qquad (\text{A.19})$$
$$\quad + \sum_{l,m \in S_1 : l<m}\big(\alpha_m p_l^{[\sigma(k),\sigma]\cup S} - \alpha_l p_m^{[\sigma(k),\sigma]\cup S}\big)_+ \qquad (\text{A.20})$$
$$\quad + \sum_{a=2}^{t}\Big(\sum_{l,m \in S_a : l<m}\big(\alpha_m p_l^{[\sigma(k),\sigma]\cup S} - \alpha_l p_m^{[\sigma(k),\sigma]\cup S}\big)_+\Big) \qquad (\text{A.21})$$
$$\quad + v(K_2). \qquad (\text{A.22})$$


Now first observe that expressions (A.17) and (A.15) coincide, as well as expressions (A.22) and (A.10). Furthermore expression (A.19) exceeds expression (A.12) since $I_2 \subseteq K_2$ and $p_l^{[\sigma(k),\sigma]\cup S} = p_l^{[\sigma(i),\sigma]\cup S}$ for all $l \in S$. It also holds that expression (A.14) coincides with expression (A.21) because $p_l^{[\sigma(k),\sigma]\cup S} = p_l^{[\sigma(i),\sigma]\cup S}$ for all $l \in S$. We conclude that showing (2) boils down to showing that the sum of expressions (A.16), (A.18) and (A.20) exceeds the sum of expressions (A.9), (A.11) and (A.13). We now distinguish between three subcases.

Subcase 1a: $1 \notin [\sigma(k),\sigma]$.
This implies that $K_1 = \emptyset$, and thus that $I_1 = \emptyset$. Therefore expressions (A.9), (A.11), (A.16) and (A.18) are all equal to zero. Hence, it is sufficient to show that (A.20) exceeds (A.13) in order to show that (2) holds. Because $I_1 = \emptyset$ it follows that $1 \notin S_1$. Therefore it holds that $p_j^{[\sigma(i),\sigma]\cup S} = p_j^{[\sigma(k),\sigma]\cup S}$ for all $j \in S_1$. We conclude that (A.20) and (A.13) coincide.

Subcase 1b: $1 \in [\sigma(k),\sigma]$ and $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(i),\sigma]\cup S}$.
Because $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(i),\sigma]\cup S}$ it follows from the definition of $p_1^{[\sigma(k),\sigma]\cup S}$ that $p_1^{[\sigma(i),\sigma]\cup S} \leq p_1^{[\sigma(k),\sigma]}$. We conclude that it must hold that $p_1^{[\sigma(i),\sigma]} = p_1^{[\sigma(k),\sigma]}$. Note that it now holds that (A.20) and (A.13) coincide, since $p_j^{[\sigma(i),\sigma]\cup S} = p_j^{[\sigma(k),\sigma]\cup S}$ for all $j \in S_1$. So showing that (2) is satisfied boils down to showing that the sum of expressions (A.16) and (A.18) exceeds the sum of expressions (A.9) and (A.11).

Now first suppose that $p_1^{[\sigma(k),\sigma]} = p_1$. Then expression (A.9) is equal to zero. Since $p_1^{[\sigma(i),\sigma]} = p_1^{[\sigma(k),\sigma]}$ it follows that expression (A.16) is equal to zero as well. Also we have that expression (A.18) exceeds expression (A.11), since $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(i),\sigma]\cup S}$ and $[\sigma(i),\sigma] \subseteq [\sigma(k),\sigma]$. So expression (2) is satisfied in this case.

Secondly, suppose that $p_1^{[\sigma(k),\sigma]} < p_1$. Since optimal processing times can only take two values it holds that $p_1^{[\sigma(k),\sigma]} = \bar p_1$. Since by the assumption of subcase 1b it holds that $p_1^{[\sigma(k),\sigma]\cup S} = \min\{p_1^{[\sigma(k),\sigma]}, p_1^{[\sigma(i),\sigma]\cup S}\} = p_1^{[\sigma(i),\sigma]\cup S}$, it follows that $p_1^{[\sigma(i),\sigma]\cup S} = \bar p_1$. We conclude that $p_1^{[\sigma(i),\sigma]} = \max\{p_1^{[\sigma(k),\sigma]}, p_1^{[\sigma(i),\sigma]\cup S}\} = \max\{\bar p_1, \bar p_1\} = \bar p_1$.

Observe that because $1 \in [\sigma(k),\sigma]$, we have $1 \notin S$. Since $p_1^{[\sigma(i),\sigma]\cup S} < p_1$, this implies that $1 \in [\sigma(i),\sigma]$. Thus $K_1 = I_1 = \{1\}$. Hence it follows for the sum of expressions (A.9) and (A.11) that

$$\sum_{l \in K_1}\Big(\Big(\sum_{m \in [\sigma(k),\sigma]}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]}\big) + \sum_{l \in I_1}\Big(\Big(\sum_{m \in [\sigma(i),\sigma]\cup S}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(i),\sigma]\cup S}\big)$$
$$= \Big(\Big(\sum_{m \in [\sigma(k),\sigma]}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1) + \Big(\Big(\sum_{m \in [\sigma(i),\sigma]\cup S}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1)$$
$$= \Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1) + \Big(\Big(\sum_{m \in [\sigma(i),\sigma]}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1),$$

where the first equality is satisfied since $K_1 = I_1 = \{1\}$ and $p_1^{[\sigma(k),\sigma]} = p_1^{[\sigma(i),\sigma]\cup S} = \bar p_1$. Note that this last expression is equal to the sum of expressions (A.16) and (A.18) since $p_1^{[\sigma(i),\sigma]} = p_1^{[\sigma(k),\sigma]\cup S} = \bar p_1$ and $K_1 = I_1 = \{1\}$. We conclude that (2) is satisfied.

Subcase 1c: $1 \in [\sigma(k),\sigma]$ and $p_1^{[\sigma(k),\sigma]\cup S} < p_1^{[\sigma(i),\sigma]\cup S}$.
We now necessarily have that $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(k),\sigma]}$ and that $p_1^{[\sigma(i),\sigma]} = p_1^{[\sigma(i),\sigma]\cup S}$. Since optimal processing times can only take two values it follows that $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(k),\sigma]} = \bar p_1$ and that $p_1^{[\sigma(i),\sigma]} = p_1^{[\sigma(i),\sigma]\cup S} = p_1$.


Therefore expressions (A.11) and (A.16) are both equal to zero. So showing that (2) holds boils down to showing that the sum of expressions (A.18) and (A.20) exceeds the sum of expressions (A.9) and (A.13). First observe for expression (A.18) that

$$\sum_{l \in K_1}\Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]\cup S}\big) = \Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1)$$
$$\geq \Big(\Big(\sum_{m \in [\sigma(k),\sigma]}\alpha_m\Big) - \beta_1\Big)(p_1 - \bar p_1) \qquad (\text{A.23})$$
$$\quad + \Big(\sum_{m \in S_1\setminus\{1\}}\alpha_m\Big)(p_1 - \bar p_1), \qquad (\text{A.24})$$

where the equality holds because $K_1 = \{1\}$ and because $p_1^{[\sigma(k),\sigma]\cup S} = \bar p_1$. The inequality holds since $(S_1\setminus\{1\}) \subseteq S$. Also observe for expression (A.20) that

$$\sum_{l,m \in S_1 : l<m}\big(\alpha_m p_l^{[\sigma(k),\sigma]\cup S} - \alpha_l p_m^{[\sigma(k),\sigma]\cup S}\big)_+ = \sum_{l,m \in S_1\setminus\{1\} : l<m}\big(\alpha_m p_l^{[\sigma(k),\sigma]\cup S} - \alpha_l p_m^{[\sigma(k),\sigma]\cup S}\big)_+ \qquad (\text{A.25})$$
$$\quad + \sum_{m \in S_1\setminus\{1\}}\big(\alpha_m \bar p_1 - \alpha_1 p_m^{[\sigma(k),\sigma]\cup S}\big)_+. \qquad (\text{A.26})$$

For the equality we have used that $p_1^{[\sigma(k),\sigma]\cup S} = \bar p_1$. Since $K_1 = \{1\}$ and $p_1^{[\sigma(k),\sigma]} = \bar p_1$, it follows that expression (A.23) coincides with expression (A.9). Therefore it is now sufficient to show that the sum of expressions (A.24)–(A.26) exceeds expression (A.13). If $1 \notin S_1$, then expression (A.25) coincides with expression (A.13), since $p_m^{[\sigma(k),\sigma]\cup S} = p_m^{[\sigma(i),\sigma]\cup S}$ for all $m \in S_1\setminus\{1\}$. Therefore assume that $1 \in S_1$. For the sum of expressions (A.24) and (A.26) we have that

$$\Big(\sum_{m \in S_1\setminus\{1\}}\alpha_m\Big)(p_1 - \bar p_1) + \sum_{m \in S_1\setminus\{1\}}\big(\alpha_m \bar p_1 - \alpha_1 p_m^{[\sigma(k),\sigma]\cup S}\big)_+ \geq \sum_{m \in S_1\setminus\{1\}}\big(\alpha_m p_1 - \alpha_1 p_m^{[\sigma(k),\sigma]\cup S}\big)_+$$
$$= \sum_{m \in S_1\setminus\{1\}}\big(\alpha_m p_1^{[\sigma(i),\sigma]\cup S} - \alpha_1 p_m^{[\sigma(i),\sigma]\cup S}\big)_+. \qquad (\text{A.27})$$

The inequality holds because of Lemma 3.2, by taking $\alpha_1 = \alpha_1$, $\alpha_2 = \alpha_m$, $\rho_1 = p_1$, $\bar\rho_1 = \bar p_1$ and $\rho_2 = p_m^{[\sigma(k),\sigma]\cup S}$. The equality is satisfied because $p_1^{[\sigma(i),\sigma]\cup S} = p_1$ and $p_m^{[\sigma(k),\sigma]\cup S} = p_m^{[\sigma(i),\sigma]\cup S}$ for all $m \in S_1\setminus\{1\}$. Now observe that the sum of expressions (A.25) and (A.27) coincides with expression (A.13), since $p_m^{[\sigma(k),\sigma]\cup S} = p_m^{[\sigma(i),\sigma]\cup S}$ for all $m \in S_1\setminus\{1\}$ and by our assumption that $1 \in S_1$. We conclude that (2) is satisfied.

Case 2: $I_2 = K_2$.
Since we have assumed that $i < k$, and hence that $[\sigma(i),\sigma] \neq [\sigma(k),\sigma]$, we conclude, using the structure of $\sigma$, that $I_1 = \emptyset$ and $K_1 = \{1\}$. Denote the components of $[\sigma(i),\sigma] \cup S$ by $S_1,\ldots,S_t$, with $t \geq 1$. Note that it might hold that $I_2 \subsetneq S_t$. We have that

$$v([\sigma(k),\sigma]) = \Big(\Big(\sum_{m \in [\sigma(k),\sigma]}\alpha_m\Big) - \beta_1\Big)\big(p_1 - p_1^{[\sigma(k),\sigma]}\big) \qquad (\text{A.28})$$
$$\quad + v(K_2), \qquad (\text{A.29})$$


and that

$$v([\sigma(i),\sigma] \cup S) = \sum_{l \in [\sigma(i),\sigma]\cup S}\Big(\Big(\sum_{m \in [\sigma(i),\sigma]\cup S : m\geq l}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(i),\sigma]\cup S}\big) \qquad (\text{A.30})$$
$$\quad + \sum_{a=1}^{t}\Big(\sum_{l,m \in S_a : l<m}\big(\alpha_m p_l^{[\sigma(i),\sigma]\cup S} - \alpha_l p_m^{[\sigma(i),\sigma]\cup S}\big)_+\Big). \qquad (\text{A.31})$$

First we note, because $I_1 = \emptyset$, that $v([\sigma(i),\sigma]) = v(I_2) = v(K_2)$. So showing that (2) is satisfied boils down to showing that the sum of expressions (A.28), (A.30) and (A.31) is exceeded by $v([\sigma(k),\sigma] \cup S)$. We obtain a lower bound of $v([\sigma(k),\sigma] \cup S)$ by creating a suboptimal schedule for coalition $[\sigma(k),\sigma] \cup S$, which we denote by $(\pi^{[\sigma(k),\sigma]\cup S}, p^{[\sigma(k),\sigma]\cup S})$. Let the processing times be given by

$$p_l^{[\sigma(k),\sigma]\cup S} = \begin{cases} p_l^{[\sigma(i),\sigma]\cup S}, & \text{if } l \in [\sigma(i),\sigma]\cup S;\\ p_l^{[\sigma(k),\sigma]}, & \text{if } l = 1;\\ p_l, & \text{if } l \in N\setminus([\sigma(k),\sigma]\cup S). \end{cases}$$

Let $\pi^{[\sigma(k),\sigma]\cup S}$ be constructed by rearranging the jobs of coalition $[\sigma(k),\sigma] \cup S$ using the Smith-rule and our suboptimal processing times. However, only switch jobs if both are in $[\sigma(i),\sigma] \cup S$. We can conclude for the cost savings of $[\sigma(k),\sigma] \cup S$ that

$$v([\sigma(k),\sigma] \cup S) \geq \Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S}\alpha_m\Big) - \beta_1\Big)\big(p_1 - p_1^{[\sigma(k),\sigma]\cup S}\big) \qquad (\text{A.32})$$
$$\quad + \sum_{l \in ([\sigma(k),\sigma]\cup S)\setminus\{1\}}\Big(\Big(\sum_{m \in [\sigma(k),\sigma]\cup S : m\geq l}\alpha_m\Big) - \beta_l\Big)\big(p_l - p_l^{[\sigma(k),\sigma]\cup S}\big) \qquad (\text{A.33})$$
$$\quad + \sum_{a=1}^{t}\Big(\sum_{l,m \in S_a : l<m}\big(\alpha_m p_l^{[\sigma(k),\sigma]\cup S} - \alpha_l p_m^{[\sigma(k),\sigma]\cup S}\big)_+\Big). \qquad (\text{A.34})$$

Expression (A.28) is exceeded by expression (A.32) since $p_1^{[\sigma(k),\sigma]\cup S} = p_1^{[\sigma(k),\sigma]}$. Note that expression (A.33) coincides with expression (A.30) since $([\sigma(k),\sigma]\cup S)\setminus\{1\} = [\sigma(i),\sigma]\cup S$ and because $p_l^{[\sigma(k),\sigma]\cup S} = p_l^{[\sigma(i),\sigma]\cup S}$ for all $l \in [\sigma(i),\sigma]\cup S$. Furthermore we have that expressions (A.31) and (A.34) coincide because $p_l^{[\sigma(i),\sigma]\cup S} = p_l^{[\sigma(k),\sigma]\cup S}$ for all $l \in [\sigma(i),\sigma]\cup S$. We conclude that the sum of $v([\sigma(k),\sigma] \cup S)$ and $v([\sigma(i),\sigma])$ exceeds the sum of $v([\sigma(i),\sigma] \cup S)$ and $v([\sigma(k),\sigma])$. This ends the proof. $\square$

Proof of Lemma 3.3. Let $S,T \subseteq N$ with $S \cap T = \{i\}$ for some $i \in N$. We first introduce some notation. Denote the connected components of $S$ by $S_k$, $k = 1,\ldots,a$, and those of $T$ by $T_l$, $l = 1,\ldots,b$. Let $S_c$, $1 \leq c \leq a$, be such that $i \in S_c$, and let $T_d$, $1 \leq d \leq b$, be such that $i \in T_d$. Finally let $S_c^1 = \{m \in S_c : m < i\}$, $S_c^2 = \{m \in S_c : m > i\}$, $T_d^1 = \{m \in T_d : m < i\}$ and $T_d^2 = \{m \in T_d : m > i\}$. Note that since $S \cap T = \{i\}$ it holds that $S_c^1 \cap T_d^1 = \emptyset$ and that $S_c^2 \cap T_d^2 = \emptyset$. Let the connected components of $S \cup T$ be denoted by $B_k$, $k = 1,\ldots,e$. Now let $(\sigma^S, p^S) \in A^S$ be an optimal processing schedule for coalition $S$ and $(\sigma^T, p^T) \in A^T$ be an optimal processing schedule of coalition $T$. It holds that

$$v(S) = \sum_{j \in S}\Big(\Big(\sum_{k \in S : k\geq j}\alpha_k\Big) - \beta_j\Big)\big(p_j - p_j^S\big) \qquad (\text{A.35})$$
$$\quad + \sum_{k=1}^{a}\Big(\sum_{l,m \in S_k : l<m}\big(\alpha_m p_l^S - \alpha_l p_m^S\big)_+\Big). \qquad (\text{A.36})$$


The cost savings for coalition $S$ can be split into two components. The first component, (A.35), contains the (possibly negative) cost savings that are obtained by crashing some of the jobs. Expression (A.36) contains the cost savings obtained by interchanging the jobs. Similarly it holds that

$$v(T) = \sum_{j \in T}\Big(\Big(\sum_{k \in T : k\geq j}\alpha_k\Big) - \beta_j\Big)\big(p_j - p_j^T\big) \qquad (\text{A.37})$$
$$\quad + \sum_{k=1}^{b}\Big(\sum_{l,m \in T_k : l<m}\big(\alpha_m p_l^T - \alpha_l p_m^T\big)_+\Big). \qquad (\text{A.38})$$

We will now construct a suboptimal schedule for coalition $S \cup T$, and show that the cost savings obtained at this suboptimal schedule exceed the sum of $v(S)$ and $v(T)$.

Consider the suboptimal processing schedule $(\sigma^{S\cup T}, p^{S\cup T}) \in A^{S\cup T}$ for coalition $S \cup T$, where

$$p_j^{S\cup T} = \begin{cases} p_j^S, & \text{if } j \in S\setminus\{i\};\\ p_j^T, & \text{if } j \in T\setminus\{i\};\\ \min\{p_i^S, p_i^T\}, & \text{if } j = i;\\ p_j, & \text{if } j \in N\setminus(S\cup T). \end{cases}$$

Furthermore, $\sigma^{S\cup T}$ is the order obtained by rearranging the jobs of $S \cup T$ according to the Smith-rule using as processing times $p^{S\cup T}$ and taking into account the admissibility constraint (P3). This yields

$$v(S \cup T) \geq \sum_{j \in S\cup T}\Big(\Big(\sum_{k \in S\cup T : k\geq j}\alpha_k\Big) - \beta_j\Big)\big(p_j - p_j^{S\cup T}\big) + \sum_{k=1}^{e}\Big(\sum_{l,m \in B_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big)$$
$$= \sum_{j \in S\setminus\{i\}}\Big(\Big(\sum_{k \in S\cup T : k\geq j}\alpha_k\Big) - \beta_j\Big)\big(p_j - p_j^S\big) \qquad (\text{A.39})$$
$$\quad + \sum_{j \in T\setminus\{i\}}\Big(\Big(\sum_{k \in S\cup T : k\geq j}\alpha_k\Big) - \beta_j\Big)\big(p_j - p_j^T\big) \qquad (\text{A.40})$$
$$\quad + \Big(\Big(\sum_{k \in S\cup T : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^{S\cup T}\big) \qquad (\text{A.41})$$
$$\quad + \sum_{k=1}^{e}\Big(\sum_{l,m \in B_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big). \qquad (\text{A.42})$$

We will now distinguish between three cases in order to prove our inequality.

Case 1: $p_i^S = p_i^T = p_i$.
First note that it holds that $p_j^{S\cup T} = p_j^S$ for all $j \in S$ and that $p_j^{S\cup T} = p_j^T$ for all $j \in T$. Expression (A.41) is equal to zero because $p_i^{S\cup T} = p_i$. Furthermore, expression (A.39) exceeds expression (A.35), and expression (A.40) exceeds expression (A.37), since $p_i^S = p_i^T = p_i$. So showing that $v(S) + v(T) \leq v(S \cup T)$ boils down to showing that expression (A.42) exceeds the sum of expressions (A.36) and (A.38). Let $j,h \in S_k$ for some $1 \leq k \leq a$, i.e. let $j,h$ appear in a connected component of $S$. It follows that they also appear in a connected component of $S \cup T$, i.e. there is a $1 \leq g \leq e$ with $j,h \in B_g$. We


conclude that the term in (A.36) dealing with $j$ and $h$ also appears in (A.42). Similarly, each pair $j,h \in T_l$, for some $1 \leq l \leq b$, appears in a connected component of $S \cup T$. Therefore we also have that the term in (A.38) dealing with $j,h$ also appears in (A.42). Observe that each pair in $S$ is not in $T$, and that each pair in $T$ is not in $S$, because $|S \cap T| = 1$. Thus it holds, since $p_j^{S\cup T} = p_j^S$ for all $j \in S$, $p_j^{S\cup T} = p_j^T$ for all $j \in T$ and every term in (A.42) is nonnegative, that (A.42) exceeds the sum of (A.36) and (A.38).

Case 2: $p_i^S = p_i^T = \bar p_i$.
Note that it holds by definition of $p_i^{S\cup T}$ that $p_i^{S\cup T} = p_i^S = p_i^T$. Observe that this implies that $p_j^{S\cup T} = p_j^S$ for all $j \in S$ and $p_j^{S\cup T} = p_j^T$ for all $j \in T$. First we will develop a lower bound for (A.41):

$$\Big(\Big(\sum_{k \in S\cup T : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^{S\cup T}\big) = \Big(\Big(\sum_{k \in S : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^{S\cup T}\big) + \Big(\Big(\sum_{k \in T : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^{S\cup T}\big) + (\beta_i - \alpha_i)\big(p_i - p_i^{S\cup T}\big)$$
$$\geq \Big(\Big(\sum_{k \in S : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^S\big) \qquad (\text{A.43})$$
$$\quad + \Big(\Big(\sum_{k \in T : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^T\big), \qquad (\text{A.44})$$

where the inequality is satisfied since $\beta_i \geq \alpha_i$, and by assumption $p_i^{S\cup T} = p_i^S = p_i^T$. Observe that the sum of expressions (A.39) and (A.43) exceeds expression (A.35) and that the sum of expressions (A.40) and (A.44) exceeds expression (A.37). Hence, showing that $v(S) + v(T) \leq v(S \cup T)$ boils down to showing that expression (A.42) exceeds the sum of expressions (A.36) and (A.38). For this last statement we refer to Case 1, where we already showed this inequality. Note that we can refer to Case 1, since in both cases we have that $p_j^{S\cup T} = p_j^S$ for all $j \in S$ and $p_j^{S\cup T} = p_j^T$ for all $j \in T$.

Case 3: $p_i^S \neq p_i^T$.
Without loss of generality assume that $p_i^S < p_i^T$, or equivalently that $p_i^S = \bar p_i$ and $p_i^T = p_i$. Hence we have, by definition of $p_i^{S\cup T}$, that $p_i^{S\cup T} = p_i^S = \bar p_i$. Observe that we can obtain the following lower bound for (A.41):

$$\Big(\Big(\sum_{k \in S\cup T : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^{S\cup T}\big) = \Big(\Big(\sum_{k \in S : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^S\big) + \Big(\sum_{k \in T : k>i}\alpha_k\Big)\big(p_i - p_i^S\big)$$
$$\geq \Big(\Big(\sum_{k \in S : k\geq i}\alpha_k\Big) - \beta_i\Big)\big(p_i - p_i^S\big) \qquad (\text{A.45})$$
$$\quad + \Big(\sum_{m \in T_d^2}\alpha_m\Big)\big(p_i - p_i^S\big), \qquad (\text{A.46})$$

where the equality holds because $p_i^{S\cup T} = p_i^S$ and the inequality since $T_d^2 \subseteq T$. Observe that it holds that expression (A.40) exceeds expression (A.37), since $p_i^T = p_i$. Furthermore we have that the sum of expressions (A.39) and (A.45) exceeds expression (A.35). Therefore, showing that $v(S) + v(T) \leq v(S \cup T)$ boils


down to showing that the sum of expressions (A.42) and (A.46) exceeds the sum of expressions (A.36) and (A.38). Note that because $|S \cap T| = 1$ we have the following lower bound for (A.42):

$$\sum_{k=1}^{e}\Big(\sum_{l,m \in B_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big) \geq \sum_{k=1}^{a}\Big(\sum_{l,m \in S_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big) \qquad (\text{A.47})$$
$$\quad + \sum_{k=1}^{b}\Big(\sum_{l,m \in T_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big). \qquad (\text{A.48})$$

Now it holds that (A.47) coincides with (A.36), since $p_l^{S\cup T} = p_l^S$ for all $l \in S$. So we only need to show that the sum of expressions (A.48) and (A.46) exceeds expression (A.38). Observe that for (A.48) we have that

$$\sum_{k=1}^{b}\Big(\sum_{l,m \in T_k : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big) = \sum_{k=1}^{b}\Big(\sum_{l,m \in T_k\setminus\{i\} : l<m}\big(\alpha_m p_l^{S\cup T} - \alpha_l p_m^{S\cup T}\big)_+\Big) + \sum_{l \in T_d^1}\big(\alpha_i p_l^{S\cup T} - \alpha_l p_i^{S\cup T}\big)_+ + \sum_{m \in T_d^2}\big(\alpha_m p_i^{S\cup T} - \alpha_i p_m^{S\cup T}\big)_+$$
$$\geq \sum_{k=1}^{b}\Big(\sum_{l,m \in T_k\setminus\{i\} : l<m}\big(\alpha_m p_l^T - \alpha_l p_m^T\big)_+\Big) + \sum_{l \in T_d^1}\big(\alpha_i p_l^T - \alpha_l p_i^T\big)_+ \qquad (\text{A.49})$$
$$\quad + \sum_{m \in T_d^2}\big(\alpha_m p_i^S - \alpha_i p_m^T\big)_+, \qquad (\text{A.50})$$

where the inequality holds since $p_j^{S\cup T} = p_j^T$ for all $j \in T\setminus\{i\}$, $p_i^{S\cup T} \leq p_i^T$ and $p_i^{S\cup T} = p_i^S$. Now adding (A.46) and (A.50) yields

$$\Big(\sum_{m \in T_d^2}\alpha_m\Big)\big(p_i - p_i^S\big) + \sum_{m \in T_d^2}\big(\alpha_m p_i^S - \alpha_i p_m^T\big)_+ \geq \sum_{m \in T_d^2}\big(\alpha_m p_i - \alpha_i p_m^T\big)_+ = \sum_{m \in T_d^2}\big(\alpha_m p_i^T - \alpha_i p_m^T\big)_+, \qquad (\text{A.51})$$

where the inequality holds because of Lemma 3.2, by taking $\alpha_1 = \alpha_i$, $\alpha_2 = \alpha_m$, $\rho_1 = p_i$, $\bar\rho_1 = p_i^S$ and $\rho_2 = p_m^T$. The equality is satisfied because $p_i^T = p_i$. Since the sum of expressions (A.49) and (A.51) coincides with expression (A.38) we conclude that $v(S) + v(T) \leq v(S \cup T)$. $\square$
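The elementary inequality of Lemma 3.2 that is invoked at (A.27) and (A.51), which in our reading states that $\alpha_2(\rho_1 - \bar\rho_1) + (\alpha_2\bar\rho_1 - \alpha_1\rho_2)_+ \geq (\alpha_2\rho_1 - \alpha_1\rho_2)_+$ whenever $\bar\rho_1 \leq \rho_1$, can be verified exhaustively on a small integer grid:

```python
def pos(t):
    # the positive part (t)_+
    return max(t, 0)

# check a2*(r1 - r1b) + (a2*r1b - a1*r2)_+ >= (a2*r1 - a1*r2)_+
# for all small integer parameter choices with r1b <= r1
grid = range(1, 11)
ok = all(
    a2 * (r1 - r1b) + pos(a2 * r1b - a1 * r2) >= pos(a2 * r1 - a1 * r2)
    for a1 in grid for a2 in grid for r2 in grid
    for r1 in grid for r1b in range(1, r1 + 1)
)
assert ok
```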

References

[1] B. Alidaee, A. Ahmadian, Two parallel machine sequencing problems involving controllable job processing times, European Journal of Operational Research 70 (1993) 335–341.
[2] P. Borm, G. Fiestras-Janeiro, H. Hamers, E. Sanchez, M. Voorneveld, On the convexity of sequencing games with due dates, European Journal of Operational Research 136 (2002) 616–634.
[3] P. Calleja, P. Borm, H. Hamers, F. Klijn, M. Slikker, On a new class of parallel sequencing situations and related games, Annals of Operations Research 109 (2002) 265–277.
[4] I. Curiel, G. Pederzoli, S. Tijs, Sequencing games, European Journal of Operational Research 40 (1989) 344–351.
[5] I. Curiel, J. Potters, V. Rajendra Prasad, S. Tijs, B. Veltman, Sequencing and cooperation, Operations Research 42 (1994) 566–568.
[6] D. Granot, G. Huberman, The relationship between convex games and minimal cost spanning tree games: A case for permutationally convex games, SIAM Journal on Algebraic Discrete Methods 3 (1982) 288–292.
[7] H. Hamers, P. Borm, S. Tijs, On games corresponding to sequencing situations with ready times, Mathematical Programming 70 (1995) 1–13.
[8] H. Hamers, F. Klijn, J. Suijs, On the balancedness of multimachine sequencing games, European Journal of Operational Research 119 (1999) 678–691.
[9] H. Hamers, F. Klijn, B. van Velzen, On the convexity of precedence sequencing games, CentER Discussion Paper 2002-112, Tilburg University, Tilburg, The Netherlands, 2002.
[10] T. Ichiishi, Super-modularity: Applications to convex games and the greedy algorithm for LP, Journal of Economic Theory 25 (1981) 283–286.
[11] E. Nowicki, S. Zdrzalka, A survey of results for sequencing problems with controllable processing times, Discrete Applied Mathematics 26 (1990) 271–287.
[12] L. Shapley, Cores of convex games, International Journal of Game Theory 1 (1971) 11–26.
[13] W. Smith, Various optimizers for single-stage production, Naval Research Logistics Quarterly 3 (1956) 59–66.
[14] A. van den Nouweland, M. Krabbenborg, J. Potters, Flowshops with a dominant machine, European Journal of Operational Research 62 (1992) 38–46.
[15] B. van Velzen, A note on permutationally convex games, Mimeo, Tilburg University, Tilburg, The Netherlands, 2003.
[16] R.G. Vickson, Choosing the job sequence and processing times to minimize total processing plus flow cost on a single machine, Operations Research 28 (5) (1980) 1155–1167.
[17] R.G. Vickson, Two single machine sequencing problems involving controllable processing times, AIIE Transactions 12 (3) (1980) 258–262.