


OPTIMAL RESOURCE ALLOCATION

IN ACTIVITY NETWORKS UNDER

STOCHASTIC CONDITIONS

Salah E. Elmaghraby†

Department of Industrial & Systems Engineering and

The Graduate Program in Operations Research

North Carolina State University, Raleigh NC, 27695-7906, USA

([email protected])

Girish Ramachandra

Citigroup Global Decision Management, Bangalore, India

([email protected])

Abstract

We treat the problem of optimally allocating a single resource under un-

certainty to the various activities of a project to minimize a certain economic

objective composed of resource utilization cost and tardiness cost. Traditional

project scheduling methods assume that the uncertainty resides in the dura-

tion of the activities. Our research differs from the traditional view in that

it assumes that the work content (or “effort”) of an activity is the source of

the ‘internal’ uncertainty — as opposed to the ‘external’ uncertainty — and the

duration is the result of the intensity of the resource allocated to the activity,

which then becomes the decision variable. The functional relationship between

the work content, the resource allocation, and the duration of the activity is

arbitrary, though we take it to be hyperbolic.

When the work content is known only in probability, we discuss the ap-

proach via stochastic programming and demonstrate its inadequacy to respond

to the various questions raised in this domain. Our analysis treats the special

case when the work content is exponentially distributed, which may be viewed

as an ‘upper bound’ distribution on most probability distributions suggested

in this field. This results in a continuous-time Markov chain with a single ab-

sorbing state. We establish convexity of the cost function and develop a Policy

Iteration-like approach that achieves the optimum in a finite number of steps.

† Corresponding author.


Key Words: Activity Networks, Resource Allocation, Stochastic Work Content,

Phase-Type Distributions.

1 INTRODUCTION AND REVIEW

OF LITERATURE

We study the problem of optimal resource allocation to activities in a project in

order to optimize an economic objective in the face of uncertainty. Uncertainty

in the performance of an activity arises in two different (albeit perhaps interacting)

contexts: external and internal. “External uncertainty” is the result of factors that are

external to the content of the activity itself, such as the weather, labor absenteeism,

inoperative machinery, supplier default, and the like. “Internal uncertainty”, on the

other hand, resides in the indeterminateness of the activity itself even when all the

external factors are performing exactly as hoped for. Internal uncertainty is endemic,

for instance, to all research and development work: even when all personnel, materiel

and equipment are available and functioning perfectly, the achievement of the correct

composition of a new medication may remain elusive. The uncertainty in the activity

is ‘internal’ to the activity itself, not due to any ‘external’ element that is necessary

for, or supportive of, its achievement. The focus of this research is on the internal

uncertainty and the measures to cope with it. To be sure, external uncertainty should

not be ignored — but it is in addition to the internal uncertainty, and its consideration

should not be confused with the internal uncertainty.

Because of the close kinship of the problem dealt with here and the classical

resource-constrained project scheduling problem (RCPSP) we elaborate on the dis-

tinction between our approach and that of the RCPSP.

The classical RCPSP — which has undergone several expansions and transfor-

mations over the past thirty years — deals with the scheduling of precedence- and

resource-constrained activities in order to minimize the project completion time (among


other measures of performance). Our problem focusses on how to optimally allo-

cate a resource that is of limited availability to the activities, which are precedence-

constrained, such that the total cost of resource usage and project tardiness is mini-

mized.

It is our thesis that the internal uncertainty resides in the estimate of work con-

tent, that the decision made by management concerns mainly the resource allocated

to each activity, and that the duration of an activity is derived from its work con-

tent and the intensity of resource allocated to it. As indicated in Tereso, Araújo

and Elmaghraby [24] as well as in Morgan and Elmaghraby [17], our perspective dif-

fers from the traditional treatments of project planning under constrained resource

availability in the deterministic context, the RCPSP (see the paper by Ramachandra

and Elmaghraby [20] and the references cited therein, as well as the references cited

below) in that we consider the fundamental entity to be the work content (or total

“effort” or “energy”) of the activity, rather than its duration. It is our contention that

considerations of the work content of the activity as the variable to contend with –

rather than the activity’s duration – help focus attention on management’s primary

concern, namely, the management of the resources at their disposal. An important

additional benefit of such a perspective is the relative ease with which stochastic en-

vironments can be treated, which is well beyond the reach of traditional (activity

duration-based) approaches, as will be amply demonstrated below.

It should be mentioned here that the interaction between resource allocation and

activity duration was taken into account within the RCPSP optic by enumerating, for

each activity, the possible ‘modes’ of resource allocation and the resulting durations.

This led to reliance on discrete mathematical programming models which were sup-

planted with clever branch-and-bound procedures; see the book by Demeulemeester

& Herroelen [5] for comprehensive discussion of these and other models.

The literature on the stochastic discrete time-cost trade-off (DTCT) problem —


which bears some kinship1 to our problem — is quite sparse. A comprehensive sur-

vey as of 2006 can be found in Herroelen and Leus [14]. Those available focus on

scheduling the start date of the activities and do not take resource availability or

resource-induced constraints into account; see, for instance, the recent paper by So-

bel et al [22]. Interestingly enough, the paper by Sobel et al [22] relies on the same

model (of continuous time Markov chains) as this paper, with the major difference

lying in our assumption that the processing of an activity is initiated as soon as it is

precedence-feasible while Sobel et al [22] search for the optimal delay in the initiation

of an activity in order to maximize the present value of the project.

Wollmer [31] discusses a stochastic version of the linear time/cost trade-off prob-

lem. He treats extensions to the problem of minimizing the expected project comple-

tion time subject to a budget constraint and the problem of achieving a feasible fixed

expected project completion time at minimum cost as well as generation of the project

duration-cost curve. Golenko-Ginzburg and Gonik [11] develop a heuristic procedure

for the RCPSP with stochastic activity times, with the objective of minimizing the

expected project duration. Their procedure operates in stages where a decision is

made to schedule the next activity based on the precedence constraints and current

resource availability. If several activities are competing for limited resources, a mul-

tiple knapsack problem is solved to select the next activity (or subset of activities) to

be scheduled. The objective function of the knapsack formulation is the maximum

total contribution of the selected activities towards the expected project duration.

The individual activity contribution is calculated as the product of the average ac-

tivity duration and the probability of being in the critical path during the course of

the project. The probability values are approximated by frequency values found via

simulation. The rationale for this approach is to give ‘critical activities’ priority over

others because of their significant impact on the expected project duration.

Another related publication dealing with the RCPSP with stochastic activity

1To the best of our knowledge, except for the papers cited here, none of these studies is concerned

with the optimal resource(s) allocation to the activities.


times is due to Fernandez and Armacost [10]. Their short article warns users of off-

the-shelf scheduling software of the danger of omitting non-anticipative constraints.2

Specifically, they mention that Monte Carlo based approaches, such as those in

Opera® and @RISK® for Project, do not incorporate non-anticipative constraints

and therefore their output may be misleading. The primary output of these soft-

ware packages generally is the empirical distribution of project duration. Since the

procedures in these packages solve each scenario separately without enforcing non-

anticipativity constraints, they arrive at un-implementable solutions (i.e., solutions

that are based on information that is not available to the decision-maker at the time

the decision was made).

Stork [23] provides an excellent insight into the stochastic resource scheduling

problem. He addresses the objective of makespan minimization and provides branch-

and-bound algorithms. He uses the classical critical path as lower bound and imple-

ments some clever sorting rules, preselection policies, and dominance rules to prune

the search tree. See the handbook by Demeulemeester and Herroelen [5] for a lucid

summary of Stork’s treatment.

Gutjahr, Strauss and Wagner [12] describe a stochastic branch-and-bound proce-

dure for solving a specific version of the stochastic DTCT where so-called ‘measures’

– actually ‘actions’ such as the use of manpower, the assignment of skilled labor, etc.

– may be used to increase the probability of meeting the project due date, thereby

avoiding heavy penalty costs.

In a more recent contribution, Azaron et al. [1] use control theory to study

resource allocation in “Markov PERT Networks”, which is another name for the CTMC

model adopted in this research. That paper differs from ours in at least two important

respects:

2A decision based on perfect information is called an anticipative decision. It is anticipative

since it is based on assumed deterministic scenarios. Unfortunately, it is non-implementable since it

relies on information unavailable to a decision maker at the time it is made. The nonanticipativity

constraint of multi-period stochastic problems imposes the requirement that solutions be based only

on information known at the time of decisions.


1. They list three objectives: (i) minimize the resource usage cost. (We include

the resource cost in our total cost function); (ii) minimize the mean duration of

the project. (We assume a given due date and attempt to minimize the cost

of deviating from it. We deal with only one side of this cost, namely, the cost

of tardiness, but our model can easily accommodate a cost of (or gain from)

earliness.) (iii) minimize the variance of the project completion time. (We have

nothing comparable to this objective.)

2. They use a different methodology to solve the problem. The authors explain

that the resource allocation costs and project tardiness costs are in conflict

and therefore they decide to model the problem as a multi-objective stochastic

program. Ultimately the authors describe Goal Attainment and Goal Program-

ming methodologies to solve the problem and provide computational results for

a few examples. They state that their solution allows for activity times to be taken from any distribution that can be represented as a generalized Erlang distribution. They also

arrive at the conclusion that an optimal solution to the control problem cannot

be found, and so resort to the discretization of time, modeling the problem with

difference (rather than differential) equations, and using non-linear programming

to solve the problem.

Talk about stochastic activity networks typically raises the specter of the use of

stochastic programming (SP). We provide a brief overview of SP and investigate its

applicability to our problem. To aid the understanding of the nature of our problem,

we discuss a recently researched stochastic project scheduling problem that uses the

concepts of SP. We provide a simple example to illustrate our contention that standard

stochastic programming formulations are inapplicable to our problem, mainly due to

the non-separability of the decision variables in the two stages of the SP model.

The problem as it is stated here had been addressed by Tereso, Araújo and El-

maghraby [26] who assumed the resource allocation to an activity, once made, remains

invariant and cannot be changed for the duration of the activity. Recognizing that


this is at variance with common practice where the manager does indeed change the

resource allocation dynamically according to changes in the state of the project (the

so-called ‘managerial flexibility’ [15]), we model our problem as a continuous time

Markov chain (CTMC ) and propose a ‘policy iteration-type’ approach based on the

state space of the CTMC. The most important drawback of such an approach, apart

from the assumption on the exponential distribution, is the vast increase in the state

space with increase in the size of the project network coupled with the rapid increase

in the number of iterations to reach a solution.

The structure of this paper is as follows. In § 2 we state our problem in more

detail along with the assumptions made. In § 3 we address the technique of stochastic

programming (SP), which is also known as mathematical programming under uncer-

tainty. We discuss recent research efforts that apply the concept of SP to a stochastic

resource-constrained project scheduling problem, and argue for the inapplicability of

standard SP techniques to our problem. In § 4 we outline a ‘policy iteration-type’

procedure of dynamic programming for solving the problem posed, which we model

as a continuous-time Markov chain (CTMC ) along the lines of the work by Kulkarni

& Adlakha [16]. We introduce the ‘phase type distributions’ which play an important role in policy evaluation, and discuss their application. Finally, § 5 summarizes the

contributions of this paper and points out future avenues of research on this impor-

tant problem. For the sake of self-containment, a brief review of Stochastic Linear

Programming is presented in the Appendix (§ 6).

2 PROBLEM STATEMENT AND ASSUMPTIONS

We are given a project network of activities defined by a graph G = (N, A), in which A is the set of arcs defining the activities, with |A| = m, together with the precedence relations among them (in the activity-on-arc (AoA) mode of representation), and N is the set of nodes with |N| = n. The graph is acyclic and we assume that the nodes are numbered ‘topologically’, meaning that an arrow leads from a node to a larger-numbered node, with the start of the project being node 1 and its termination being node n.3 Let W_a denote the work content of activity a,4 a random variable

(r.v.). We consider a single renewable resource and let x_a be the allocation of the resource to activity a — the decision variable, a deterministic entity. Let R denote the total availability of the renewable resource, assumed constant over the planning horizon. Define c_r to be the per-unit cost of usage of the resource per unit time. Thus an activity that takes t_a time units to complete will incur a ‘resource usage cost’ of c_r x_a^2 t_a. We further assume that the project has a specified due date T_s, and let C_L(max{0, Υ − T_s}) be the cost of tardiness when the project completes at time Υ, a r.v. Our objective is to determine the resource allocation vector X = (x_1, ..., x_m) to the activities such that the overall expected cost of resource allocation and tardiness is minimized. More formally, it is desired to

min_X  E[ Σ_{a∈A} c_r x_a^2 t_a + C_L(max{0, Υ − T_s}) ]   (1)

subject to: (i) the activities’ precedence constraints, which are of the form {Γ^{-1}(j) ≺ j ; j = 2, ..., n}, in which Γ^{-1}(j) is the subset of the activities terminating at node j and ‘≺’ indicates precedence; and (ii) ‘box-type’ resource allocation bounds of the form

l_a ≤ x_a ≤ u_a,  ∀a ∈ A.
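To make objective (1) concrete, the following Monte Carlo sketch estimates the expected total cost for a small instance, assuming the three-activity network used in the example of § 4.3 (paths 1-2-3 and 1-3) with its rates, unit costs, and due date; the code structure itself is our illustration, not the authors’ procedure:

```python
import random

random.seed(42)

lam = {1: 0.2, 2: 0.1, 3: 0.07}   # assumed exponential work-content rates
x   = {1: 1.0, 2: 1.0, 3: 1.0}    # resource allocation (the decision variable)
c_r, c_L, due = 1.0, 3.0, 8.0     # assumed unit resource cost, tardiness cost, due date

def sample_cost():
    # work content W_a ~ exp(lam_a); duration t_a = W_a / x_a (hyperbolic relation)
    W = {a: random.expovariate(lam[a]) for a in lam}
    t = {a: W[a] / x[a] for a in lam}
    # project completion: the later of path 1-2-3 and path 1-3
    Y = max(t[1] + t[2], t[3])
    resource = sum(c_r * x[a] ** 2 * t[a] for a in lam)  # equals c_r * x_a * W_a
    tardiness = c_L * max(0.0, Y - due)
    return resource + tardiness

n = 20000
est = sum(sample_cost() for _ in range(n)) / n
print(f"estimated expected total cost ~ {est:.2f}")
```

Note that simulation only evaluates the objective for a fixed X; it does not by itself optimize the allocation.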

Adoption of the expected value of cost as the measure of performance may not

be palatable to some managers because of its identification with the concept of fre-

quency, when the manager is faced with a unique scenario which may not repeat. In

3 Nodes 1 and n may be nodes that correspond to the start and termination of no ‘real’ activity, in which case arcs {(1, j)} and {(j, n)} represent fictitious (or dummy) activities.
4 We designate an activity either by its identity a or by its end-nodes (i, j); i, j ∈ N.


such a situation we recommend a ‘fiduciary’ interpretation of probability in which the

concept of ‘average’ may be more acceptable.

3 THE APPROACH VIA STOCHASTIC

PROGRAMMING

Stochasticity in the work content combined with optimization invokes the vision of

implementing the known approach of stochastic linear programming to the resource

allocation problem. Because of the negative conclusion that was reached, and for the

sake of self-containment, we devote the Appendix (§ 6) to its discussion. We give first a brief

overview of the approach, then proceed with a review of attempts at implementing

it, which led us to conclude that the approach is ill-equipped to tackle the problems

posed here. This is mainly due to the fact that stochastic linear programming assumes

separability between the decision made at one stage (the vector X in the model) and

the decision made at the subsequent stage (the vector Y in the model), which is not

a valid assumption in activity networks.

4 THE APPROACH VIA

POLICY-ITERATION (PI-TYPE)

For the sake of tractability in analysis, assume that the activity’s work content

follows an exponential distribution:

W_a ∼ exp(λ_a)  ∀a ∈ A   (2)

The assumption of exponential work content carries two important implications: (i)

that the probability distribution of the work content remains the same after the

expenditure of any amount of effort, thanks to the memory-less property of the ex-

ponential distribution, a property that is not shared by any other distribution; and

(ii) that there is no loss of optimality in revising the decision on resource allocation


only at the moment of an activity completing its processing, which relieves one from

tracking the system over all time.

Assume that there is a single resource and that its allocation to activity a is

bounded from below and from above, i.e.,

0 ≤ l_a ≤ x_a ≤ u_a ≤ ∞  ∀a ∈ A   (3)

Let the resulting duration of activity a be denoted by t_a, given by

t_a = W_a / x_a  ⇒  W_a = x_a t_a   (4)

Evidently t_a is also exponentially distributed, but with parameter λ_a x_a, henceforth denoted by μ_a; i.e.,

t_a ∼ exp(μ_a) ;  μ_a = λ_a x_a   (5)

Next, consider the cost of the resource usage by an activity. We shall assume that the total cost of resource allocation to activity a, denoted by C_a, is quadratic in the allocation over the duration of the activity; i.e.,

C_a = x_a^2 t_a = x_a W_a   (6)

The assumption of a quadratic cost function in the intensity of the resource, together with the ‘hyperbolic’ relation (4) between duration and resource intensity, results in a linear relation between the resource cost of the activity and its work content, as evidenced by (6). Evidently, a more general relation between the activity’s duration and resource intensity would result in more complex resource cost functions. For instance, suppose

t_a = W_a / x_a^k,

in which the exponent k typically lies in the interval [0.5, 1) and reflects, in some sense, the ‘efficiency’ of utilization of additional resource. Further, the resource cost may not be quadratic in the allocation but rather involve some other parameter r, so that the cost per unit time becomes x_a^r. Then the activity’s resource cost would be given by

C_a = x_a^r t_a = x_a^{r−k} W_a,

a nonlinear relation between C_a and W_a. One would gain a more realistic model of the process, but simplicity of exposition and computing would then be forfeited.
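The chain of relations (4)-(6) is easy to check numerically. This quick sanity check (the rate λ = 0.2 and allocation x = 2 are assumed values for illustration) verifies that t = W/x has mean 1/(λx), as in (5), and that the quadratic cost x²t collapses to the linear xW of (6):

```python
import random

random.seed(1)
lam, x = 0.2, 2.0          # assumed work-content rate and resource allocation

n = 100000
W = [random.expovariate(lam) for _ in range(n)]
t = [w / x for w in W]               # eq. (4): hyperbolic duration
cost = [x ** 2 * ti for ti in t]     # eq. (6): quadratic-in-x cost, = x * W

mean_t = sum(t) / n
mean_cost = sum(cost) / n

# eq. (5): t ~ exp(mu) with mu = lam * x, so E[t] = 1/(lam * x)
print(f"E[t] ~ {mean_t:.3f}  (theory: {1 / (lam * x):.3f})")
# eq. (6): cost = x * W, so E[C] = x / lam
print(f"E[C] ~ {mean_cost:.3f}  (theory: {x / lam:.3f})")
```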

In this section we analyze the problem from the perspective of ‘managerial flexi-

bility’ which assumes that it is possible to change the resource allocation to the same

activity according to the changes in the state of the project. This represents the

main departure of our treatment from the work of Tereso, Araújo and Elmaghraby

[24]-[27]. To be more precise, consider the scenario where several activities are in progress at any point in time. Eventually one of them finishes first. Now the

manager is allowed to change the allocation of the resource to the still ongoing activ-

ities (the ‘active’ activities). This perspective requires the definition of the ‘state’ of

the project and the corresponding ‘state space’ network. This is basically the model

treated by Kulkarni and Adlakha [16], which is based on the realization that we are

in fact dealing with a continuous time Markov chain (CTMC ). The main distinction

between our study and that reported in the work of Kulkarni and Adlakha is that we

are interested in optimization of resource allocation, while they dealt only with the

analysis of the resulting CTMC.

The next two sections briefly describe the two elements of interest to us in the

construction of our procedure: the structure of the CTMC relevant to our application,

and the properties of the ‘phase type distributions’ and their relevant parameters.

Following these introductory remarks we present the ‘policy iteration-type’ (PI-type)

procedure which is suggested as the solution approach to our problem.

4.1 CONTINUOUS TIME MARKOV CHAINS (CTMC)

In order to be able to transform our problem into a CTMC, we introduce some

notation from [16], which is defined for a fixed allocation vector X.


We have G = (N, A) as the project network. We assume that the project starts

at time zero and ends at time Υ, a r.v. During the course of the project execution,

each activity can be in one and only one of the following three states:

(i) active: an activity a is active at time t if it is being executed at time t.5

(ii) dormant: an activity a is dormant at time t if it has finished but there is at least one unfinished activity that ends at the same node as a.

(iii) idle: an activity a is called idle at time t if it is neither active nor dormant at time t: the activity is either completed or is yet to be started.

For t ≥ 0, define the sets

A(t) = {a ∈ A : a is active at time t}
D(t) = {a ∈ A : a is dormant at time t}
S(t) = (A(t), D(t)) : the state at time t

We also make the following assumption:

A1. The work content of the activities in the project, and hence their durations, are

mutually independent positive random variables.

Let S denote the set of all the states of the project, S = {S(t)}, and let S̃ be its extension to include the null state; S̃ = S ∪ {(∅, ∅)}. Note that state S(t) = (∅, ∅) signifies that all activities are neither active nor dormant, hence they must be idle at time t and therefore the project is completed.

Under the assumption A1, {S(t), t ≥ 0} is a CTMC on S̃. Furthermore, we make the following observations:

5 We change the designation of an activity to i instead of a because we wish to use a to

designate an ‘active’ activity.


1. The state (∅, ∅) is absorbing: once the project is completed, it remains completed.

2. The process {S(t), t ≥ 0} visits any state in S at most once; i.e., each activity is executed exactly once. This implies that all states in S are transient.

3. The above two observations imply that the process {S(t), t ≥ 0} is a finite-state, absorbing CTMC with a single absorbing state (∅, ∅).
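These definitions can be made concrete with a small sketch that enumerates the reachable states (A(t), D(t)) by tracking which activities have finished. The enumeration logic below is our own illustration, assuming the three-activity network of Fig. 1 (arcs 1→2, 2→3, 1→3, labelled 1, 2, 3):

```python
# hypothetical AoA network: activity -> (start node, end node)
arcs = {1: (1, 2), 2: (2, 3), 3: (1, 3)}

def realized(node, finished):
    # a node is realized when every activity ending at it has finished (node 1 at t = 0)
    return node == 1 or all(a in finished for a, (i, j) in arcs.items() if j == node)

def state(finished):
    # active: start node realized and not yet finished; dormant: finished, end node not realized
    active = frozenset(a for a, (i, j) in arcs.items()
                       if a not in finished and realized(i, finished))
    dormant = frozenset(a for a in finished if not realized(arcs[a][1], finished))
    return (active, dormant)

# search over 'finished' sets: from each state, any active activity may finish next
states, frontier, seen = set(), [frozenset()], set()
while frontier:
    fin = frontier.pop()
    if fin in seen:
        continue
    seen.add(fin)
    s = state(fin)
    states.add(s)
    for a in s[0]:                       # each active activity may be the next to finish
        frontier.append(fin | {a})

print(len(states), "states; absorbing state present:",
      (frozenset(), frozenset()) in states)
```

For this network the enumeration yields the six states of Fig. 2, including the single absorbing state (∅, ∅).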

We are now ready to introduce the phase type distribution and its properties.

4.2 PHASE TYPE DISTRIBUTIONS

The name ‘phase type’ distribution stems from the fact that an Erlang distribution is derived as the sum of ‘stages’ or ‘phases’, all exponentially distributed with the same parameter λ. The generalized Erlang distribution of order m has m phases (stages); each is exponentially distributed but with possibly different parameters λ_1, ..., λ_m. The following development is abstracted from Neuts [19].

Definition 1 A continuous probability distribution F(·) is of the phase type (PH-distribution) if it is the distribution of the time until absorption in a finite-state Markov process with a single absorbing state; that is, there exists a probability vector (γ, γ_{m+1}) and an ‘infinitesimal generator matrix’ of the form

Q = | T_{m×m}   T0_{m×1} |  =  | T   −Te |
    | 0_{1×m}   0_{1×1}  |     | 0     0 |   (7)

where T_{ii} < 0 for 1 ≤ i ≤ m and T_{ij} ≥ 0 for i ≠ j, and e denotes the m-vector of ones. Also,

Te + T0 = 0,

and the initial probability vector (the so-called ‘counting probability’) of Q is given by the vector (γ, γ_{m+1}), with

γ · e + γ_{m+1} = 1.


The pair (γ, T) is called the ‘representation’ of F(·).

In our case, the process starts in state 0 (project not yet initiated) with probability

1. States 1, ..., m are transient so that absorption into state m + 1 from any initial

state is certain.

An important result of concern to us is that the matrix T is non-singular (a necessary and sufficient condition for the states 1, ..., m to be transient). Observe that

e^{Tt} → 0 as t → ∞.

Assuming an initial probability vector (γ, γ_{m+1}), the time to absorption in state m + 1 corresponding to the initial probability vector is given by

F(t) = 1 − γ · e^{Tt} · e,  for t ≥ 0.   (8)

Note that the ‘time to absorption’ for the general CTMC translates, in our context, into the duration of the project, Υ.
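Because (8) involves the matrix exponential e^{Tt}, it can be evaluated numerically by uniformization: e^{Tt} = Σ_{n≥0} e^{−qt} (qt)^n / n! · P^n with P = I + T/q and q = max_i |T_ii|. The sketch below (an assumed generalized Erlang representation with rates 0.2 and 0.1, not data from the paper) checks the result against the closed-form CDF of the sum of the two exponentials:

```python
import math

# assumed PH representation: generalized Erlang, phase rates 0.2 then 0.1
T = [[-0.2, 0.2],
     [0.0, -0.1]]
gamma = [1.0, 0.0]          # start in phase 1 with probability 1

def ph_cdf(T, gamma, t, terms=200):
    """F(t) = 1 - gamma * exp(T t) * e, computed by uniformization."""
    m = len(T)
    q = max(-T[i][i] for i in range(m))           # uniformization rate
    P = [[(1.0 if i == j else 0.0) + T[i][j] / q for j in range(m)] for i in range(m)]
    v = gamma[:]                                   # row vector gamma * P^n, built iteratively
    poisson = math.exp(-q * t)                     # Poisson weight e^{-qt}(qt)^n/n!, n = 0
    surv = 0.0
    for n in range(terms):
        surv += poisson * sum(v)                   # accumulate gamma * e^{Tt} * e
        v = [sum(v[i] * P[i][j] for i in range(m)) for j in range(m)]
        poisson *= q * t / (n + 1)
    return 1.0 - surv

t = 15.0
# closed form for exp(0.2) + exp(0.1): 1 + e^{-0.2t} - 2 e^{-0.1t}
closed = 1.0 + math.exp(-0.2 * t) - 2.0 * math.exp(-0.1 * t)
print(ph_cdf(T, gamma, t), closed)
```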

We now state the following known properties of the distribution F(·) and comment on their relevance to our application. For the sake of avoiding confusion we shall denote the argument of F(·) by t instead of x, keeping the latter to represent the intensity of resource allocation.

1. It has a jump of height γ_{m+1} at t = 0. Evidently this is the probability that the process starts in the absorbing state. This case is of no concern to us since it implies that the project is complete at its start, which would imply that there is no project. Therefore, for our concern,

γ_{m+1} = 0.

2. Its density function f(t) = F′(t) on (0, ∞); i.e., excluding the point 0, is given by

f(t) = F′(t) = γ · e^{Tt} · T0.   (9)

In our case the domain of f is the whole positive real line including the point at the origin, because γ_{m+1} = 0.

3. The kth non-central moments (about the origin), μ′_k, of F(·) are all finite and are given by

μ′_k = (−1)^k × k! (γ T^{−k} e), for k ≥ 0.   (10)

This provides the foundation for securing the moments of the project duration; see Kulkarni and Adlakha [16] for details.
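Property 3 is easy to verify on a toy case. For a two-phase generalized Erlang representation with assumed rates 0.2 and 0.1 (so the PH variable is the sum of the two exponentials), eq. (10) should reproduce E[t] = 1/0.2 + 1/0.1 = 15 and E[t²] = Var + (E[t])² = 125 + 225 = 350. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

# assumed 2-phase representation: upper-triangular T with rates 0.2 and 0.1
T = [[F(-1, 5), F(1, 5)],
     [F(0), F(-1, 10)]]
gamma = [F(1), F(0)]
e = [F(1), F(1)]

def mat_inv_2x2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def moment(k):
    # eq. (10): mu'_k = (-1)^k * k! * (gamma T^{-k} e)
    Tinv = mat_inv_2x2(T)
    v = gamma[:]                      # build the row vector gamma * T^{-k}
    for _ in range(k):
        v = [v[0] * Tinv[0][0] + v[1] * Tinv[1][0],
             v[0] * Tinv[0][1] + v[1] * Tinv[1][1]]
    fact = 1
    for i in range(2, k + 1):
        fact *= i
    return (-1) ** k * fact * (v[0] * e[0] + v[1] * e[1])

print(moment(1), moment(2))
```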

4.3 EXAMPLE OF THE PI-TYPE PROCEDURE

At the outset we wish to emphasize that the procedure we outline is an approximation

because we are considering the cost of tardiness of the expected project completion

time, which we know is not equal to the expected cost of tardiness of the project

completion time. It is well known that the expected value of the project completion

time is an under-estimate of the exact value, so is the cost based on it. Further, it is

also well known that the values deduced for the various measures of performance as-

suming the exponential distribution are an over-estimate of the real values, whatever

they may be. This is true for the probability distributions we have encountered in

the field of project networks because all of them are in the class NBUE (“new better

than used in expectation” — see Barlow and Proschan [2]). These two effects which

drive the values obtained in opposite directions seem to compensate each other with

the result being remarkably close to the exact values. Extensive numerical experi-

mentation has convinced us that the error committed is insignificant except in one

special circumstance, which will be described later.

To motivate the statement of the procedure, consider a simple activity network

comprising three activities as shown in Fig. 1. The corresponding ‘state space tran-

sition diagram’ (or ‘state space’ for short) is displayed in Fig. 2. Each node of the

graph identifies a subset of the activities and their status: a for ‘active’ and d for


dormant.

Figure 1 here.

Figure 2 here.

The interpretation of the state space is as follows. The initial udc (uniformly directed cutset) is state (1a, 3a), which indicates that both activities 1 and 3 are active. If activity #1 finishes first then activity #2 can be initiated and the new state is (2a, 3a), since activity #3 is still ‘ongoing’. ‘Control’ — as represented by the arrow — would shift to udc {2a, 3a}. However, if activity #3 finishes first then no change in the udc will take place and the process must wait until activity #1 is completed. Activity #3 remains dormant during this interval. This is represented in the figure by an arrow leading to state (1a, 3d). From this latter state ‘control’ can shift only to one other state, namely, the state in which activity #2 is active while activity #3 remains dormant. This is represented in the figure by state (2a, 3d) and an arrow leading from (1a, 3d) → (2a, 3d). Interestingly enough, state (2a, 3a) can also shift to state (2a, 3d), represented by the arrow between these two states, if activity #3 finishes before activity #2. The rest of the figure can be interpreted similarly.

For the sake of concreteness of discussion, assume the unit cost of resource usage is 1.0 and the cost of tardiness per unit time is 3.0. Let the work content of activities #1, 2, and 3 be exponentially distributed with parameters 0.2, 0.1, and 0.07, respectively. Observe that, based on expectations, the expected duration of the path over the nodes 1-2-3 is 1/0.2 + 1/0.1 = 15, while the expected duration of path 1-3 is 1/0.07 = 14.28. Thus the two paths in this simple project are of almost equal expected duration, and the PERT model would yield an expected completion time of 15. The exact value of the expected duration of the project is deduced from the distribution of the time of realization of


node 3, which in this case is given by

    F_3(t) = (1 - e^{-0.07t}) \int_{u=0}^{t} 0.2 e^{-0.2u} (1 - e^{-0.1(t-u)}) du
           = (1 - e^{-0.07t}) (1 - e^{-0.1t})^2,

which yields

    E[Υ_3] = \int_0^∞ [1 - F_3(t)] dt = 21.22.

The error committed in adopting the PERT value is indeed significant in this case (amounting to some 41.5%). The density function of Υ_3, the time of realization of node 3, is given in Fig. 3.
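As a quick numerical sanity check (ours, not part of the original development), the expectation can be recovered by integrating the complementary CDF of Υ_3:

```python
import math

def F3(t):
    """CDF of the node-3 realization time for the allocation x = [1, 1, 1]."""
    return (1.0 - math.exp(-0.07 * t)) * (1.0 - math.exp(-0.1 * t)) ** 2

# Expanding 1 - F3(t) leaves a sum of exponentials, each integrating to 1/rate:
expected = 2 / 0.1 - 1 / 0.2 + 1 / 0.07 - 2 / 0.17 + 1 / 0.27   # = 21.22...

# cross-check by the trapezoidal rule on [0, 500] (the tail beyond is negligible)
n, upper = 200_000, 500.0
h = upper / n
num = h * ((1.0 - F3(0.0)) / 2.0 + sum(1.0 - F3(i * h) for i in range(1, n)))
```

Both routes agree with the 21.22 quoted above, confirming the 41.5% gap to the PERT estimate of 15.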

Figure 3 here.

Let the lower and upper bounds on the resource allocation for all three activities be 1 and 3; i.e., 1 ≤ x_a ≤ 3, a = 1, 2, 3. Finally, let the due date be δ = 8.

Initialization:

Assume we start at the initial resource allocation vector x0 = [1, 1, 1]. The corresponding infinitesimal generator matrix is Q0, given by:

    Q0 =
    [ -0.27   0.20   0.07    0       0      0    ]
    [  0     -0.17   0       0.07    0.10   0    ]
    [  0      0     -0.20    0.20    0      0    ]
    [  0      0      0      -0.10    0      0.10 ]
    [  0      0      0       0      -0.07   0.07 ]
    [  0      0      0       0       0      0    ]

The expected project completion time and the various costs are computed as follows:

    E(Υ_3) = 21.22 (obtained using Eq. (10)),
    resource cost = 1 × (1/0.2 + 1/0.1 + 1/0.07) = $29.29,
    tardiness cost = 3 × (21.22 − 8) = $39.67,
    total expected cost = $68.96.
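The same figures can be checked directly from Q0: with T denoting the transient block (the first five rows and columns), the vector of expected times to absorption is −T⁻¹·1, and its first component is E(Υ_3). A minimal Python sketch (ours; the state ordering is the one implied by the rates in the matrix above):

```python
import numpy as np

# Transient block of Q0 for x0 = [1, 1, 1]; states ordered as in the matrix:
# (1a,3a), (2a,3a), (1a,3d), (2a,3d), (2d,3a)
T = np.array([[-0.27,  0.20,  0.07,  0.00,  0.00],
              [ 0.00, -0.17,  0.00,  0.07,  0.10],
              [ 0.00,  0.00, -0.20,  0.20,  0.00],
              [ 0.00,  0.00,  0.00, -0.10,  0.00],
              [ 0.00,  0.00,  0.00,  0.00, -0.07]])
m = np.linalg.solve(-T, np.ones(5))      # expected time to absorption per state
E_completion = m[0]                      # 21.22

resource_cost = 1.0 * (1 / 0.2 + 1 / 0.1 + 1 / 0.07)     # 29.29
tardiness_cost = 3.0 * max(0.0, E_completion - 8.0)      # 39.67
total = resource_cost + tardiness_cost                   # 68.96
```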


Achieving Cost Reduction:

We can seek improvement in the total cost by one of the following two methods.

1. Increase the allocation x1 in small increments ∆ (say ∆ = 0.1) until no further improvement in the objective value is achieved. We know that an increment ∆ in the resource allocation to activity 1 changes the rate matrix Q0 to

    Q1 =
    [ -0.27-∆   0.20+∆   0.07      0        0       0    ]
    [  0       -0.17     0         0.07     0.10    0    ]
    [  0        0       -0.20-∆    0.20+∆   0       0    ]
    [  0        0        0        -0.10     0       0.10 ]
    [  0        0        0         0       -0.07    0.07 ]
    [  0        0        0         0        0       0    ]

from which we can obtain the expected project duration using Eq. (10) and compute the other costs.

compute the other costs.

Using this functional evaluation one can adopt a suitable iterative scheme incorporating a steepest-descent search over the possible resource allocations for all activities. In other words,

    “Best” policy: x_best = arg min_x { Σ_{a∈A} c x_a E[W_a] + c_L × max{0, E[Υ] − δ} }.

2. Because of the symmetry between the two paths in the (original) project network (π1 = {1, 2} and π2 = {3}, a path being identified by its activities), suppose we augment the resource allocation to activity 1 so that the expected lengths of the two paths are equal; i.e., we seek the solution of the equation

    1/(0.2 x1) + 1/0.1 = 1/0.07,  which gives  x1 = 5/4.2857 = 1.1667.      (11)
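A quick arithmetic check of (11):

```python
# Equalize the expected path lengths: 1/(0.2*x1) + 1/0.1 = 1/0.07
x1 = 1.0 / (0.2 * (1.0 / 0.07 - 1.0 / 0.1))   # = 5/4.2857 = 1.1667
```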

Now select a value of x1 large enough to make the cost larger than at x1 = 1.1667, then use dichotomous search to find the optimal value. For instance,


suppose we select x1 = 1.5. The new allocation vector x(1) = [1.5, 1, 1] results in the following infinitesimal generator matrix:

    Q(1) =
    [ -0.37   0.30   0.07    0       0      0    ]
    [  0     -0.17   0       0.07    0.10   0    ]
    [  0      0     -0.30    0.30    0      0    ]
    [  0      0      0      -0.10    0      0.10 ]
    [  0      0      0       0      -0.07   0.07 ]
    [  0      0      0       0       0      0    ]

The corresponding expected project completion time and the relevant costs are as follows.

    E(1)[Υ] = 20.15 (obtained using Eq. (10)),
    resource cost = 1 × (1.5 × 1/0.2 + 1/0.1 + 1/0.07) = $31.79,
    tardiness cost = 3 × (20.15 − 8) = $36.44,
    total cost = $68.23.

Notice that the total cost has decreased.

Let us now continue to increase the allocation to activity 1 further, say to 1.75. The updated completion time and the relevant costs corresponding to the allocation x(2) = [1.75, 1, 1] are:

    E(2)[Υ] = 19.86 (obtained using Eq. (10)),
    resource cost = 1 × (1.75 × 1/0.2 + 1/0.1 + 1/0.07) = $33.04,
    tardiness cost = 3 × (19.86 − 8) = $35.58,
    total expected cost = $68.62,

which is larger than the total cost obtained using x(1). Although the tardiness cost has been reduced, the reduction has been more than offset by the increase in the resource cost. Therefore we know that the maximal gain in cost is achieved for some x1 in the interval [1.50, 1.75]. Depending on the desired accuracy, one may either enumerate the values of x1 in this interval on a finer grid, or use dichotomous search, or even use Fibonacci search [30] to determine the optimum.
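The bracket-then-search step can be sketched as follows. The helper names are ours, and the cost function simply re-evaluates the example's CTMC (it reproduces the $68.23 and $68.62 figures above):

```python
import numpy as np

def expected_completion(x1, x2=1.0, x3=1.0):
    """First entry of -T^{-1} 1: expected time to absorption of the CTMC."""
    r1, r2, r3 = 0.2 * x1, 0.1 * x2, 0.07 * x3
    # transient states: (1a,3a), (2a,3a), (1a,3d), (2a,3d), (2d,3a)
    T = np.array([[-(r1 + r3), r1, r3, 0.0, 0.0],
                  [0.0, -(r2 + r3), 0.0, r3, r2],
                  [0.0, 0.0, -r1, r1, 0.0],
                  [0.0, 0.0, 0.0, -r2, 0.0],
                  [0.0, 0.0, 0.0, 0.0, -r3]])
    return np.linalg.solve(-T, np.ones(5))[0]

def total_cost(x1, x2=1.0, x3=1.0):
    resource = x1 / 0.2 + x2 / 0.1 + x3 / 0.07      # unit resource cost = 1.0
    tardiness = 3.0 * max(0.0, expected_completion(x1, x2, x3) - 8.0)
    return resource + tardiness

# dichotomous search on the bracket [1.50, 1.75]
lo, hi = 1.50, 1.75
while hi - lo > 1e-3:
    mid = (lo + hi) / 2.0
    if total_cost(mid - 1e-4) < total_cost(mid + 1e-4):
        hi = mid            # the minimum lies to the left of mid
    else:
        lo = mid
x1_star = (lo + hi) / 2.0   # settles at the bracket boundary 1.50 here
```

On this instance the search drifts to the left end of the bracket, since the cost is already increasing at x1 = 1.5 when x2 = x3 = 1.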

Having secured the most improvement from activity 1, we now try changing the allocation to activity 2 (or 3). If successful, we follow the same procedure with activity 3 (or 2), and so on until no further improvement is possible. Note that subsequent reductions in cost due to changes in the allocation to activities 2 or 3 may make further improvement possible through the allocation to activity 1.

Our suggested procedure varies the resource allocation one activity at a time. We are confident that such a search will always converge to the global optimum because of the unimodality of the cost function (12); see below.

Following through with the iterations, the “best” policy for the example network is x* = [1.5, 1.5, 1.5], with an associated cost of 62.38. This is also seen in Table 1, which summarizes part of the solution search process. The table displays the values of the total cost as the allocation x3 for activity 3 is varied while the allocations (x1, x2) for activities 1 and 2 are kept fixed. The surface plot in Fig. 5 summarizes the table graphically.

Table 1 (Fig. 4) here.

Expected total cost for fixed allocations (1 2) and varying 3.

Figure 5 here.

4.4 THE PI-TYPE ITERATIVE PROCEDURE

In proposing any sequential search procedure one must establish two important properties:


1. the process terminates finitely (for any desired accuracy) at the desired optimum.

2. at termination, the policy obtained and its corresponding value are independent of the sequence in which the activities were selected. (For instance, in the above example, had we selected activity 2 at the start and then proceeded from there, would we still have reached the same conclusion?)

These two propositions are easily established if one can demonstrate that the negative of the reward function

    f({x_a}_{a=1}^{|A|}) = E[ Σ_{a∈A} c x_a W_a + c_L · max{0, Υ − δ} ]
                         = Σ_{a∈A} c x_a E[W_a] + c_L · E[max{0, Υ − δ}]        (12)

is unimodal in the ensemble of decision variables {x_a}. The first term in (12) is linear in x_a and therefore convex. It is the second term that is not straightforward, because the impact of changes in x_a, ℓ_a ≤ x_a ≤ u_a, on the project completion time Υ is intertwined with the other allocations through the inverse of the matrix T (in Eq. 7). However, one thing is definitely clear: the function max{0, Υ − δ} is monotone non-increasing in each x_a. In other words, under strict precedence relations (footnote 6), an increase in x_a from its lower bound to its upper bound can only shift the distribution of Υ towards 0, or leave it unchanged. Therefore the term c_L · E[max{0, Υ − δ}] of Eq. (12) is monotonically non-increasing in each x_a, and the total cost f(·) is monotonically non-increasing up to a point (or ridge, or surface), beyond which it is monotonically non-decreasing (see Fig. 5). If the increment ∆ is chosen “small enough”, then the unique minimal value of f(·) must be achieved. Observe that there may be a large number of alternate optimal policies {x_a} that yield the same value. Finally, the fact that the decision variables are bounded on both sides ensures the finiteness of the procedure.

The above argument establishes the validity of the following proposition.

Footnote 6: As opposed to ‘generalized precedence relations’; see Elmaghraby ??.


Proposition 2 For a sufficiently small choice of the increment ∆, the iterative scheme of Policy Iteration achieves the optimal value and an optimal policy in a finite number of steps.

We are now ready to state

The Procedure:

1. Use standard PERT calculations to determine the so-called ‘critical path(s)’.

The activities on this ‘critical path(s)’ are the primary candidates for additional

resource allocation.

2. Rank all the activities of the project in the order of the frequency of lying on a

‘critical path’, taking cognizance of the result of step 1.

3. Secure the state space of the project assuming some initial values of the decision variables {x_a}; say x_a = ℓ_a (the lower bound) for all a ∈ A.

4. Select an appropriate increment in the resource and denote it by ∆. Increment the resource allocation to each activity, taken in the specified order, and evaluate the resulting project cost until it starts to increase.

5. Repeat step 4 as many times as necessary until no further decrease in cost is

realized.
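For the three-activity example, steps 3 through 5 can be sketched as a coordinate search that raises one allocation at a time until the cost increases (a Python illustration with ∆ = 0.25 for brevity; the names are ours, not the paper's):

```python
import numpy as np

LAM = (0.2, 0.1, 0.07)            # work-content rates of activities 1, 2, 3

def cost(x):
    """Expected total cost: resource usage plus 3x the expected tardiness past 8."""
    r1, r2, r3 = (lam * xi for lam, xi in zip(LAM, x))
    # transient states: (1a,3a), (2a,3a), (1a,3d), (2a,3d), (2d,3a)
    T = np.array([[-(r1 + r3), r1, r3, 0.0, 0.0],
                  [0.0, -(r2 + r3), 0.0, r3, r2],
                  [0.0, 0.0, -r1, r1, 0.0],
                  [0.0, 0.0, 0.0, -r2, 0.0],
                  [0.0, 0.0, 0.0, 0.0, -r3]])
    makespan = np.linalg.solve(-T, np.ones(5))[0]
    resource = sum(xi / lam for lam, xi in zip(LAM, x))
    return resource + 3.0 * max(0.0, makespan - 8.0)

x, delta = [1.0, 1.0, 1.0], 0.25   # step 3: start at the lower bounds
improved = True
while improved:                    # step 5: sweep until no activity improves
    improved = False
    for a in range(3):             # step 4: one activity at a time
        while x[a] + delta <= 3.0:
            trial = x.copy()
            trial[a] += delta
            if cost(trial) < cost(x):
                x, improved = trial, True
            else:
                break
```

On this instance the search settles at x = [1.5, 1.5, 1.5] with a cost of about 62.38, in agreement with Table 1.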

4.5 LIMITATIONS OF THE PI-TYPE APPROACH

We summarize some of the limitations of the “Policy Iteration-type” procedure described above.

• In the worst-case scenario, if G is a complete directed acyclic network (dag) with n nodes and n(n − 1)/2 arcs, then the size of the state space for G is given by

    S(n) = F_n − F_{n−1}, where F_n = Σ_{k=0}^{n} 2^{k(n−k)},


which grows hyper-exponentially, as can be seen from the following table.

    n    :   5     10            20            40
    S(n) :   162   68,780,544    2.69561E+30   5.4974E+120

Of course this worst-case scenario (of a complete acyclic graph) is rarely, if ever, realized. Still, the state space does grow quite fast. Kulkarni and Adlakha [16] cite a 48-activity network which gave rise to a state space of almost a quarter of a million states.

• Apart from the rapid increase in the state space, the search space also has the potential of rapid growth. For instance, for a project with 40 activities, if we select ∆ = 0.02 with u_a − ℓ_a = 5, we are potentially looking at 250 × 40 × h = 10^4 h points, where h is the number of repetitions of step 4 of the procedure; this is a large number if h is large (say h ≥ 10).

• The functional evaluation applies only to exponentially distributed r.v.'s. Deviation from the exponential distribution vitiates the whole structure of the CTMC on which the procedure is based. More on this point later.

• We cannot implicitly incorporate aggregate resource constraints of the form Σ_{a∈A(t)} x_a ≤ R; they have to be explicitly incorporated as part of the search procedure.

5 CONCLUSIONS

This paper treats the problem of optimal resource allocation assuming uncertainty in the activity work content W which, for the sake of analytical tractability, was assumed to be exponentially distributed. Since activity work contents typically do not follow the exponential distribution, the question naturally arises whether the model constructed

Footnote 7: The abbreviation ‘udc’ refers to ‘uniformly directed cutset’; see Sigal, Pritsker and Solberg [21].


(the CTMC) will have to be discarded. Fortunately, the answer is no; the CTMC model continues to be valid, since any distribution of interest to us in this field can be approximated to any desired degree of accuracy by a ‘phase type’ distribution in which either all the phases are exponentially distributed with the same parameter, giving rise to the Erlang distribution, or the phases have different parameters {λ_k}, giving rise to the Generalized Erlang distribution. In either case, we are back to the CTMC model, but with a vastly expanded state space. Conceptually, the model presented here remains intact. The approach to effect such a representation is given in Elmaghraby et al. [9].

We now outline the challenges ahead in the future course of research in this field

which, in some sense, also represent what we did not treat in this paper.

• We have assumed that the resource availability, denoted by R, is constant over time. An obvious generalization is to assume time-varying resource availability. This would entail a radical change in the model of the process, since now absolute time must be taken into account, which would vitiate the achievement of optimality considering only changes in the states of the process.

• Our treatment is limited to one critical resource, a rare occurrence in real-life applications. Treatment of more than one resource, say three or four critical resources, is more realistic. The issue here is to achieve ‘balance’ among the resources, in the sense of avoiding idle time of any of them, while simultaneously respecting the limited availability of each resource.

• We opted to rank the activities according to a heuristic based on the classical concept of the “critical path” in the sense of the PERT model. The question lingers: is there a better ranking of the activities, so that the variation in the resource allocation is directed towards the more ‘critical’ activities?

• Our approach can be better focused, with considerable gain in computing efficiency, if the project graph G is series/parallel (s/p); see Valdes, Tarjan and Lawler [28] for a detailed description of series/parallel graphs and their recognition. Unfortunately, most project graphs are not series/parallel. This raises the issue of “transformation” from a non-s/p graph to an s/p one. The approach of Bein, Kamburowski and Stallmann [3] to effect such a transformation is unacceptable in the domain of interest to us for two good reasons: (i) it is applicable only to the AoA mode of representation, which need not be the mode in which the project is presented, and the translation of a project network from the AoN mode to the AoA mode is a formidable problem in its own right; and, most importantly, (ii) it relies on node reduction (i.e., elimination), which is inadmissible in our application because of the resultant loss of identity of the various activities. The issue remains: can one transform a graph from non-s/p into an s/p one with minimal effort in reversing the translation back to the original graph?

• There is a need for effective alternative variance reduction techniques. Past research has demonstrated that, for simulations involving stochastic activity networks, variance reduction techniques such as LHS perform better than elementary random sampling with no regard for variance minimization. These techniques are easy to implement when the random variates are generated via inverse transformation. However, when random variates are generated via acceptance-rejection techniques, these methods cannot be used with the same ease. In the case of activity networks, beta-distributed random variates are commonly used, and they are generated using the acceptance-rejection method. Hence the need for improved sampling techniques.

• Can meta-heuristics, of which there is a plethora, be used to advantage in this stochastic environment?

These and other issues are the subject of our active research in this field.


References

[1] Azaron, A., Katagiri, H. and Sakawa, M. (2007) “Time-cost trade-off via optimal control theory in Markov PERT networks,” Annals of Operations Research 15, 47-64.

[2] Barlow, R. E. and Proschan, F. (1975) Statistical Theory of Reliability and Life

Testing. Holt, Rinehart and Winston, New York.

[3] Bein, W. W., Kamburowski, J. and Stallmann, M. F. M. (1992). “Optimal reduction of two-terminal directed acyclic graphs,” SIAM J. Computing 21, 1112-1129.

[4] Birbil, S. I. and Fang, S-C. (2003) “An electromagnetism-like mechanism for global optimization”, Journal of Global Optimization 25:263-282.

[5] Demeulemeester, E. L. and Herroelen, W. S. (2002). Project Scheduling: A Research Handbook, Kluwer, ISBN 1-40207-051-9.

[6] Elmaghraby, S.E. (1964). “An Algebra for the Analysis of Generalized Activity Networks,” Management Sci. 10, 494-514.

[7] Elmaghraby, S.E. and Kamburowski, J. (1990). “On Project Representation and

Activity Floats,” Arab. J. Sci. and Eng. 15:4B ; 626-637.

[8] Elmaghraby, S.E. and Morgan, C.D. (2007) “Resource Allocation in Activity Networks Under Stochastic Conditions: A Geometric Programming-Sample Path Optimization Approach,” paper submitted to the Journal of Economics and Management.

[9] Elmaghraby, S.E., Benmansour, R., Artiba, A.H. and Allaoui, H. (2009) “Approximation of continuous distribution via the Generalized Erlang Distribution,” Proceedings of the INCOM conference, Moscow, Russia, May.

[10] Fernandez, A. A. and Armacost, R. L. (1996) “The role of the nonanticipativity constraints in commercial software for stochastic project scheduling”, Computers and Industrial Engineering 31(1/2):233-236.

[11] Golenko-Ginzburg, D. and Gonik, A. (1997) “Stochastic network project scheduling with nonconsummable limited resources”, International Journal of Production Economics 48:29-37.

[12] Gutjahr, W. J., Strauss, C. and Wagner, E. (2000) “A stochastic branch-and-bound approach to activity crashing in project management”, INFORMS Journal on Computing 12:125-135.


[13] Herroelen, W.S. and Demeulemeester, E. (1995). “Recent advances in branch-and-bound procedures for resource-constrained project scheduling problems,” Proc. Summer School on Scheduling Theory and its Applications, Chapter 12, 259-276, Chrétienne, P. et al., eds., Wiley, Chichester.

[14] Herroelen, W.S. and Leus, R. (2006) “Project scheduling under uncertainty:

Survey and research potentials,” EJOR 165, 289-306.

[15] Jørgensen, T. and Wallace, S. (1999). “Improving project cost estimation by taking into account managerial flexibility,” Chapter 3 in Jørgensen, T., Project Scheduling as a Stochastic Dynamic Decision Problem, NTNU Trondheim, Norway; 85-104. Also appearing as an ‘Extended Abstract’ in Proceedings of PMS98 Workshop, Istanbul, Turkey, July 7-9; 71.

[16] Kulkarni, V. G. and Adlakha, V. G. (1986) “Markov and Markov-Regenerative PERT networks”, Operations Research 34:769-781.

[17] Morgan, C. D. and Elmaghraby, S. E. (2007) “Resource Allocation in Activity

Networks: A Geometric Programming Model,” paper accepted for publication,

Journal of Operations and Logistics.

[18] Murty, K. G. (1983) Linear Programming. John Wiley and Sons, New York.

ISBN 047109725X.

[19] Neuts, M. F. (1989) Structured Stochastic Matrices of M/G/1 Type and their Applications. Probability: Pure and Applied - A Series of Textbooks and Reference Books.

[20] Ramachandra, G. and Elmaghraby, S. E. (2007) “Optimal resource allocation

in activity network: I. The deterministic case”, submitted for publication in IIE

Trans.

[21] Sigal, C. E., Pritsker, A. B. and Solberg, J. J. (1979). “The Use of Cutsets in Monte Carlo Analysis of Stochastic Networks”, Math. Comput. Simulation 21, 376-384.

[22] Sobel, M.J., Szmerekovsky, J.G. and Tilson, V. (2009) “Scheduling projects with

stochastic activity duration to maximize expected net present value,” EJOR 198,

697-

[23] Stork, F. (2001) Stochastic Resource-Constrained Project Scheduling. PhD thesis, Technical University of Berlin, School of Mathematics and Natural Sciences.

[24] Tereso, A. P., Araújo, M. M. and Elmaghraby, S. E. “Adaptive resource allocation in multimodal activity networks”, IJPE 92:1-10.


[25] Tereso, A. P., Araújo, M. M. and Elmaghraby, S. E. (2003) “Experimental results of an adaptive resource allocation technique to stochastic multimodal projects”. Technical report, Universidade do Minho, Guimarães, Portugal.

[26] Tereso, A. P., Araújo, M. M. and Elmaghraby, S. E. (2003) “Basic approximations to an adaptive resource allocation technique to stochastic multimodal projects”. Technical report, Universidade do Minho, Guimarães, Portugal.

[27] Tereso, A. P., Araújo, M. M. and Elmaghraby, S. E. (2004) “The optimal resource allocation in stochastic activity networks via the electromagnetism approach”. Paper presented at the Ninth International Workshop on Project Management and Scheduling (PMS '04), Nancy, France, and appearing in the Proceedings of that conference.

[28] Valdes, J., Tarjan, R., and Lawler, E. (1982). “The recognition of series-parallel

digraphs,” SIAM J. Comp. 11(2), 298-313.

[29] Valls, V., Laguna, M., Lino, P., Perez, A. and Quintanilla, S. (1998) “Project scheduling with stochastic activity interruptions”, in Project Scheduling: Recent Models, Algorithms and Applications, Kluwer Academic Publishers, 333-353.

[30] Wilde, D.J. (1964) Optimum Seeking Methods, Prentice-Hall, Englewood Cliffs,

N.J.

[31] Wollmer, R. D. (1985) “Critical path planning under uncertainty”, Mathematical Programming Study 25, 164-171.


[Figure: AoA representation of the example network; activity rates λ1 = 0.2, λ2 = 0.1, λ3 = 0.07; the cutsets UDC 1 and UDC 2 are marked.]

Figure 1: Example net: 3 activities.


[Figure: the state space for the three-activities project, with states (1a,3a), (2a,3a), (2d,3a), (2a,3d), (1a,3d) and an absorbing state.]

Figure 2: State Space of 3-activities project.


[Figure: plot of the density f(t) of the completion time over 0 ≤ t ≤ 80.]

Figure 3: Density function of the completion time.


Table 1: Expected Total Cost for Fixed Allocations (x1, x2) and Varying x3.

    (x1, x2)      x3 = 3   2.75    2.5     2.25    2       1.75    1.5     1.25    1
    (3, 3)        87.86    84.29   80.71   78.04   76.42   75.44   75.44   77.02   81.40
    (2.75, 2.75)  84.11    80.54   77.28   75.17   73.50   72.47   72.40   73.91   78.20
    (2.5, 2.5)    80.36    77.14   74.68   72.53   70.80   69.70   69.56   70.97   75.16
    (2.25, 2.25)  77.65    74.92   72.41   70.19   68.40   67.21   66.97   68.27   72.32
    (2, 2)        75.93    73.15   70.57   68.29   66.41   65.13   64.77   65.93   69.80
    (1.75, 1.75)  74.88    72.04   69.39   67.02   65.04   63.64   63.13   64.11   67.76
    (1.5, 1.5)    74.86    71.94   69.21   66.74   64.63   63.07   62.38   63.12   66.48
    (1.25, 1.25)  76.50    73.50   70.67   68.06   65.80   64.05   63.11   63.55   66.50
    (1, 1)        81.11    78.00   75.04   72.28   69.82   67.83   66.57   66.58   68.96

Figure 4: (identical to Table 1).


Figure 5: Surface plot of the expected total cost over the allocations (cf. Table 1).


6 APPENDIX:

STOCHASTIC LINEAR PROGRAMMING

6.1 BACKGROUND

Stochastic programming is a framework for modeling optimization problems that

involve uncertainty. The most widely applied and studied stochastic programming

models are the two-stage linear programs; see, for example, Murty [18]. Here the

decision maker takes some action in the first stage, after which a random event occurs

affecting the realization of some parameters of the first stage. A “recourse decision”

is then be made in the second stage that compensates for any bad effects that might

have been experienced as a result of the first stage decision. The optimal policy from

such a model is a single first stage policy and a collection of recourse decisions (a

decision rule) defining which second-stage action should be taken in response to each

random outcome in the first stage. Mathematically, this can be generically stated as:

    min   c^T x + E_ω[Q(x, ω)]
    s.t.  Ax = b,
          ℓ ≤ x ≤ u,                                                    (13)

where

    Q(x, ω) = min { q(ω)^T y : T(ω)x + W(ω)y = h(ω),  y ≥ 0 }.          (14)

In this model c, b, ℓ, u, q(ω), and h(ω) are vectors and A, T(ω), and W(ω) are matrices, all of the appropriate dimensions.

The first linear program (13) minimizes the first-stage direct costs c^T x plus the expected recourse cost E_ω[Q(x, ω)] over all of the possible scenarios of the second stage, while meeting the first-stage constraints Ax = b. The recourse cost Q depends both on x, the first-stage decision, and on the random event ω. The second LP


of (14) describes how to choose y(ω) (a different decision for each realization of the random scenario) to minimize the cost q(ω)^T y subject to the recourse constraint T(ω)x + W(ω)y = h(ω). One important aspect of stochastic programs is that the first-stage decision, x, is independent of which second-stage scenario actually occurs. This is called the ‘non-anticipatory property’.

Solution approaches to stochastic programming models are driven by the type of

probability distributions governing the random parameters. A common approach to

handling uncertainty is to define a small number of possible scenarios to represent the

future. In this case it is possible to compute a solution to the stochastic programming

problem by solving a deterministic equivalent linear program represented in (15) by

introducing a different second-stage (vector) y variable for each scenario.

    min   c^T x + Σ_{i=1}^{L} p_i q_i^T y_i
    s.t.  Ax = b,
          T_i x + W_i y_i = h_i,   i = 1, …, L,
          x ≥ 0,  y_i ≥ 0,   i = 1, …, L,                               (15)

where p_i is the probability of occurrence of scenario i and L represents the total number of scenarios chosen. Notice how the non-anticipatory requirement is met: there is only one first-stage decision vector, x, whereas there are L second-stage decisions, one for each scenario. The first-period decision cannot prefer one scenario over another and must be feasible for each scenario; that is, Ax = b and T_i x + W_i y_i = h_i for i = 1, …, L. Since we solve for all the decisions, x and {y_i}, simultaneously, we are choosing x to be (in some sense) optimal over all the scenarios.

These problems are typically very large-scale LP problems, and so much research effort in the stochastic programming community has been devoted to developing algorithms that exploit the problem structure, in particular in the hope of decomposing large problems into smaller, more tractable components. Here convexity is a key property.


When the probability distributions of the random parameters are continuous, or

there are many random parameters, one is faced with the problem of constructing

appropriate scenarios to approximate the uncertainty. One approach to this problem

constructs two different deterministic equivalent problems, the optimal solutions of

which provide upper and lower bounds on the optimal value of the original problem.

An alternative solution methodology replaces each of the random variables by a finite

random sample and solves the resulting (deterministic) mathematical programming

problem as one would do for the finite scenario case. This is often called an ‘external

sampling method ’. Under fairly mild conditions one can obtain a statistical estimate

of the optimal solution value that converges to the optimal solution as the sample

size increases.

Stochastic integer programming models arise when the decision variables are required to take on integer values. In most practical situations this entails a loss of convexity and makes the application of decomposition methods problematic. Techniques for solving stochastic integer programming models have been an active research area for some time now.

6.2 ON THE APPLICATION OF STOCHASTIC PROGRAMMING TO PROJECT RESOURCE ALLOCATION

Stochastic programming techniques have been applied to projects with probabilistic

activity durations. However, the literature is very sparse for project planning under

resource constraints in which the focus is on minimizing an economic objective. A

notable mention is the work by Valls et al. [29], in which stochasticity enters the picture in a very specific manner. We briefly discuss their model because it provides

a good understanding of the requirements for stochastic programming to be valid in

the context of projects.

They deal with the problem of resource-constrained project scheduling with stochastic activity interruptions, in which they minimize the total expected value of weighted tardiness. Further details of the problem are summarized as follows:

• The project is represented as a graph G = (A, E) in the AoA format, where A is the set of activities and E is the set of precedence relationships.

• The set of activities is defined to be the union of two subsets, A = DA ∪ SA, where:

— DA: set of deterministic activities,

— SA: set of stochastic activities, i.e., those that are interrupted for an uncertain amount of time (the initial processing time is known; the length of the interruption and the final processing time are uncertain).

• There are K renewable resource types. The availability of each resource type k in each time period is R_k units, for k = 1, …, K.

• Each activity i requires r_ik units of resource k in each period for its performance.

• The two parts of a stochastic activity in SA, namely before and after the interruption, require the same number of units of each resource.

• The processing time of the second part (after the interruption) of a stochastic activity is independent of the length of the interruption.

From the scheduling perspective, three different situations define a new decision stage, namely:

(a) the beginning of the project,

(b) the completion of an activity in DA or the completion of the first part of an activity in SA, and

(c) the end of an interruption.


At each of the three above stages, one must consider the available resources, the activities yet to be scheduled, and the precedence relationships in order to make scheduling decisions. Valls et al. [29] transform this problem into a two-stage decision problem where, in the first stage, they assign a priority to each activity and, in the second stage, upon resolution of the uncertainties of the interruptions, they construct a schedule using these priorities. Therefore a valid solution representation π is any topological ordering of the graph representation of the project.

Their objective can be stated mathematically as

    min_π ET(π) = Σ_{s∈S} p_s T_s(π),

where p_s is the probability associated with scenario s and T_s represents the total weighted tardiness for scenario s; that is,

    T_s(π) = Σ_{j∈A} w_j · max{C_js − d_j, 0},

where d_j is the due date of activity j, w_j is its tardiness weight, and C_js is the completion time of activity j under scenario s. Since the completion time of each activity depends on the ordering π, both T_s and the expected tardiness ET also depend on π.
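For a fixed ordering π, evaluating this objective is simple bookkeeping once the per-scenario completion times are in hand. A small illustration with purely hypothetical numbers:

```python
# Expected weighted tardiness ET(pi) = sum_s p_s * sum_j w_j * max(C_js - d_j, 0).
p = [0.6, 0.4]                    # scenario probabilities p_s
w = [1.0, 2.0, 1.0]               # tardiness weights w_j
d = [4.0, 7.0, 9.0]               # activity due dates d_j
C = [[3.0, 8.0, 9.5],             # completion times C_js under scenario 0
     [5.0, 7.0, 12.0]]            # completion times under scenario 1

def weighted_tardiness(completions):
    return sum(wj * max(cj - dj, 0.0) for wj, cj, dj in zip(w, completions, d))

ET = sum(ps * weighted_tardiness(Cs) for ps, Cs in zip(p, C))   # 3.1
```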

The solution procedure implemented by Valls et al. is a hybrid algorithm based on scatter search techniques. The procedure searches for improved solutions by generating priority orderings π and evaluating the quality of each ordering by scheduling the activities under each scenario.

As can easily be seen, the procedure of Valls et al. [29] essentially mimics a two-stage decision-making process. In the first stage the priorities are set, and in the second stage there is a recourse available for adjusting the schedule according to the resolution of the uncertainties. The non-anticipatory property is retained at all times because the priorities set in the first stage are not disturbed.

We now present a brief discussion to illustrate why a stochastic programming approach, similar to the one just presented, cannot be applied to our problem. Consider the simple project (AoA representation) shown in Fig. 6, and assume there is only one resource, of availability R.

Figure 6 here.

We can enumerate the ‘uniformly directed cutsets’ (udc's) [21], a cutset in which all the arrows are directed from the set X that contains the origin node into the set X̄ = N − X that contains the terminal node, and depict them in a hierarchical graph as shown in Fig. 7. The numbers within a node represent the activities in a udc, and the number on an arc indicates the completed activity. For instance, consider nodes {1, 2} and {2, 3, 4}, which are joined by an arrow with the number 1 on it. This means that when activity 1 in udc {1, 2} is completed, the ‘controlling udc’ becomes {2, 3, 4}. On the other hand, if activity 2 finishes first, then activities 3 and 4 have to wait until their predecessor activity, namely activity 1, is completed.

Figure 7 here.

The first problem encountered in this context is the identification of a “stage”. A

stage can be thought of as an epoch in time when one activity has completed and

another activity can start. In our case, a logical possibility would be to consider

activities processed in a udc as a stage. In other words, processing one udc after

another would translate into proceeding stage-wise. As one can see, this itself presents

a very daunting challenge. Currently, most stochastic programs are modeled as two-stage programs; modeling and solving multi-stage stochastic programs is quite hard.

Apart from the modeling difficulty, there is another issue that needs to be considered, viz., the non-anticipatory character of the procedure: the first-stage decision, namely the assignment of the priority π, must be independent of which scenarios occur in the subsequent stages. Defining and interpreting the nature of a decision in a given stage is unclear, which makes it even more difficult to ensure the non-anticipatory property. To understand the intricacy of the problem, consider a particular scenario related to the example project in Fig. 6.

Assume there is an algorithm designed for adaptive resource allocation; that is, it proceeds stage by stage, while performing some sort of ‘look ahead’ action in the allocation of resources. The graph shown in Fig. 7 reveals an interesting problem that arises in this scheme of adaptive resource allocation. Assume that, in the first stage, we have obtained the vector (x1*, x2*) that minimizes the expected total cost. We now ‘fix’ the allocation of these two activities and proceed to obtain the optimal values of the decision variables x3 and x4. When we sample the work content of activities {1, 2, 3, 4}, we shall know, based on (x1*, x2*) obtained in the previous step, whether node 2 or node 3 is realized first. If node 2 is realized first, we have to optimize (x3, x4) over the updated resource availability, namely R − x2*. On the other hand, if node 3 is realized first, we have to wait until activity 1 finishes, and then optimize (x3, x4) over the original resource availability R. Here comes the dilemma: we have optimized (x3, x4) over two different values of resource availability. How do we reconcile these two results? This will be the case whenever two arcs terminate at any given node in the udc hierarchy graph.

The preceding illustration demonstrates the inapplicability of this type of stochastic programming model to our problem when the udc's represent the stages. While this might be the intuitive way to identify a stage in the context of our problem, there could very well be other ways to model the problem and apply stochastic programming to solve it. We conjecture, however, that enforcing non-anticipativity constraints in a sequential decision process such as ours is not possible.


Figure 6: An example project of 9 activities.


Figure 7: A hierarchical graph of the udc's (partial).
