
Page 1: Continuous-Time Dynamic Shortest Path Algorithms

Continuous-Time Dynamic Shortest Path Algorithms

by

BRIAN C. DEAN

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degrees of

Bachelor of Science in Electrical Engineering and Computer Science and Master of Engineering in Computer Science

at the

MASSACHUSETTS INSTITUTE OF TECHNOLOGY

May 21, 1999

© Massachusetts Institute of Technology 1999. All rights reserved.

Author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Department of Electrical Engineering and Computer Science

May 21, 1999

Certified by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Ismail Chabini

Assistant Professor
Department of Civil and Environmental Engineering

Massachusetts Institute of Technology
Thesis Supervisor

Accepted by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Arthur C. Smith

Chairman, Departmental Committee on Graduate Theses



Continuous-Time Dynamic Shortest Path Algorithms

by
Brian C. Dean

Submitted to the Department of Electrical Engineering and Computer Science on May 21, 1999, in partial fulfillment of the requirements for the degrees of

Bachelor of Science in Electrical Engineering and Computer Science and Master of Engineering in Computer Science.

Abstract

We consider the problem of computing shortest paths through a dynamic network – a network with time-varying characteristics, such as arc travel times and costs, which are known for all values of time. Many types of networks, most notably transportation networks, exhibit such predictable dynamic behavior over the course of time. Dynamic shortest path problems are currently solved in practice by algorithms which operate within a discrete-time framework. In this thesis, we introduce a new set of algorithms for computing shortest paths in continuous-time dynamic networks, and demonstrate for the first time in the literature the feasibility and the advantages of solving dynamic shortest path problems in continuous time. We assume that all time-dependent network data functions are given as piece-wise linear functions of time, a representation capable of easily modeling most common dynamic problems. Additionally, this form of representation and the solution algorithms developed in this thesis are well suited for many augmented static problems such as time-constrained minimum-cost shortest path problems and shortest path problems with time windows.

We discuss the classification, formulation, and mathematical properties of all common variants of the continuous-time dynamic shortest path problem. Two classes of solution algorithms are introduced, both of which are shown to solve all variants of the problem. In problems where arc travel time functions exhibit First-In-First-Out (FIFO) behavior, we show that these algorithms have polynomial running time; although the general problem is NP-hard, we argue that the average-case running time for many common problems should be quite reasonable. Computational results are given which support the theoretical analysis of these algorithms, and which provide a comparison with existing discrete-time algorithms; in most cases, continuous-time approaches are shown to be much more efficient, both in running time and storage requirements, than their discrete-time counterparts. Finally, in order to further reduce computation time, we introduce parallel algorithms, and hybrid continuous-discrete approximation algorithms which exploit favorable characteristics of algorithms from both domains.

Thesis Supervisor: Ismail Chabini
Title: Assistant Professor, Department of Civil and Environmental Engineering, Massachusetts Institute of Technology


Acknowledgments

There are many people whom I would like to thank for their support and encouragement

during the process of writing this thesis.

I wish to thank my thesis supervisor, Professor Ismail Chabini, for his numerous

insightful comments which helped me improve the quality of the text, and in general for

his motivation and support.

I am grateful to my friends and colleagues in the MIT Algorithms and Computation for

Transportation Systems (ACTS) research group for their encouragement, friendship, and

advice.

I received helpful editorial advice from my friends Boris Zbarsky and Nora Szasz, and

my discussions with both Ryan Rifkin and my sister Sarah were instrumental in the

development of proofs of polynomial bounds on running time for several algorithms.

Finally, I would like to thank my girlfriend Delphine for her patience and encouragement

during some very busy times, and for providing me with crucial insight into the most

difficult proof in the entire text. I dedicate this work to her.


Contents

1. INTRODUCTION ......................................................................................................... 7

1.1 OVERVIEW OF RESEARCH ........................................................................................ 8
1.2 THESIS OUTLINE ....................................................................................................... 9

2. LITERATURE REVIEW ............................................................................................ 12

2.1 ALGORITHMS FOR FIFO NETWORKS ...................................................................... 12
2.2 DISCRETE-TIME MODELS AND ALGORITHMS ......................................................... 13
2.3 PREVIOUS RESULTS FOR CONTINUOUS-TIME PROBLEMS ....................................... 16

3. CONTINUOUS-TIME MODELS .............................................................................. 18

3.1 PIECE-WISE LINEAR FUNCTION NOTATION ............................................................ 18
3.2 NETWORK DATA NOTATION ................................................................................... 19
3.3 DESCRIPTION OF PROBLEM VARIANTS ................................................................... 22
3.4 SOLUTION CHARACTERIZATION ............................................................................. 24

4. PROBLEM FORMULATION AND ANALYSIS .................................................... 26

4.1 MATHEMATICAL FORMULATION ............................................................................ 26
4.2 SOLUTION PROPERTIES ........................................................................................... 29
4.3 PROPERTIES OF FIFO NETWORKS .......................................................................... 33
4.4 SYMMETRY AND EQUIVALENCE AMONG MINIMUM-TIME FIFO PROBLEMS .......... 38

5. PRELIMINARY ALGORITHMIC RESULTS FOR MINIMUM-TIME FIFO PROBLEMS .................................................................................................................... 43

5.1 FIFO MINIMUM-TIME ONE-TO-ALL SOLUTION ALGORITHM ................................. 43
5.2 PARALLEL ALGORITHMS FOR PROBLEMS IN FIFO NETWORKS .............................. 45

6. CHRONOLOGICAL AND REVERSE-CHRONOLOGICAL SCAN ALGORITHMS ............................................................................................................... 50

6.1 FIFO MINIMUM-TIME ALL-TO-ONE SOLUTION ALGORITHM ................................. 50
6.2 GENERAL ALL-TO-ONE SOLUTION ALGORITHM .................................................... 61
6.3 GENERAL ONE-TO-ALL SOLUTION ALGORITHM .................................................... 71
6.4 RUNNING TIME ANALYSIS ...................................................................................... 83
6.5 HYBRID CONTINUOUS-DISCRETE METHODS .......................................................... 87

7. LABEL-CORRECTING ALGORITHMS ................................................................ 90

7.1 ALL-TO-ONE SOLUTION ALGORITHM .................................................................... 91
7.2 ONE-TO-ALL SOLUTION ALGORITHM .................................................................... 93
7.3 RUNNING TIME ANALYSIS ...................................................................................... 95

8. COMPUTATIONAL RESULTS ................................................................................ 98

8.1 NETWORK GENERATION ......................................................................................... 98
8.2 DESCRIPTION OF IMPLEMENTATIONS ..................................................................... 99
8.3 ASSESSMENT OF ALGORITHMIC PERFORMANCE .................................................. 101

9. CONCLUSIONS ....................................................................................................... 108

9.1 SUMMARY OF RESULTS ....................................................................................... 108
9.2 FUTURE RESEARCH DIRECTIONS ......................................................................... 110

APPENDICES ............................................................................................................... 112

APPENDIX A. THE MIN-QUEUE DATA STRUCTURE .................................................... 112
APPENDIX B. RAW COMPUTATIONAL RESULTS ......................................................... 113

REFERENCES .............................................................................................................. 115


Chapter 1

Introduction

Almost all networks have characteristics which vary with time. Many networks such as

transportation networks and data communication networks experience predictable rising

and falling trends of utilization over the course of a day, typically peaking at some sort of

“rush-hour”. Algorithms which account for the dynamic nature of such networks will be

able to better optimize their performance; however, development of these algorithms is

challenging due to the added complexity introduced by the network dynamics, and by the

high demands they typically place on computational resources.

In the literature, one encounters several different definitions of what constitutes a

dynamic network. In general, a dynamic network is a network whose characteristics

change over time. These networks typically fall into one of two main categories. In the

first category, the future characteristics of the network are not known in advance.

Algorithms which optimize the performance of this type of network are sometimes called

re-optimization algorithms, since they must react to changes in network conditions as

they occur by making small changes to an existing optimal solution so that it remains

optimal under new conditions. These algorithms often closely resemble static network

optimization algorithms, since they are intended to solve a succession of closely-related

static problems. In the second category, if one knows in advance the predicted future

conditions of the network, it is possible to optimize the performance of the network over

all time in a manner which is consistent with the anticipated future characteristics of the

network. Algorithms designed for this second class of dynamic networks typically

represent a further departure from the familiar realm of static network optimization

algorithms. In this thesis, we focus exclusively on this second class of dynamic

networks, for which time-varying network characteristics are known for all time.


1.1 Overview of Research

This thesis considers the fundamental problem of computing shortest paths through a

dynamic network. In some networks, most notably electronic networks, traditional static

algorithms are often sufficient for such a task, since network characteristics change very

little over the duration of transmission of any particular packet, so that the network is

essentially static for the lifetime of a packet. However, in other network applications,

primarily in the field of transportation, network characteristics may change significantly

during the course of travel of a commodity over the network, calling for more

sophisticated, truly dynamic algorithms. To date, all practical algorithms which solve

dynamic shortest path problems work by globally discretizing time into small increments.

These discrete-time algorithms have been well-studied in the past, and solution

algorithms with optimal running time exist for all common variants of the problem.

Unfortunately, there are several fundamental drawbacks inherent in all discrete-time

approaches, such as very high memory and processor time requirements, which

encourage us to search for more efficient methodologies. Although the literature contains

some theoretical results for continuous-time problems, to date no practical algorithms

have been shown to efficiently solve realistic dynamic shortest path problems in

continuous-time. The goal of this thesis is to demonstrate not only the feasibility of

continuous-time methods, but also to show that the performance of new continuous-time

approaches is in many cases superior to that of existing discrete-time approaches. In

doing so, we develop a new set of highly-efficient serial and parallel algorithms for

solving all common variants of the dynamic shortest path problem in continuous-time,

and we provide an extensive theoretical and computational study of these algorithms.

The representation of time-dependent data is an issue which deserves careful

consideration in any continuous-time model. In this thesis, we impose the simplifying

assumption that all time-dependent network data functions are specified as piece-wise

linear functions of time. Our solution algorithms and their analyses will depend strongly

on this assumption, and we further argue that it is this simplifying assumption which

makes the continuous-time dynamic shortest path problem truly computationally feasible

in practice. First, piece-wise linear functions are easy to store and manipulate, and they


can easily approximate most “reasonable” time-dependent functions. Additionally, if arc

travel time functions are piece-wise linear, then path travel time functions will also be

piece-wise linear, since we shall see that path travel time functions are given essentially

by a composition of the travel time functions of the arcs along a given path. Almost any

other choice of representation for arc travel time functions results in extremely unwieldy

path travel time functions. For example, the use of quadratic functions to describe arc

travel times results in arbitrarily-high degree polynomial path travel time functions,

which are cumbersome to store and evaluate. Finally, piece-wise linear functions make

sense from a modeling perspective: in order to solve for optimal dynamic shortest paths,

we must know the anticipated future characteristics of the network, and often these are

not known with enough precision and certainty to warrant a more complicated form of

representation. It is common to anticipate either increasing or decreasing trends in

network data, or sharp discontinuities at future points in time when we know for example

that travel along an arc may be blocked for a certain duration of time. Piece-wise linear

functions are appropriate for modeling this type of simple network behavior. Certain

variants of static shortest path problems, such as time-constrained minimum-cost

problems and problems involving time windows are also easy to represent within this

framework, as they involve arc travel times and costs which are piece-wise constant

functions of time.
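To make the representation concrete, here is a small Python sketch (our illustration, not code from the thesis) of a piece-wise linear function stored as a list of breakpoints, together with the arc-by-arc composition that yields a path's arrival time; the breakpoint values and arc names are hypothetical.

```python
from bisect import bisect_right

def make_pwl(breakpoints):
    """Piece-wise linear function given as sorted (t, value) breakpoints,
    interpolated linearly between them and held constant outside them."""
    ts = [t for t, _ in breakpoints]
    def f(t):
        if t <= ts[0]:
            return breakpoints[0][1]
        if t >= ts[-1]:
            return breakpoints[-1][1]
        i = bisect_right(ts, t) - 1
        (t0, v0), (t1, v1) = breakpoints[i], breakpoints[i + 1]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)  # linear interpolation
    return f

# Hypothetical arc travel time functions for arcs (a, b) and (b, c):
d_ab = make_pwl([(0.0, 5.0), (10.0, 8.0), (20.0, 5.0)])  # congestion peak at t = 10
d_bc = make_pwl([(0.0, 3.0), (15.0, 6.0)])

def path_arrival(depart, arc_fns):
    """Arrival time along a path: compose the arcs' arrival functions,
    t -> t + d(t), one arc at a time."""
    t = depart
    for d in arc_fns:
        t = t + d(t)
    return t

# Departing a at t = 0: arrive b at 0 + 5 = 5, arrive c at 5 + d_bc(5) = 9.
print(path_arrival(0.0, [d_ab, d_bc]))  # 9.0
```

Since each arc's arrival function t + d(t) is itself piece-wise linear, the composition along a path remains piece-wise linear, which is the closure property the paragraph above relies on.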

1.2 Thesis Outline

The material in this thesis is organized as follows:

In Chapter 2, we review previous results from the literature. We discuss key concepts

underlying discrete-time dynamic shortest path algorithms, so as to compare these

approaches with the new continuous-time algorithms developed in later chapters.

Previous theoretical and algorithmic results related to the continuous-time dynamic

shortest path problem are also examined.

Chapter 3 introduces the notation associated with continuous-time models, and classifies

the different variants of the continuous-time dynamic shortest path problem. We will

generally categorize problems by objective, based on whether we desire minimum-time


or more general minimum-cost paths, and we will consider three fundamental problem

variants based on configuration of source and destination nodes and times: the one-to-all

problem for one departure time, the one-to-all problem for all departure times, and the

all-to-one problem for all departure times.

In Chapter 4, we provide a mathematical formulation of these problem variants, and

discuss important mathematical properties of these problems and their solutions.

Optimality conditions are given, and it is proven that a solution of finite size always

exists for all problem variants. We devote considerable attention to the discussion of

properties of networks which exhibit First-In-First-Out, or FIFO behavior (defined later),

and we will show that the minimum-time one-to-all problem for all departure times is

equivalent through symmetry to the minimum-time all-to-one problem in a FIFO

network.

Chapter 5 is the first of three core chapters which contain the algorithmic developments

of the thesis. Within this chapter, we focus on fundamental results which apply to FIFO

networks. We first describe methods of adapting static shortest path algorithms to

compute minimum-time paths from a source node and a single departure time to all other

nodes in a FIFO dynamic network – this is an important well-established result in the

literature. Additionally, we present a new technique for developing parallel adaptations

of any algorithm which computes all-to-one minimum-time paths or one-to-all minimum-

time paths for all departure times in a FIFO network, in either continuous or discrete

time.

The remaining algorithmic chapters describe two broad classes of methods which can be

applied to solve any variant of the continuous-time dynamic shortest path problem, each

with associated advantages and disadvantages. In Chapter 6 we introduce a new class of

solution algorithms, which we call chronological scan algorithms. We first present a

simplified algorithm which addresses a simple, yet common problem variant: the

minimum-time all-to-one dynamic shortest path problem in networks with FIFO

behavior. The simplified algorithm is explained first because it provides a gradual

introduction to the methodology of chronological scan algorithms, and because it can be


shown to solve this particular variant of the problem in polynomial time. We then present

general extended chronological scan solution algorithms which solve all variants of the

dynamic shortest path problem, and hybrid continuous-discrete adaptations of these

algorithms which compute approximate solutions and require less computation time.

In Chapter 7, we discuss a second class of solution methods, which we call label-

correcting algorithms, based on previous theoretical results obtained by Orda and Rom

[16]. In FIFO networks, these algorithms are shown to compute minimum-time shortest

paths in polynomial time, although they have a higher theoretical computational

complexity than the corresponding chronological scan algorithms.

Chapter 8 contains the results of extensive computational testing. The two primary

classes of algorithms developed within this thesis, chronological scan algorithms and

label-correcting algorithms, are evaluated for a wide range of both randomly generated

and real networks, and shown to be roughly equivalent in performance. We also compare

the performance of these techniques with that of the best existing discrete-time dynamic

shortest path algorithms, and show that in many cases the continuous-time approaches are

much more efficient, both in terms of processing time and space.

Finally, we give concluding remarks in Chapter 9, including a summary of the results

developed within this thesis, and suggestions for future directions of research.


Chapter 2

Literature Review

In this chapter, we briefly outline previous results from the literature so as to frame our

new results within a historical perspective. Well-known previous results for computing

shortest paths within FIFO networks are reviewed. We then discuss methods used to

address discrete-time problems; these methods will be compared with new continuous-

time approaches in later chapters. Finally, existing results which relate to the continuous-

time case are reviewed.

2.1 Algorithms for FIFO Networks

It is common for many dynamic networks, for example transportation networks, to satisfy

the well-known First-In-First-Out, or FIFO property. This property is mathematically

described in Section 3.2; intuitively, it stipulates that commodities exit from an arc in the

same order as they entered, so that delaying one’s departure along any path never results

in an earlier arrival at an intended destination. One of the most celebrated results in the

literature on this subject is the fact that the minimum-time dynamic shortest paths

departing from a single source node at a single departure time may be computed in a

FIFO network by a slightly modified version of any static label-setting or label-correcting

shortest path algorithm, where the running time of the modified algorithm is equal to that

of its static counterpart. This result was initially proposed by Dreyfus [10] in 1969, and

later shown to hold only in FIFO networks by Ahn and Shin [1] in 1991, and Kaufman

and Smith [14] in 1993. As many static shortest path algorithms are valid for both

integer-valued and real-valued arc travel times, modified static algorithms which solve

the minimum-time dynamic shortest path problem from a single source node and

departure time can be adopted for dynamic network models in either discrete time or

continuous time. Such modified static algorithms are discussed in Section 5.1.
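To illustrate this classical result, the sketch below (our illustration, not code from the cited papers) modifies static Dijkstra's algorithm for a FIFO dynamic network: the only change from the static algorithm is that each arc's travel time is evaluated at the arrival label of its tail node. The toy network and its travel-time functions are invented for the example.

```python
import heapq

def fifo_dijkstra(adj, source, t0):
    """Earliest arrival times from `source`, departing at time t0, in a FIFO
    dynamic network. adj[i] is a list of (j, d_ij) pairs, where d_ij(t) is
    the travel time of arc (i, j) when departing node i at time t."""
    arrival = {source: t0}
    heap = [(t0, source)]
    settled = set()
    while heap:
        t, i = heapq.heappop(heap)
        if i in settled:
            continue
        settled.add(i)
        for j, d in adj.get(i, []):
            tj = t + d(t)  # FIFO guarantees relaxing at label t is optimal
            if tj < arrival.get(j, float('inf')):
                arrival[j] = tj
                heapq.heappush(heap, (tj, j))
    return arrival

# Toy FIFO network (constant travel times trivially satisfy FIFO):
adj = {
    'a': [('b', lambda t: 5.0), ('c', lambda t: 2.0)],
    'c': [('b', lambda t: 1.0)],
}
print(fifo_dijkstra(adj, 'a', 0.0)['b'])  # 3.0 (via c: arrive c at 2.0, then +1.0)
```

Without the FIFO property this label-setting argument breaks down, since a later departure along some arc could overtake an earlier one.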

As the so-called “minimum-time one-to-all for one departure time” dynamic shortest path

problem in a FIFO network is well-solved, this thesis focuses on the development of


algorithms for all other common problem variants of the continuous-time dynamic

shortest path problem. For the case of FIFO networks, we will discuss methods for

computing shortest paths from all nodes and all departure times to a single destination

node, and shortest paths from a source node to all destination nodes for all possible

departure times, rather than a single departure time. Furthermore, we discuss solution

algorithms for problem variants in networks which may not necessarily satisfy the FIFO

property, and algorithms which determine paths which minimize a more general travel

cost rather than just the travel time.

2.2 Discrete-Time Models and Algorithms

The dynamic shortest path problem was initially proposed by Cooke and Halsey [7] in

1966, who formulated the problem in discrete time and provided a solution algorithm.

Since then, several authors have proposed alternative solution algorithms for some

variants of the discrete-time dynamic shortest path problem. In 1997, Chabini [3]

proposed a theoretically optimal algorithm for solving the basic all-to-one dynamic

shortest path problem, and Dean [9] developed extensions of this approach to solve a

more general class of problems with waiting constraints at nodes. Pallottino and Scutella

(1998) provide additional insight into efficient solution methods for the all-to-one and

one-to-all problems. Finally, Chabini and Dean (1999) present a detailed general

framework for modeling and solving all common variants of the discrete-time problem,

along with a complete set of solution algorithms with provably-optimal running time.

We proceed to discuss the methods used to model and solve discrete-time problems.

Consider a directed network G = (N, A) with n = |N| nodes and m = |A| arcs. Denote by

dij[t] and cij[t] the respective travel time and travel cost functions of an arc (i, j) ∈ A, so

that dij[t] and cij[t] give the respective travel time and cost incurred by travel along (i, j)

if one departs from node i at time t. For clarity, we use square brackets to indicate

discrete functions of a discrete parameter. If we want to minimize only travel time rather

than a more general travel cost, then we let cij[t] = dij[t]. In order to ensure the

representation of these time-dependent functions within a finite amount of memory, we

assume they are defined over a finite window of time t ∈ {0, 1, 2, …, T}, where T is


determined by the total duration of the time interval under consideration and by the

granularity of the discretization we apply to time in this interval. Beyond time T, we can

either disallow travel, or assume that network conditions remain static and equal to the

value they assumed at time T.

It is easy to visualize and solve a discrete-time dynamic shortest path problem by

constructing a time-expanded network, which is a static network that encapsulates the

dynamic characteristics of G. The time-expanded network is formed by making a

separate copy of G for every feasible value of time – its nodes are of the form (i, t), where

i ∈ N and t ∈ {0, 1, 2, …, T}, and its arcs are of the form ((i, t), (j, min{T, t + dij[t]})),

where (i, j) ∈ A and t ∈ {0, 1, 2, …, T}, and where the travel cost of every such arc is

given by cij[t]. Since paths through the time-expanded network correspond exactly in

cost and structure to paths through the dynamic network G, we can reduce the problem of

computing minimum-cost dynamic paths through G to the equivalent problem of

computing minimum-cost paths through the static time-expanded network, a problem

which we may solve by applying any of a host of well-known static shortest path

algorithms to the time-expanded network. However, since the time-expanded network

has a special structure, specialized static shortest path algorithms may be employed to

achieve a more efficient running time. This special structure is apparent from the

construction of the network: there are multiple levels corresponding to the different

copies of the network G for each value of time, and no arc points from a node of a later

time-level to a node of an earlier time-level. The time-expanded network is therefore

“multi-staged” – it is acyclic with the exception of arcs potentially connecting nodes

within the same level of time.
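The construction just described can be sketched directly from its definition; the following Python fragment is our illustration, and the array-based data layout (per-time-step lists of travel times and costs) is an assumed representation, not the thesis's.

```python
def build_time_expanded(arcs, T):
    """Static time-expanded network for a discrete-time dynamic network.
    `arcs` maps (i, j) to (d, c), where d[t] and c[t] are the travel time
    and cost for departure from i at time t in {0, ..., T}. Returns the
    static arc list of ((i, t), (j, min(T, t + d[t])), c[t]) triples."""
    expanded = []
    for (i, j), (d, c) in arcs.items():
        for t in range(T + 1):
            expanded.append(((i, t), (j, min(T, t + d[t])), c[t]))
    return expanded

# One arc (a, b) over the window T = 2, with per-time-step data:
arcs = {('a', 'b'): ([1, 2, 1], [3, 5, 3])}
for arc in build_time_expanded(arcs, 2):
    print(arc)
# (('a', 0), ('b', 1), 3)
# (('a', 1), ('b', 2), 5)   <- arrival 1 + 2 = 3 is clipped to T = 2
# (('a', 2), ('b', 2), 3)
```

Note that every generated arc goes from a level-t copy to a level-t' copy with t' ≥ t, which is exactly the "multi-staged" acyclic structure described above.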

The running time of solution algorithms for the discrete-time problem depends on the

problem type, but worst case asymptotic running time is almost always pseudo-

polynomial in T (the only significant exception is the aforementioned minimum-time

one-to-all problem in a FIFO network, which is solvable in the same amount of time as a

static shortest path problem). A discrete-time solution algorithm will typically need to

examine a significant portion of the nodes and arcs in the time-expanded network,


leading to a worst-case lower bound of Ω(nT + mT) on running time. This lower bound

is also often close to the average-case running time of most discrete-time solution

algorithms. Since T usually takes values in the hundreds or thousands, discrete-time

approaches can have very high computational demands. Furthermore, the amount of

memory required by these algorithms is typically Ω(nT) in the worst (and often average)

case, as one needs to assign cost labels to the nodes of the time-expanded network in

order to compute shortest paths through this network.

Although discrete-time solution algorithms are straightforward to design and implement,

they have many drawbacks, including primarily their high storage and processing time

requirements. Additionally, the high running time of a discrete-time algorithm often

doesn’t reflect the true complexity of the dynamics of the underlying problem. For

example, consider a problem in which the travel cost of only one arc changes at exactly

one point in time. In fact, this is a common scenario, since time-constrained minimum-

cost static shortest path problems may be modeled as dynamic problems in which one arc

changes its travel cost (to infinity) at exactly one point in time. In order to obtain a

solution to this problem using discrete-time methods, one must often do as much work as

in a fully-dynamic problem in which all network data is constantly changing – simplicity

in the time-dependent network data very rarely translates to a lower running time. Also,

in most dynamic networks, a high percentage of the network is actually static. For

example, in a transportation network, major roads exhibit dynamic characteristics due to

changes in congestion over the course of a day, but lesser-traveled minor roads,

comprising a large percentage of the network, are for the most part unaffected by

congestion delays and remain essentially static. Discrete-time algorithms typically offer

no corresponding decrease in running time as a reward for this simplicity, as their

performance is determined for the most part only by n, m, and T, and not by the structure

of the time-dependent network data. In contrast, we will see in forthcoming chapters that

the performance of the new continuous-time algorithms developed in this thesis will be

determined primarily by the complexity of the network data, so that their running time is

more closely correlated with the true complexity of a problem, and so that they may solve

simple dynamic problems with much greater efficiency.


2.3 Previous Results for Continuous-Time Problems

The only significant previous algorithmic results in the literature related to the

continuous-time dynamic shortest path problem are due to Halpern [12] in 1977, and

Orda and Rom [15], [16] in 1990 and 1991. These papers consider the problem variant of

computing optimal paths through a continuous-time dynamic network originating at a

single source node, where waiting is allowed within certain fixed intervals of time at each

node. Halpern considers only the case of computing paths of minimum time, while Orda

and Rom consider the more general case of computing paths of minimum cost. They

each propose solution algorithms, and argue their correctness and termination within a

finite amount of time. The work of Orda and Rom is much more comprehensive than

that of Halpern, and identifies problems for which the algorithm of Halpern will fail to

produce an optimal solution within a finite amount of time.

The primary shortcoming of the algorithms of Halpern and those of Orda and Rom is that

they are presented as theoretical, rather than as practical results. These algorithms rely on

operations on general continuous-time functions as their fundamental operations; hence,

these algorithms as stated can be prohibitively complicated to implement, let alone

implement efficiently. To date no authors have reported on any computational evaluation

of these algorithms, except for one case in which the algorithm of Halpern was evaluated

for a static network with time windows (feasible regions of time during which one’s visit

to a node must fall).

There are two primary approaches for creating a feasible implementation of the

algorithms of Halpern and of Orda and Rom: either one must discretize time, in which

case one enters the discrete-time domain, in which superior algorithms exist, or

alternatively one may impose a simplifying piece-wise linearity assumption on the

continuous-time network data as is done in this thesis. Under piece-wise linearity

assumptions, the methodology underlying the algorithms of Halpern and of Orda and

Rom forms the basis for the second of two classes of solution algorithms discussed in this

thesis, which we call label-correcting algorithms. In Chapter 7, we describe this broad

class of solution algorithms, including the special cases developed by these authors, and


provide a thorough theoretical analysis of their performance. We show that these

algorithms will run in polynomial time when used to compute minimum-time paths

through FIFO networks. The other class of solution algorithms, which we call

chronological scan algorithms, is presented in Chapter 6; chronological scan algorithms

are shown to have stronger worst-case theoretical running times than label-correcting

algorithms.

Ioachim, Gelinas, Soumis, and Desrosiers [13] address the question of computing optimal

static shortest paths through an acyclic network in the presence of time windows.

Waiting at nodes is allowed during these windows, and accrues a cost which is linear in

the amount of time spent waiting. The authors devise a solution algorithm which is

relevant to the work in this thesis because it produces piece-wise linear path travel costs.

This result may be seen as a very special case of the general framework developed in this

thesis, since it is restricted in focus only to static, acyclic networks. The continuous-time

dynamic models developed in this thesis provide a more general approach for

representing and solving a much wider class of problems, both dynamic and static,

involving time windows.


Chapter 3

Continuous-Time Models

In this chapter, we discuss notation for continuous-time models, and describe the

different variants of continuous-time dynamic shortest path problems.

3.1 Piece-Wise Linear Function Notation

The results in this thesis depend strongly on the simplifying assumption that network data

functions are given as piece-wise linear functions of time. We assume that all piece-wise

linear network data functions have a finite number of pieces, and by convention we treat

all points on the boundary between two linear pieces of a function as belonging to the

piece to the left of the boundary, unless the boundary point is itself a linear piece of zero extent.

We assume, however, that all network data functions given as input to a dynamic shortest

path algorithm will contain only linear pieces of strictly positive extent.

Since there may be several piece-wise linear functions involved with the specification of

a single dynamic shortest path problem, we adopt the following general notation for a

piece-wise linear function f(t):

P(f) : Number of pieces into which f is divided.

B(f, k) : The right boundary of the kth piece of f. By definition, let B(f, 0) = -∞ and B(f, P(f)) = +∞.

α(f, k) : Linear coefficient of the kth linear piece of f.

β(f, k) : Constant term in the kth linear piece of f.

We may therefore specify an arbitrary piece-wise linear function f(t) as follows.

f(t) = f(1)(t)       if -∞ < t ≤ B(f, 1)
       f(2)(t)       if B(f, 1) < t ≤ B(f, 2)
       f(3)(t)       if B(f, 2) < t ≤ B(f, 3)
       ...
       f(P(f))(t)    if B(f, P(f) – 1) < t < +∞    (3.1)

f(k)(t) = α(f, k) t + β(f, k)    (3.2)
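The representation above can be sketched directly in code. The following is a minimal illustration (the storage scheme and function names are our own, not part of the thesis): a piece-wise linear function is held as its interior right boundaries B(f, 1), …, B(f, P(f) – 1) together with the coefficients α(f, k) and β(f, k), and is evaluated by locating the piece containing t.

```python
from bisect import bisect_left

def make_pwl(boundaries, alphas, betas):
    """Evaluator for a piece-wise linear function in the notation above:
    boundaries[k-1] = B(f, k) for k = 1 .. P(f)-1 (interior right boundaries),
    alphas[k-1] = alpha(f, k), betas[k-1] = beta(f, k)."""
    def f(t):
        # bisect_left returns the first piece whose right boundary is >= t,
        # so a boundary point falls into the piece to its left, matching
        # the convention adopted above.
        k = bisect_left(boundaries, t)
        return alphas[k] * t + betas[k]
    return f

# Example: f(t) = 2t for t <= 1, and f(t) = t + 1 for t > 1.
f = make_pwl(boundaries=[1.0], alphas=[2.0, 1.0], betas=[0.0, 1.0])
```

Note how the boundary-ownership convention falls out of the choice of bisection routine rather than requiring a special case.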


If some time t marks the boundary between two linear pieces of a function, we say that

this function improves at time t if either it is discontinuous and drops to a lower value

after time t, or if it is continuous and its derivative drops to a lower value after time t.

Likewise, we say that a function worsens at a boundary time t if either it is discontinuous

and jumps to a higher value after time t, or if it is continuous and its derivative jumps to a

higher value after time t. This terminology is used because in this thesis we will seek the

minimum value of a piece-wise linear objective function, and a decrease of this function

or its derivative will correspond to an improvement in the objective. It is possible for a

piece-wise linear function to both improve and worsen at a boundary point, if both the

function and its derivative are discontinuous at that point. For example, the function

could improve by jumping to a lower value at time t and simultaneously worsen by its

derivative jumping to a higher value at time t. Finally, we say that a function is simple

over a region if it contains no piece-wise boundaries within that region.

In all of our models, there is no particular significance attached to the time t = 0, or to

times which are positive. We will always consider functions of time to be defined over

all real values of time, and we will compute solutions which represent network conditions

over all values of time. The choice of origin in time can therefore be arbitrary.

3.2 Network Data Notation

Let G = (N, A) be a directed network with n = |N| nodes and m = |A| arcs. Let Ai denote

the set {j | (i, j) ∈ A} of nodes after node i, and let Bj = {i | (i, j) ∈ A} denote the set of

nodes before node j. For each arc (i, j) ∈ A, we denote by dij(t) the travel time of a

commodity entering the arc at time t. It is assumed that all arc travel time functions are

piece-wise linear, finite-valued, and greater than some arbitrarily-small positive value.

On occasion, instead of arc travel time functions, it will be more convenient to work with

arc arrival time functions, defined as aij(t) = t + dij(t). We define the quantity P* to be

the total number of linear pieces present among all arc travel time functions:

P* = Σ_{(i, j) ∈ A} P(dij).    (3.3)


We denote by the piece-wise linear function cij(t) the general cost of traveling along an

arc (i, j) ∈ A, departing at time t. As time approaches +∞, we assume that all travel time

and travel cost functions either increase linearly or remain static and non-negative;

otherwise, least-cost paths of unbounded duration may occur. For the same reason, it is

also assumed that the initial linear pieces of each arc travel time and arc cost function are

constant (and also non-negative, in the case of travel cost); that is, there must be some

point in time before which the network is entirely static. This assumption is generally

not restrictive in practice, because we are typically only interested in computing optimal

paths within the region of time which extends from the present forward.

We say that a function f(t) satisfies the First-In-First-Out, or FIFO property if the

function g(t) = t + f(t) is non-decreasing. A piece-wise linear function f(t) will satisfy

the FIFO property if and only if α(f, k) ≥ -1 for every k ∈ {1, 2, …, P(f)} and there are no discontinuities at which f(t) drops to a lower value. We describe an arc (i, j) ∈ A

as a FIFO arc if dij(t) satisfies the FIFO property, or equivalently if aij(t) is non-

decreasing. If all arcs (i, j) ∈ A are FIFO arcs, we say that G is a FIFO network.

Commodities will exit from a FIFO arc in the same order as they entered, and

commodities traveling along the same path in a FIFO network will exit from the path in

the same order as they entered. As we shall see, this special characteristic enables the

application of efficient techniques to compute minimum-time paths in FIFO networks.

Finally, if g(t) represents the “arrival time” function t + f(t) of a FIFO travel time

function f, we define the FIFO inverse of g(t) as:

g-1(t) = max{ τ | g(τ) ≤ t }.    (3.4)

For any arc (i, j) ∈ A in a FIFO network, the arc arrival time inverse function aij-1(t) gives

the latest possible departure time along the arc for which arrival occurs at or before time

t. In any FIFO network, arc arrival time inverse functions will always exist, and will

themselves be non-decreasing functions of time; we therefore can define the inverse of

the arc travel time function dij(t) of an arc as dij-1(t) = aij-1(t) – t. It can be easily shown

that each dij-1(t) function will satisfy the FIFO property and will have the same form as

the original dij(t) function – it will be piece-wise linear with a finite number of pieces (no


more than twice as many pieces as the original function), where each piece spans a

duration of time of strictly positive extent. Taking the FIFO inverse of an arrival time

function twice yields, as would be expected, the original function.
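Under the piece-wise linear assumption, the FIFO inverse of equation (3.4) can be computed piece by piece. The sketch below uses an illustrative representation of our own (an arrival function given as breakpoints with linear interpolation between them, rather than the B, α, β storage of Section 3.1); it scans pieces from right to left so that, on a constant piece of the arrival function, the latest feasible departure time is returned, as the max in (3.4) requires.

```python
def fifo_inverse(breakpoints):
    """FIFO inverse of a non-decreasing piece-wise linear arrival function
    a(t), given as (t, a(t)) breakpoints with linear interpolation between
    them.  Returns a^-1(y) = max{ tau : a(tau) <= y } for y within range."""
    ts = [p[0] for p in breakpoints]
    ys = [p[1] for p in breakpoints]
    def inv(y):
        # Scan right to left: on a flat piece (constant arrival time), the
        # max in (3.4) selects the piece's right endpoint, i.e. the latest
        # departure whose arrival occurs at or before time y.
        for k in range(len(ts) - 1, 0, -1):
            t0, t1, y0, y1 = ts[k - 1], ts[k], ys[k - 1], ys[k]
            if y0 <= y:
                if y >= y1:
                    return t1  # everything on this piece arrives by time y
                return t0 + (y - y0) * (t1 - t0) / (y1 - y0)
        return ts[0]
    return inv

# Arrival function with a flat piece between departure times t = 1 and t = 2:
inv = fifo_inverse([(0, 1), (1, 2), (2, 2), (3, 4)])
```

The inverse travel time of the text then follows as dij-1(t) = inv(t) – t.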

Depending on the situation being modeled, it may be permissible to wait at some

locations in the network during travel along a path. In order to model a general waiting

policy in a dynamic network, we must specify two time-dependent quantities: bounds on

the length of time one may wait at each node, and the costs of waiting for different

lengths of time at each node. Let ubwi(t) be a non-negative-valued piece-wise linear

function which denotes the upper bound on the amount of time one may wait at node i

following an arrival at time t. We assume that these functions satisfy the FIFO property,

because it is sensible for the latest time until which one may wait at a node, t + ubwi(t), to be a

non-decreasing function of t. It is not necessary to specify lower bounds on waiting

times, since any amount of required waiting time at a node may be incorporated directly

into the arc travel time functions of the arcs directed out of that node. Finally, for each

node i ∈ N, we denote by WTi(t) the set of feasible waiting times {τ | 0 ≤ τ ≤ ubwi(t)} if

waiting begins at time t.

In general, the cost of waiting at a node is a function of two parameters: the starting time

and the duration of the interval of waiting. For simplicity, however, we focus in this

thesis only on waiting costs which adhere to a simpler structure: we say waiting costs are

memoryless if the cost of waiting beyond a particular time is not dependent on the

amount of waiting which has occurred up until that time. In this case, one can write the

cost of waiting at node i from time t until time t + τ as the difference between two

cumulative waiting costs, cwci(t + τ) – cwci(t), where

cwci(t) = ∫_0^t wi(τ) dτ,    (3.5)

and where wi(t) gives the unit-time waiting cost density function of node i. We assume

that the wi(t) functions are piece-wise constant and non-negative, so that the cumulative

waiting cost functions cwci(t) will be piece-wise linear, continuous, and non-decreasing,

and so the cost of waiting for any interval of time will always be non-negative.
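As a small illustration of equation (3.5) (the piece representation below is hypothetical, chosen only for this sketch), a piece-wise constant waiting cost density integrates to a piece-wise linear, continuous, non-decreasing cumulative cost, and the memoryless cost of a waiting interval is a difference of two cumulative values:

```python
def cumulative_waiting_cost(pieces):
    """cwc_i(t): integral of w_i from 0 to t, for a piece-wise constant
    density w_i given as (t_start, t_end, rate) triples listed in
    increasing order with non-negative rates."""
    def cwc(t):
        total = 0.0
        for t0, t1, rate in pieces:
            if t <= t0:
                break
            # accumulate the covered portion of this constant piece
            total += rate * (min(t, t1) - t0)
        return total
    return cwc

cwc = cumulative_waiting_cost([(0, 2, 1.0), (2, 5, 0.5)])
# memoryless cost of waiting from time 1 until time 3: cwc(3) - cwc(1)
```

Because the density is non-negative, any such difference cwci(t + τ) – cwci(t) with τ ≥ 0 is non-negative, as the text requires.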


Memoryless waiting costs are very common in dynamic network models. If the objective

of a dynamic shortest path problem is to find paths of minimum travel time, then waiting

costs will be memoryless because they will satisfy cwci(t) = t. As a final restriction, if

waiting is allowed anywhere in the network, and if G is not a FIFO network or if

minimum-cost paths are desired, then we require continuity of all network data functions.

If this restriction is not met, then an optimal solution may not exist. An example of such

a problem is a single arc network, where unbounded waiting is allowed at no cost at the

source, and where the travel cost of the single arc (i, j) is given by:

cij(t) = 1    if t ∈ (-∞, 0]
         t    if t ∈ (0, +∞)    (3.6)

In this case, there is no optimal solution. In practice, the continuity restriction is not

troublesome, since any discontinuity in a piece-wise linear function may be replaced by

an additional linear piece with an arbitrarily steep slope. For problems in which waiting

is prohibited, continuity of network data functions is not required.

3.3 Description of Problem Variants

Dynamic shortest path problems may be categorized into different variants based on the

criteria used to select optimal paths and on the desired configuration of source and

destination nodes and times. As an objective, it is often the case that we wish to find

paths of minimum travel time; in this case, cij(t) = dij(t), and we call the problem a

minimum-time dynamic shortest path problem. We refer to the more general problem of

finding paths of least cost as a minimum-cost problem. Optimal paths through a dynamic

network may contain cycles, depending on the network data. In FIFO networks, though,

we shall see that minimum-time paths will always be acyclic.

As with static shortest path problems, the fundamental dynamic shortest path problems

involve the computation of optimal paths between all nodes and either a single source or

single destination node. The addition of a time dimension, however, complicates the

description of dynamic problems since a “source” now comprises both a source node and

a set of feasible departure times from that node, and a “destination” consists of a node

along with the set of feasible arrival times at that node. Additionally, there is a greater


distinction in dynamic problems between computing optimal paths emanating from a

source node and computing optimal paths into a destination node. In the realm of static

networks, these two problems are made equivalent by reversing all arcs in the network; in

dynamic networks, the problems are still related but more remotely so, as there is often

no trivial way to transform one to the other. In FIFO networks, however, there is a greater

degree of symmetry between these two types of problems. We will discuss methods of

transforming between symmetric problems in FIFO networks in Section 4.4.

In this thesis, we consider two fundamental variants of the dynamic shortest path

problem, which we call the all-to-one and one-to-all problems. The all-to-one problem

involves the computation of optimal paths from all nodes and all departure times to a

single destination node (and an associated set of feasible arrival times at that destination

node). Symmetrically, the one-to-all problem entails the computation of optimal paths

from a single source node (and a set of feasible departure times) to all other nodes. There

are two common flavors of the one-to-all problem. The most common, which we simply

call the one-to-all problem, involves computing a single shortest path from the source

node to every other node, where departure time from the source is possibly constrained.

The other variant, which we call the one-to-all problem for all departure times, involves

computing separate shortest paths from the source to every other node for every feasible

departure time. An important result, which we discuss in Section 4.4, is the fact that in a

FIFO network, the minimum-time all-to-one problem is computationally equivalent to the

minimum-time one-to-all problem for all departure times.

The all-to-one problem is of benefit primarily to applications which control and optimize

the system-wide performance of a dynamic network, as its solution allows for

commodities located anywhere in time and space to navigate to a destination along an

optimal route. The one-to-all problem, conversely, is well-suited for applications which

compute optimal routes from the viewpoint of individual commodities traveling through

a dynamic network from specific starting locations. In this thesis we will develop

solution methods for both problems.


3.4 Solution Characterization

For the all-to-one problem, we designate a node d ∈ N as the destination node. We can

then characterize a solution to the all-to-one problem by a set of functions which describe

the cost and topology of optimal paths to d from all nodes and departure times. The

following functions will constitute the set of decision variables used when computing an

optimal solution to the all-to-one problem:

Ci(w)(t) : Cost of an optimal path to d departing from node i at time t, where waiting is allowed at node i before departure.

Ci(nw)(t) : Cost of an optimal path to d departing from node i at time t, where no waiting is allowed at node i before departure.

Ni(t) : Next node to visit along an optimal path to d from node i, time t.

Wi(t) : Amount of time to wait at node i, starting at time t, before departing for node Ni(t) on an optimal path to d.

Based on the information provided by these functions, it is a simple matter to trace an

optimal route from any starting node and departure time to the destination, and to

determine the cost of this route.
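The tracing procedure can be sketched as follows; all callables here are stand-ins for the piece-wise linear solution and network data functions that the algorithms actually produce, and the names are our own.

```python
def trace_route(i, t, N, W, arrival, d):
    """Follow an all-to-one solution from node i, departure time t, to the
    destination d.  N[i](t): next node; W[i](t): waiting time before
    departure; arrival[(i, j)](t): arc arrival time function a_ij(t).
    Returns the sequence of visited (node, time) pairs."""
    route = [(i, t)]
    while i != d:
        j = N[i](t)                        # next node on an optimal path
        t = arrival[(i, j)](t + W[i](t))   # wait, traverse the arc, arrive
        route.append((j, t))
        i = j
    return route

# Tiny static example: 1 -> 2 -> 3 with travel times 1 and 2, no waiting.
N = {1: lambda t: 2, 2: lambda t: 3}
W = {1: lambda t: 0.0, 2: lambda t: 0.0}
arrival = {(1, 2): lambda t: t + 1.0, (2, 3): lambda t: t + 2.0}
```

A symmetric backward trace using PNi(t) and PTi(t) recovers one-to-all paths.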

For the one-to-all problem, we designate one node s ∈ N as a source node. We will use

an equivalent but symmetric form of notation as in the all-to-one problem in order to

describe a solution to the one-to-all problem. Specifically, we characterize a solution by

a set of functions which specify the structure and cost of optimal paths from s to all nodes

and all arrival times:

Ci(w)(t) : Cost of an optimal path from s to node i and time t, where waiting is allowed at node i after arrival and up until time t.

Ci(nw)(t) : Cost of an optimal path from s arriving at node i at exactly time t.

PNi(t) : Preceding node along an optimal path from s to node i and time t.

PTi(t) : Departure time from the preceding node PNi(t) along an optimal path from s to node i and time t.

Wi(t) : Amount of waiting time to spend after arrival at node i up to time t on an optimal path from s.

Specification of both the previous node and previous departure time functions is

necessary in order to trace a path from any arbitrary destination node and arrival time

back to the source. It is almost always the case that in the one-to-all problem, we are

concerned with finding the best path to each node i ∈ N irrespective of arrival time. In


this case, we can therefore specify for each node i ∈ N the arrival time opti at that node

which yields the best possible path travel cost, where

opti = argmin_t Ci(w)(t)    ∀i ∈ N.    (3.7)

Given any destination node i ∈ N, one may use the preceding solution information to

easily determine the cost and structure of an optimal path from s to i.

Under the assumption of piece-wise linear network data functions, we shall see that the

path travel cost functions Ci(w)(t) and Ci(nw)(t), the waiting time functions Wi(t), and the

previous node departure time functions PTi(t) which comprise the solution to the one-to-

all and all-to-one problems will also be piece-wise linear. To describe the piece-wise

linear behavior of these solution functions, we define a linear node interval, or LNI, to be

a 3-tuple (i, ta, tb), where i ∈ N, and where (ta, tb] is an interval of time of maximal

possible extent during which Ni(t) (or PNi(t)) does not change with time, and during

which the remaining solution functions change as simple linear functions of time. For the

all-to-one problem and for all problems in FIFO networks, we shall see that LNIs will

always span a duration of time of strictly positive extent. However, LNIs comprising the

solution to a one-to-all problem may in some cases span only a single point in time – we

will call such LNIs singular. The LNI (i, ta, tb) chronologically preceding a singular LNI

(i, tb, tb) at some node i ∈ N is taken to represent the interval of time (ta, tb) which

contains neither of its two endpoints.
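A record-like structure makes the LNI definition concrete; the fields below are our own illustrative encoding of the 3-tuple (i, ta, tb) together with the per-interval solution data described above.

```python
from typing import NamedTuple

class LNI(NamedTuple):
    """Linear node interval (i, ta, tb): over departure times (ta, tb] the
    next-node function is constant and the travel cost function is a simple
    linear function of time.  Illustrative encoding only."""
    node: int
    ta: float
    tb: float
    next_node: int   # N_i(t), constant on the interval
    alpha: float     # linear coefficient of C_i(w)(t) on the interval
    beta: float      # constant term of C_i(w)(t) on the interval

    def cost(self, t):
        return self.alpha * t + self.beta

    def singular(self):
        # a singular LNI spans only the single point in time tb
        return self.ta == self.tb

example = LNI(node=1, ta=0.0, tb=2.0, next_node=2, alpha=1.5, beta=3.0)
```

An algorithm's output is then simply a list of such records per node, sorted by time.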

For each node, it will be possible to partition time so as to divide the piece-wise behavior

of departures from that node into a finite number of LNIs. The output of an all-to-one or

one-to-all dynamic shortest path algorithm will therefore consist of an enumeration of the

complete set of LNIs for every node i ∈ N, along with a specification of the solution

functions during each LNI. During the time extent of a particular LNI at some node i, we

say that node i exhibits linear departure behavior: arrivals or departures to or from the node during this interval follow the same incoming or outgoing arc, and experience an optimal path cost which changes as a simple linear function of time.


Chapter 4

Problem Formulation and Analysis

This chapter contains a mathematical formulation and optimality conditions for the all-to-

one and one-to-all continuous-time dynamic shortest path problems. Following this, we

establish some of the fundamental mathematical properties of solutions to these

problems, and we discuss several key properties of FIFO networks. We show that a

solution always exists for all variants of the dynamic shortest path problem, and that this

solution always consists of a finite number of linear node intervals. In the case of a FIFO

network, we will develop stronger bounds on the number of LNIs comprising the solution

to a minimum-time path problem. Finally, we discuss symmetries between different

variants of the minimum-time dynamic shortest path problem in a FIFO network – we

will show that the minimum-time all-to-one problem and the minimum-time one-to-all

problem for all departure times are computationally equivalent, so it will be necessary in

future algorithmic chapters only to develop solution algorithms for one of these two

problems.

4.1 Mathematical Formulation

In order to simplify our formulation of the all-to-one problem, we wish for the destination

node d to appear at the end of all optimal paths, but not as an intermediate node along

these paths. For minimum-time problems, this is not an issue. In minimum-cost

problems, however, we enforce this desired behavior by replacing node d with the

designation d′ and by adding a new artificial destination node d along with a zero-cost arc

(d′, d). We assume that arrivals to d at any point in time are allowed; if there are

restrictions on the set of feasible arrival times at d, we may model these by setting the

time-dependent travel cost of the arc (d′, d) so it incurs infinite travel cost at prohibited

arrival times. In general, one may only constrain the arrival time for minimum-cost or

non-FIFO minimum-time problems, since it is usually not possible to add such a

constraint while satisfying the FIFO property.


The optimality conditions for the general minimum-cost all-to-one dynamic shortest path

problem are as follows. A proof of necessity and sufficiency of these optimality

conditions appears at the end of this section. In order to ensure optimality, the solution

functions Ci(w)(t) and Ci(nw)(t) must satisfy:

Ci(w)(t) = min_{τ ∈ WTi(t)} { cwci(t + τ) – cwci(t) + Ci(nw)(t + τ) }    ∀i ∈ N,    (4.1)

Ci(nw)(t) = 0                                             if i = d
            min_{j ∈ Ai} { cij(t) + Cj(w)(t + dij(t)) }   if i ≠ d    ∀i ∈ N.    (4.2)

At optimality, the solution functions Wi(t) and Ni(t) are given by arguments respectively

minimizing equations (4.1) and (4.2). These functions are not necessarily unique, since

there may be several paths and waiting schedules which attain an optimal travel cost.

Wi(t) = argmin_{τ ∈ WTi(t)} { cwci(t + τ) – cwci(t) + Ci(nw)(t + τ) }    ∀i ∈ N,    (4.3)

Ni(t) = argmin_{j ∈ Ai} { cij(t) + Cj(w)(t + dij(t)) }    ∀i ∈ N – {d}.    (4.4)
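To make the recursion concrete, the toy computation below evaluates condition (4.2) at sampled departure times for a small acyclic network with waiting prohibited, so that WTi(t) = {0} and Ci(w) = Ci(nw). The network data is invented for illustration only; the algorithms of Chapters 6 and 7 instead manipulate the piece-wise linear functions themselves rather than point samples.

```python
# Hypothetical 3-node acyclic network with destination d = 3.
succ = {1: [2, 3], 2: [3], 3: []}                      # successor sets A_i
d_fn = {(1, 2): lambda t: 1.0, (1, 3): lambda t: 5.0,  # travel times d_ij(t)
        (2, 3): lambda t: 1.0}
c_fn = {(1, 2): lambda t: 1.0, (1, 3): lambda t: 4.0,  # travel costs c_ij(t)
        (2, 3): lambda t: 1.0}

def C(i, t):
    """Equation (4.2) with waiting prohibited: optimal cost to the
    destination 3 when departing node i at time t."""
    if i == 3:
        return 0.0
    return min(c_fn[(i, j)](t) + C(j, t + d_fn[(i, j)](t)) for j in succ[i])
```

Departing node 1 at time 0, the two-arc path 1-2-3 (cost 2) beats the direct arc (cost 4), and the recursion discovers this exactly as (4.2) prescribes.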

For the one-to-all problem, we take similar steps as in the all-to-one case in order to

prevent the source node s from appearing as an intermediate node in any optimal path, in

order to simplify the formulation of the problem. For minimum-cost one-to-all problems,

we therefore replace s with the designation s′ and add a new artificial source node s and a

zero-cost arc (s, s′). In order to restrict the set of feasible departure times from s we use

the same method as with the all-to-one problem, in which the time-dependent travel time

and cost of the arc (s, s′) is used to determine the set of feasible departure times. As in

the all-to-one problem, restriction of departure times in this fashion only applies to

minimum-cost or non-FIFO minimum-time problems. For the minimum-time one-to-all

problem in a FIFO network, it is necessary to consider only the single earliest possible

departure time, since we shall see in Section 4.3 that departures from later times will

never lead to earlier arrivals at any destination.

Optimality conditions for the one-to-all problem are similar to those of the all-to-one

problem. Since network data is stored in a so-called “forward-star” representation, from

which we cannot easily determine the set of departure times along a link that result in a

particular arrival time, we write the optimality conditions as a set of inequalities rather than


minimization expressions. If Ci(w)(t) and Ci(nw)(t) represent a feasible set of path costs,

then they must satisfy the following conditions for optimality:

Ci(w)(t) + cij(t) ≥ Cj(nw)(t + dij(t))    ∀(i, j) ∈ A,    (4.5)

Ci(nw)(t) + cwci(t + τ) – cwci(t) ≥ Ci(w)(t + τ)    ∀τ ∈ WTi(t), ∀i ∈ N – {s},    (4.6)

Cs(w)(t) = Cs(nw)(t) = 0.    (4.7)

The functions Wi(t), PNi(t), and PTi(t) must at optimality satisfy the following conditions

in a minimum-cost one-to-all problem:

Ci(nw)(t) + cwci(t + Wi(t)) – cwci(t) = Ci(w)(t + Wi(t))    ∀i ∈ N,    (4.8)

Cĩ(w)(PTi(t)) + cĩ,i(PTi(t)) = Ci(nw)(t), where ĩ = PNi(t),    ∀i ∈ N – {s}.    (4.9)

Proposition 4.1: Conditions (4.1) and (4.2) are necessary and sufficient for optimality

for the all-to-one dynamic shortest path problem, and conditions (4.5), (4.6), and (4.7)

are necessary and sufficient for optimality for the one-to-all problem.

Proof: It is a simple matter to show that these conditions are necessary for optimality. If

any one of these conditions is violated, then the particular arc or waiting time causing the

violation may be utilized as part of a new path in order to construct a better solution. For

example, if condition (4.2) fails to hold for some node i ∈ N and time t, then this means

there is some outgoing node j ∈ Ai which will lead to a better path than Ni(t). Since

violation of these conditions implies non-optimality, the conditions are therefore

necessary.

In order to show sufficiency, we examine first the case of the all-to-one problem.

Assume conditions (4.1) and (4.2) are satisfied by some functions Ci(w)(t) and Ci(nw)(t)

which represent a feasible solution. Consider any feasible path from some node i ∈ N

and some departure time t to the destination node d, having the form P = (i1, t1, w1) - (i2,

t2, w2) - … - (ik, tk, wk), where each ordered triple represents a node, an initial time at that

node, and a waiting time before departure from that node. We have i1 = i, t1 = t, ik = d,

wk = 0, and for any valid path we must have tj+1 = tj + wj + dij,ij+1(tj + wj) for all j ∈ {1, 2, …, k – 1}. Assuming (4.1) and (4.2) are satisfied, we have, respectively,

2,…, k – 1. Assuming (4.1) and (4.2) are satisfied, we have, respectively,


Ci1(w)(t1) ≤ cwci1(t1 + w1) – cwci1(t1) + Ci1(nw)(t1 + w1)    (4.10)

Ci1(w)(t1) ≤ cwci1(t1 + w1) – cwci1(t1) + ci1,i2(t1 + w1) + Ci2(w)(t2)    (4.11)

In general, after expanding the Ci2(w)(t2) term in (4.11), followed by Ci3(w)(t3), Ci4(w)(t4), etc. along the entire path, since Cik(w)(tk) = 0 the right-hand side of the inequality in (4.11)

will expand into the cost of the entire path P. Hence, Ci(w)(t) is a lower bound on the cost

of any arbitrary path from node i and time t to the destination, and since Ci(w)(t)

represents the cost of some feasible path, we conclude that the solution represented by

Ci(w)(t) and Ci(nw)(t) is optimal. A similar, symmetric argument applies to show that

conditions (4.5), (4.6), and (4.7) are sufficient for optimality for the one-to-all problem.

4.2 Solution Properties

We proceed to establish several useful mathematical properties of solutions to the all-to-

one and one-to-all dynamic shortest path problem in a piece-wise linear continuous-time

network.

Lemma 4.1: There always exists a finite time t+, beyond which all nodes exhibit linear

departure behavior. Similarly, there always exists a finite time t –, prior to which all

nodes exhibit linear departure behavior.

Proof: We prove the lemma for the all-to-one problem; the same argument applies to the

one-to-all problem. Consider first the case of constructing a value of t+. We focus on the

region of time greater than all piece-wise boundaries of all network data functions dij(t),

cij(t), wi(t), and cwci(t), so these functions will behave as simple linear functions for the

interval of time under consideration. We further restrict our focus within this interval to a

subinterval of time t > t0, where t0 is chosen such that cij(t) ≥ 0 for all arcs (i, j) ∈ A.

Such a subinterval always exists since we have assumed that all arc travel cost functions

must be either increasing or static and non-negative as time approaches infinity. For any

departure at a time t > t0, there exists an acyclic optimal path to the destination which

involves no waiting, since the introduction of waiting or a cycle can never decrease the


cost of any path departing at a time t > t0. For any node i ∈ N, we can therefore

enumerate the entire set of feasible acyclic paths from i to d. There will be a finite

number of such paths, and each of these paths will have a simple linear time-dependent

path travel cost for departure times t > t0. The optimal path travel cost from i to d for

departures within this interval will be the minimum of this finite set of linear functions,

and since the minimum of such a set of functions will be a piece-wise linear function with

a finite number of pieces, there must exist some departure time ti+ after which the same

path remains optimal forever from i to d. We can therefore pick t+ to be the maximum of

these ti+ values.

The existence of the time t – is argued from the fact that we know by assumption that

there is some time tstatic before which the network data functions remain static. One may

always find acyclic optimal paths which involve no waiting that arrive within this region

of time, since the presence of waiting or a cycle in any path arriving within this time

interval can never decrease the cost of the path. We thus have an upper bound of
∑_{(i,j) ∈ A} β(dij, 1) on the length of any path which arrives at a time prior to tstatic, so we
can accordingly set t – to the value tstatic – ∑_{(i,j) ∈ A} β(dij, 1).
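The step in the argument above — that the pointwise minimum of finitely many linear cost functions is eventually won by a single line, yielding the time ti+ after which one path stays optimal — can be sketched in code. This is an illustrative helper of ours, not part of the thesis; lines are represented as (slope, intercept) pairs.

```python
def eventual_minimum(lines):
    """Given linear cost functions c(t) = a*t + b as (slope, intercept)
    pairs, return (index, t_plus): the line that is minimal for every
    t > t_plus. As t grows, the smallest slope wins (ties broken by the
    smaller intercept), mirroring the choice of ti+ in Lemma 4.1."""
    best = min(range(len(lines)), key=lambda i: lines[i])
    a_star, b_star = lines[best]
    t_plus = float("-inf")
    for i, (a, b) in enumerate(lines):
        if i == best or a == a_star:   # parallel lines never overtake
            continue
        # last time at which a*t + b crosses a_star*t + b_star
        t_plus = max(t_plus, (b_star - b) / (a - a_star))
    return best, t_plus
```

Taking t+ to be the maximum of these per-node values reproduces the construction in the proof.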

Lemma 4.2: The time-dependent function giving the optimal travel cost of any particular

path P as a function of departure time will always be piece-wise linear, and consist of a

finite number of linear pieces, each spanning an extent of time of strictly positive

duration. The function giving the optimal travel cost of any particular path P as a

function of arrival time will always be piece-wise linear and consist of a finite number of

linear pieces, where some of these pieces may span an extent of time of zero duration.

Proof: Suppose that a path P consists of the sequence of nodes i1, i2, …, ik. The time-

dependent travel cost along path P as a function of departure time will depend on the

waiting times spent at the nodes along the path, and the optimal time-dependent travel

cost function is obtained by minimizing over the set of all such feasible waiting times.

For simplicity, we will say that a function is special if it is piece-wise linear and consists


of a finite number of linear pieces, each spanning an extent of time of strictly positive

duration. We prove the fact that the optimal time-dependent travel cost function (as a

function of departure time) along any path P will be special by induction on the terminal

sub-paths of P of increasing number of arcs: ik, ik-1 – ik, ik-2 – ik-1 – ik, etc. We claim that

each of these sub-paths is special. The induction hypothesis trivially holds for the path

consisting of the single node ik, since this path has zero travel cost. Suppose now that the

induction hypothesis holds for some sub-path Pr+1 = ir+1, …, ik. We can write the time-
dependent optimal travel cost of the sub-path Pr = ir, ir+1, …, ik as a function of departure

time using (4.1) and (4.2) as follows. We denote the optimal travel cost of such a sub-

path at departure time t by TCp,r(t).

TC_{p,r}(t) = min_{τ ∈ WT_{i_r}(t)} [ cwc_{i_r}(t + τ) – cwc_{i_r}(t) + c_{i_r i_{r+1}}(t + τ) + TC_{p,r+1}(a_{i_r i_{r+1}}(t + τ)) ]        (4.12)

Denote the composition of the path travel cost function TCp,r+1 and the arc arrival time
function a_{i_r i_{r+1}} as TC′p,r+1(t). This composed function will be the composition of two
special functions, so TC′p,r+1(t) will also be special. We can then write Equation (4.12) in
the following simple form:

TC_{p,r}(t) = min_{τ ∈ WT_{i_r}(t)} f_r(t + τ) – cwc_{i_r}(t),        (4.13)

where f_r(t) = cwc_{i_r}(t) + c_{i_r i_{r+1}}(t) + TC′p,r+1(t) is a sum of special functions, and therefore also

special. If waiting is prohibited at all nodes, then we will have TCp,r(t) = fr(t) – cwc_{i_r}(t),
which will be special. If waiting is allowed at some nodes, then by the continuity
assumption fr(t) will be continuous, and the minimum of fr(t + τ) over the window
[t, t + ubwi(t)], where the endpoints of this window are continuous non-decreasing
functions of t, will also be a special function. We therefore have the fact that TCp,r(t) satisfies the

induction hypothesis, and by induction, the optimal time-dependent travel cost of the

entire path TCp(t) = TCp,1(t) must be special.

Once the optimal time-dependent path travel cost function TCp(t) is determined as a

function of departure time, we can define an associated travel time function TTp(t) which

gives the time required to traverse path P, departing at time t, using an appropriate
waiting schedule such that we can achieve the minimal path traversal cost TCp(t). It


can be shown by induction, in the same way as in the previous argument, that the
TTp(t) function will also be special. We can then write the optimal time-dependent path

travel cost function of path P as a function of arrival time as follows. We denote by

TCp(arr)(t) the optimal path cost as a function of an arrival time t.

TC_p^{(arr)}(t) = min_{τ : τ + TT_p(τ) = t} TC_p(τ)        (4.14)

Due to the structure of this minimization and the fact that TCp(t) and TTp(t) are special,

the function TCp(arr)(t) will be piece-wise linear with a finite number of pieces, but some

of these pieces may span an extent of time of zero duration.

Proposition 4.2: An optimal solution always exists for both the all-to-one and one-to-all

dynamic shortest path problems. Furthermore, this solution will always consist of a

finite number of LNIs, and for the all-to-one problem all LNIs will be nonsingular.

Proof: We prove the proposition for the all-to-one problem; the same argument

symmetrically applies to the one-to-all problem. We know that there exist finite times t+

and t – as defined in Lemma 4.1. Optimal paths departing at times t > t+ will consist of

no more than n – 1 arcs, since optimal paths are acyclic within this region of time.

Similarly, any optimal path which arrives at the destination at a time t < t – will consist of

no more than n – 1 arcs. By assumption, there exists a positive ε such that ε ≤ dij(t) for all

(i, j) ∈ A. We therefore have a finite bound of L = 2n + (t+ – t –)/ε arcs in any optimal

path. Consider any particular node i ∈ N, and consider the finite set of all paths of L or

fewer arcs from i to the destination node d. For every path P within this set, by Lemma

4.2 we know that the optimal time-dependent travel cost along P will be a piece-wise

linear function of the departure time from node i consisting of a finite number of pieces,

each spanning a positive extent of time. For the all-to-one problem, the function Ci(w)(t)

giving the optimal time-dependent path travel cost from i to d as a function of departure

time is the minimum of a finite number of such piece-wise linear functions, and we can

therefore conclude that the optimal path travel cost function Ci(w)(t) always a) exists, b)
consists of only a finite number of linear pieces, and c) consists only of pieces which
span an extent of time of strictly positive duration. Also by Lemma 4.2, the time-


dependent optimal travel cost of P as a function of arrival time is piece-wise linear with a

finite number of pieces, but some of these pieces may have zero extent. For the one-to-all

problem, the function Ci(w)(t) is the minimum of a finite set of functions which are piece-

wise linear with a finite number of pieces, and therefore Ci(w)(t) will be piece-wise linear

and contain a finite number of linear pieces.

As with the static shortest path problem, the solution to a dynamic shortest path problem

will yield unique path costs Ci(w)(t) and Ci(nw)(t), but the actual paths which constitute

such a solution may not necessarily be unique.

4.3 Properties of FIFO Networks

It is possible to develop efficient algorithms to solve dynamic shortest path problems in

FIFO networks because these networks satisfy a number of important properties which

simplify many of the computations related to finding shortest paths. We describe some

of these properties as lemmas below.

Lemma 4.3: For any path through a FIFO network, the function giving the arrival time

at the end of the path as a function of departure time at the start of the path is non-

decreasing.

Proof: The arrival time function of a path is equal to the composition of the arrival time

functions of the arcs comprising the path. Since arc arrival time functions are non-

decreasing in a FIFO network, so are path travel time functions, since the composition of

any set of non-decreasing functions yields a non-decreasing function.

Lemma 4.4: Waiting in a FIFO network never decreases the arrival time at the end of

any path.

Proof: Since waiting is equivalent to a delay in the departure time along some path, and

since path arrival time is a non-decreasing function of departure time by Lemma 4.3, we

see that waiting can never lead to a decrease in the arrival time of any path.


Lemma 4.5: In a FIFO network, one can always find minimum-time paths which are

acyclic.

Proof: Suppose a minimum-time path contains a cycle. This implies that some node i ∈

N is visited at two different times t1 and t2, where t1 < t2. However, a path of equivalent

travel time is obtained by removing the cycle, and by simply waiting at node i from time

t1 until time t2 before continuing along the remainder of the path. Using Lemma 4.4, we

conclude that the same path with the waiting interval removed, which is equivalent to the

original path with the cycle removed, will have an arrival time no later than that of the

original path. Since the removal of any cycle in a path never leads to an increase in

arrival time, we can always compute acyclic minimum-time paths.

Lemma 4.6: A minimum-time problem in a non-FIFO network can be transformed into

an equivalent FIFO minimum-time problem, if unbounded waiting is allowed everywhere

in the network.

Proof: The following transformation of the arc travel time functions produces the desired

effect by selecting the best arrival time attainable by the combination of waiting

followed by travel along an arc:

d*_ij(t) = min_{τ ≥ 0} [ τ + d_ij(t + τ) ]        ∀(i, j) ∈ A        (4.15)

A minimum is guaranteed to exist for Equation (4.15) since we have assumed that arc

travel times are continuous in non-FIFO networks in which waiting is allowed. In the

absence of this assumption, the minimum may not be attainable.
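A numerical sketch of the transformation in Equation (4.15): for a continuous travel time function d(t), search over waiting times τ ≥ 0 on a sampled grid. The truncation horizon and step size are assumptions of ours; an exact implementation would instead operate directly on the piece-wise linear pieces of dij(t).

```python
def fifoize(d, t, horizon=100.0, step=0.01):
    """Approximate d*(t) = min over tau >= 0 of [tau + d(t + tau)],
    the FIFO-ized arc travel time of Lemma 4.6. Assumes d is continuous
    and eventually non-decreasing, so the search can stop at `horizon`."""
    best = d(t)                          # tau = 0: depart immediately
    steps = int(horizon / step)
    for k in range(1, steps + 1):
        tau = k * step
        best = min(best, tau + d(t + tau))
    return best
```

For a FIFO arc the minimum is attained at τ = 0, so the transformation leaves the arc unchanged; for a non-FIFO arc it captures the benefit of waiting for a later, cheaper departure.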

Lemma 4.7: For any arrival time ta at the end of a path, there will exist a

corresponding contiguous interval of departure times which will result in an arrival at

exactly time ta.

Proof: Follows directly from non-decreasing property of path travel time functions,

shown in Lemma 4.3.


Lemma 4.8: For any given departure time td at a source node s in a FIFO network G,

one can find a shortest path tree embedded in G directed out of s which consists of

shortest paths originating at s at time td.

Corollary: For any arrival time ta at a destination node d in a FIFO network G, one can

find a shortest path tree embedded in G directed into d which consists of shortest paths

arriving at d at time ta.

Proof: Follows from the fact that one can always find acyclic shortest paths, due to

Lemma 4.5. Note that for the corollary, the tree of shortest paths arriving at a destination

node at a particular time ta may not include all nodes i ∈ N, since for some nodes there

may not exist a departure time from which an optimal path to the destination arrives at

time ta. Sections 4.4 and 5.1 discuss techniques and results which may be applied to

compute these shortest path trees.

For the next lemma, we will argue bounds on the number of LNI boundaries at each node

which may be caused by each piece-wise linear boundary of each arc travel time

function. In saying that a piece-wise boundary of an arc travel-time function causes an

LNI boundary, we mean that removal of the arc travel time boundary would cause the

removal of the LNI boundary. One can intuitively visualize this causality by considering

a linear piece of an arc travel time function to be a taut string. Pulling on a point in the

middle of the string results in the addition of a new piece-wise boundary at that point, and

any LNI boundary which is created or affected by this pulling is considered to be caused

by this new piece-wise boundary.

Lemma 4.9: In a FIFO network, a piece-wise boundary time t at which some arc travel

time function dij(t) improves may cause at most one LNI boundary at each node i ∈ N. A

piece-wise boundary t at which some arc travel time function dij(t) worsens may cause at

most two LNI boundaries at each node.

Proof: Define Pab to be the set of paths connecting nodes a and b. Let TTp(t) denote the

travel time along path p departing at time t, and let TTab(t) denote travel time of a


minimal-time path from node a to node b, departing from node a at time t. Consider an

arbitrary node k ∈ N, and an arbitrary arc (i, j) ∈ A where the arc travel time dij(t) either

improves or worsens at some time t* . The optimal path travel time from k to the

destination node d, Ck(nw)(t) can be expressed as the minimum travel time over paths

containing (i, j) and over paths which do not contain (i, j), which we define as Ck(1)(t) and

Ck(2)(t) respectively:

C_k^{(nw)}(t) = min { C_k^{(1)}(t), C_k^{(2)}(t) }        (4.16)

C_k^{(1)}(t) = min_{p ∈ P_kd : (i,j) ∈ p} TT_p(t)        (4.17)

C_k^{(2)}(t) = min_{p ∈ P_kd : (i,j) ∉ p} TT_p(t)        (4.18)

Since the FIFO property is satisfied, we know by Lemma 4.5 that all minimum-time

paths are acyclic, so the arc (i, j) appears at most once in any optimal path, and we can

rewrite Ck(1)(t) as:

C_k^{(1)}(t) = TT_ki(t) + d_ij(t + TT_ki(t)) + TT_jd(t + TT_ki(t) + d_ij(t + TT_ki(t)))        (4.19)

The piece-wise linear function t + TTki(t) is equal to the minimum over the set of paths p

∈ Pki of the function t + TTp(t), which by the FIFO property must be non-decreasing;

hence, the function t + TTki(t) is also non-decreasing, since the minimum of a set of non-

decreasing functions is also a non-decreasing function. The piece-wise linear function

f(t) = t + TTki(t) + dij (t + TTki(t)) is also non-decreasing since it is the composition of two

non-decreasing functions: t + dij(t) and t + TTki(t). The presence of a piece-wise

boundary at time t* where dij(t) improves or worsens will cause at most one piece-wise

boundary where f(t) respectively improves or worsens, where this boundary is at a time τ

determined by

τ = (t + TT_ki(t))^{-1}(t*),        (4.20)

which is nothing more than the FIFO inverse of the function t + TTki(t) evaluated at t* .

The piece-wise linear arrival time function t + Ck(1)(t) is given by the composition of the

functions t + TTkd(t) and f(t), both non-decreasing, and is therefore also non-decreasing.

Furthermore, an improvement or worsening of f(t) at time t = τ will cause at most one


respective boundary where t + Ck(1)(t) respectively improves or worsens, also at time t =

τ. If the function t + Ck(1)(t) improves or worsens at some time t = τ, then so will the

function Ck(1)(t) by itself. We therefore conclude that the presence of a piece-wise

boundary at time t* where dij(t) improves or worsens will cause at most one piece-wise

boundary where Ck(1)(t) respectively either improves or worsens, where this boundary is

at a time τ determined by Equation (4.20). The function Ck(2)(t) will be unaffected by all

piece-wise boundaries of dij(t).

A single piece-wise boundary at which the function Ck(1)(t) improves will cause at most

one LNI boundary (an improvement) at node k. At the point in time t = τ where the

improvement of Ck(1)(t) occurs, either

(i) Ck(1)(τ) < Ck(2)(τ), or
(ii) Ck(1)(τ) ≥ Ck(2)(τ).

In case (i), due to Equation (4.16), Ck(nw)(t) will improve at time τ with Nk(t) remaining

unchanged. In case (ii), the piece-wise improvement of Ck(1)(t) might cause Ck(1)(t) to
begin decreasing until it eventually overtakes Ck(2)(t), becoming the minimum at a later

point in time, and resulting in an improvement in Ck(nw)(t) at that later time. In both cases

(i) and (ii), the improvement of Ck(1)(t) causes at most only one LNI boundary at node k

because, after this boundary, Ck(1)(t) will be linearly decreasing below Ck(2)(t), and an

additional piece-wise boundary (an improvement) of some arc travel time function will

be necessary in order to cause another improvement to Ck(1)(t), and therefore also to

Ck(nw)(t).

If the function Ck(1)(t) worsens at some point in time t = τ, we can apply the same

reasoning. In case (ii), an LNI boundary at node k will not be caused as a result of Ck(1)(t)

worsening at t = τ, since this can only result in Ck(1)(τ) becoming even less optimal. In

case (i), a worsening of Ck(1)(t) at t = τ will cause no more than two LNI boundaries at

node k: one of these boundaries occurs immediately at time t = τ since Ck(nw)(t) will

worsen at t = τ. The second LNI boundary may occur due to the fact that Ck(1)(t) will

start increasing faster or decreasing slower after the first LNI boundary, resulting in

Ck(2)(t) eventually becoming the minimum at some later point in time, and resulting in


a corresponding improvement in Ck(nw)(t) at this later point in time. Therefore, we have

shown that a single piece-wise boundary of an arbitrary arc travel time function can cause

at most three LNI boundaries at every node k ∈ N, since a single piece-wise arc travel

time function boundary may consist of both an improvement and a worsening of the arc

travel time function.

Lemma 4.10: For each node i ∈ N, there will be at most 3P* LNIs comprising a solution

to the minimum-time all-to-one dynamic shortest path problem in a FIFO network.

Proof: Due to Lemma 4.9, we can bound the number of LNI boundaries at each node in

terms of the number of piece-wise boundaries in the arc travel time functions of each arc

(i, j) ∈ A. There are P* - m piece-wise arc travel time boundaries, each of which

represents an improvement, a worsening, or both. By applying Lemma 4.9 we establish

the fact that there must be no more than 3P* LNIs at any given node which are caused by

these piece-wise arc travel time boundary points.

However, we argue that every boundary between two LNIs must be caused by some arc

travel time boundary point. The presence of an LNI boundary at some point in time t0

indicates that there is an intersection at this time where two path travel time functions

cross. At this intersection there is a switch from one optimal path to another. In order for

such an intersection to occur, the slope of at least one of the path travel time functions

must be nonzero. However, since we assume that all path travel time functions are

initially constant (at time t = -∞), the cause for the intersection can be attributed to some

piece-wise boundary point in some arc travel time function which arranged the slopes of

two path travel times in a configuration such that they would intersect at time t0. Since

all of the LNI boundaries in the solution functions are caused by piece-wise arc travel

time boundaries, the lemma is therefore established.

4.4 Symmetry and Equivalence Among Minimum-Time FIFO Problems

Within the realm of minimum-time problems in FIFO networks, there is a greater

symmetry between the all-to-one problem and the one-to-all problem. In this section we


discuss methods of transforming between minimum-time FIFO problems which are

symmetric in time. Some of the transformations presented below have been

independently proposed by Daganzo [8].

An alternative way of describing the minimum-time one-to-all problem is to call it a

minimum arrival time problem, for which one must find a set of paths which leave a

designated source node s ∈ N at or after a particular departure time td, which reach each

node i ∈ N at the earliest possible arrival time. While the all-to-one problem is not

completely symmetric in time with respect to this problem, a close variant, which we call

the maximum departure time problem, is exactly symmetric. The maximum departure

time problem involves finding a set of paths which arrive at a designated destination node

d ∈ N at or before some particular arrival time ta which leave each node i ∈ N at the latest

possible departure time. The discussion of these two problems in this subsection will

apply only within the context of FIFO networks.

We now develop a simplified alternative formulation for the minimum arrival time

problem and the maximum departure time problem. For the minimum arrival time

problem, we let EAi(td) denote the earliest arrival time at node i if one departs from the

source node s at or after time td, and we let PNi(td) denote the previous node along a path

to i which arrives at time EAi(td). Similarly, for the maximum departure time problem,

we let LDi(ta) denote the latest possible departure time from node i if one wishes to arrive

at the destination node d at or before time ta, and let Ni(ta) denote the next node along

such a path. One can write optimality conditions for the minimum arrival time problem

as follows. These conditions are well-known, as the minimum arrival time (i.e.

minimum-time one-to-all) problem in a FIFO network has been well-studied in the

literature.

EA_j(t_d) = t_d                                  if j = s
EA_j(t_d) = min_{i ∈ B_j} a_ij(EA_i(t_d))        if j ≠ s        ∀j ∈ N,        (4.21)

PN_j(t_d) = argmin_{i ∈ B_j} a_ij(EA_i(t_d))        ∀j ∈ N – s.        (4.22)
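As an illustration, conditions (4.21) can be checked directly for a candidate labeling. The helper below is a sketch of ours, not part of the thesis; `a` maps each arc (i, j) to its arrival time function aij(t), and isolated nodes with no incoming arcs are skipped.

```python
def satisfies_conditions(nodes, arcs, a, s, td, EA):
    """Verify optimality conditions (4.21): EA[s] == td, and for every
    other node j with incoming arcs, EA[j] equals the minimum of
    a_ij(EA[i]) over the arcs (i, j) entering j."""
    if EA[s] != td:
        return False
    for j in nodes:
        if j == s:
            continue
        incoming = [a[(i, k)](EA[i]) for (i, k) in arcs if k == j]
        if incoming and EA[j] != min(incoming):
            return False
    return True
```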


Similarly, one can write the symmetric optimality conditions for the maximum departure

time problem as:

LD_i(t_a) = t_a                                      if i = d
LD_i(t_a) = max_{j ∈ A_i} a_ij^{-1}(LD_j(t_a))       if i ≠ d        ∀i ∈ N,        (4.23)

N_i(t_a) = argmax_{j ∈ A_i} a_ij^{-1}(LD_j(t_a))        ∀i ∈ N – d.        (4.24)

In the above expressions, the function aij-1(t) denotes the FIFO inverse of the arc arrival

time function aij(t). We now discuss the possibility of transformations between the one-

to-all (minimum arrival time), maximum departure time, and all-to-one problems.

Lemma 4.11: It is possible to transform between instances (and solutions) of the

minimum arrival time and maximum departure time problems.

Corollary: In a FIFO network, it is possible to transform between instances (and

solutions) of the minimum arrival time problem for all departure times (i.e. the minimum-

time one-to-all problem for all departure times), and the maximum departure time

problem for all arrival times.

Proof: The following operations will achieve a transformation between the minimum

arrival time problem and its solution and the maximum departure time problem:

(i) Reverse the direction of all arcs in the network
(ii) Swap the source node s and the destination node d
(iii) Swap the source departure time td with the destination arrival time ta
(iv) Let PNi(t) = Ni(t) for all i ∈ N
(v) Let EAi(t) = –LDi(t) for all i ∈ N
(vi) Replace each arc arrival time function aij(t) with –aij^{-1}(–t) for all (i, j) ∈ A

Steps (v) and (vi) above actually represent a reversal in the direction of time. The

preceding steps will transform the optimality conditions (4.21) and (4.22) of the

minimum arrival time problem into optimality conditions (4.23) and (4.24) of the

maximum departure time problem and vice versa. Therefore, any solution which satisfies

the optimality conditions for one of the problems prior to transformation will satisfy the

optimality conditions for the other problem after transformation. As the minimum arrival


time problem for all departure times and the maximum departure time problem for all

arrival times only involves computing EAi(t) or LDi(t) respectively as functions of time

rather than for a particular value of time, the above transformation still applies to these

problems.
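Steps (i) and (vi) of the transformation can be sketched as follows. Here `a_inv` maps each arc to the FIFO inverse of its arrival time function; the function name and the dictionary-of-functions representation are illustrative assumptions of ours.

```python
def reverse_arcs_and_time(arcs, a_inv):
    """Apply steps (i) and (vi) of Lemma 4.11: reverse every arc (i, j)
    to (j, i), and replace its arrival time function aij(t) with
    -aij^{-1}(-t), which reverses the direction of time."""
    new_arcs = [(j, i) for (i, j) in arcs]
    new_a = {(j, i): (lambda t, f=a_inv[(i, j)]: -f(-t))
             for (i, j) in arcs}
    return new_arcs, new_a
```

For a static arc with arrival function a(t) = t + 2 (so a^{-1}(t) = t – 2), the reversed arc again has arrival function t + 2, as one would expect of a time-reversed constant travel time.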

Lemma 4.12: In a FIFO network, it is possible to transform between instances (and

solutions) of the maximum departure time problem for all arrival times, and the

minimum-time all-to-one problem.

Proof: A transformation between these two problems will not change the arc travel time

functions. For the minimum-time all-to-one problem in a FIFO network, the function

Ci(nw)(t) gives the travel time of the optimal path to the destination from node i, departing

at time t. Since by Lemma 4.4, waiting is never beneficial in FIFO networks, we will

have Ci(w)(t) = Ci(nw)(t) and Wi(t) = 0, so only the functions Ci(nw)(t) and Ni(t) are

necessary to specify a solution to the all-to-one problem. Additionally, in a FIFO

network, since path travel times satisfy the FIFO property, the function f(t) = t + Ci(nw)(t)

will be non-decreasing and we can therefore construct its FIFO inverse f -1(t). Similarly,

since the function LDi(t) will be a non-decreasing function for all nodes, we can

construct its FIFO inverse LDi-1(t). In order for the solution to a minimum-time all-to-

one problem to be consistent with the solution of a maximum-departure time problem for

all arrival times, we must satisfy LDi(t + Ci(nw)(t)) = t. This naturally leads to the

following transformation between the decision variables representing solutions to the two

problems:

LD_i(t) = f^{-1}(t), where f(t) = t + C_i^{(nw)}(t)        ∀i ∈ N        (4.25)

C_i^{(nw)}(t) = LD_i^{-1}(t) – t        ∀i ∈ N        (4.26)

It can be shown that decision variables satisfying the optimality conditions of one

problem prior to transformation will satisfy the optimality conditions of the other

problem after transformation.
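Constructing the FIFO inverse f^{-1} required by Equation (4.25) is straightforward for a non-decreasing piece-wise linear function. The breakpoint-list representation below is an assumption of ours:

```python
import bisect

def fifo_inverse(xs, ys, t):
    """Evaluate the FIFO inverse of the non-decreasing piece-wise linear
    function through breakpoints (xs[i], ys[i]): the largest x with
    f(x) <= t. Assumes ys[0] <= t <= ys[-1]."""
    k = bisect.bisect_right(ys, t) - 1   # last breakpoint with ys[k] <= t
    if k == len(xs) - 1:
        return xs[-1]
    # within piece k we have ys[k] <= t < ys[k+1], so the piece is rising
    frac = (t - ys[k]) / (ys[k + 1] - ys[k])
    return xs[k] + frac * (xs[k + 1] - xs[k])
```

Choosing the largest such x is exactly the FIFO inverse convention: a flat piece (an interval of departure times with the same arrival time) maps to its right endpoint, the latest departure achieving that arrival.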


Proposition 4.3: Any algorithm which solves the minimum-time all-to-one problem in a

FIFO network can be used to solve the minimum-time one-to-all problem for all

departure times in a FIFO network.

Corollary: Any algorithm which solves the minimum-time one-to-all problem for all

departure times in a FIFO network can be used to solve the minimum-time all-to-one

problem in a FIFO network.

Proof: Follows directly from Lemmas 4.11 and 4.12. One may transform between

instances of the minimum-time one-to-all problem and instances of the all-to-one

problem, solve a problem, and then transform the solution back into the form desired for

the original problem. Alternatively, one may modify the workings of an all-to-one
algorithm, or of a one-to-all algorithm for all departure times, so that these transformations

are implicitly represented within the algorithm.


Chapter 5

Preliminary Algorithmic Results for Minimum-Time FIFO Problems

A historic result in the development of dynamic shortest path algorithms has been the fact

that the minimum-time one-to-all problem in FIFO networks is solvable by a trivial

adaptation of any static label-setting or label-correcting shortest path algorithm. Details

of this solution technique are given in this chapter. Additionally, we develop a simple,

generic method for adapting any minimum-time FIFO all-to-one problem so that it may

be solved efficiently in parallel using existing serial algorithms. Since the minimum-time

all-to-one problem has been shown in Section 4.4 to be computationally equivalent to the

minimum-time one-to-all problem for all departure times in a FIFO network, the parallel

techniques derived within this chapter will apply to any algorithm which solves either of

these two problems.

5.1 FIFO Minimum-Time One-to-All Solution Algorithm

To solve the FIFO minimum-time one-to-all problem for a single departure time, we must

compute the optimal path to every node i ∈ N originating from a single source node

s ∈ N and a given departure time td. This problem is the same as the earliest arrival time

problem discussed in Section 4.4. In this section, we present a well-known result from

the literature, due to Dreyfus [10], Ahn and Shin [1], and Kaufman and Smith [14], which

allows this problem to be solved as efficiently as a static shortest path problem.

In order to describe the solution to a minimum-time one-to-all problem in a FIFO

network, it is only necessary to specify a single scalar label for each node rather than a

label which is a function of time, for the following reasons. Even if waiting is allowed at

the source node, we know that departure from the source must occur at time exactly td

since by Lemma 4.4 waiting at the source node will never decrease the travel time of any

path. By the same reasoning, one can argue that it is only necessary to keep track of the


earliest possible arrival time at each node, which we denote EAi in keeping with the

notation from Section 4.4. Commodities traveling along minimum-time paths from the

source node at time td will only visit each node i ∈ N at time EAi. As discussed in

Section 4.1, the union of the set of optimal paths originating at node s and time td will

form a shortest path tree embedded in G. We denote by PNj the predecessor node of a

node j ∈ N along this tree.

Equations (4.21) and (4.22) from Section 4.4 give the optimality conditions for the

minimum-time one-to-all problem. These conditions are quite similar to the optimality

conditions of a static shortest path problem, and indeed, it is possible to modify any

labeling static shortest path algorithm so that it is based on this extended dynamic

formulation. Algorithm 1 is an example of such a modified static algorithm. For the

results of computational testing of the performance of dynamic adaptations of different

types of static algorithms, the reader is referred to [5].

Initialization:
    ∀i ∈ N: opti ← ∞, PNi ← ∅
    opts ← td, UnSetNodes ← N

Main Loop:
    For k ← 1 .. n – 1
        i ← Select_Mincost_Node(UnSetNodes, opt)
        UnSetNodes ← UnSetNodes – {i}
        For all j ∈ Ai
            If opti + dij(opti) < optj then
                optj ← opti + dij(opti)
                PNj ← i

Algorithm 1: A modified version of Dijkstra's static shortest path algorithm which computes minimum-time one-to-all dynamic shortest paths in a FIFO network.
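A runnable sketch of Algorithm 1 in Python, using a binary heap as the Select_Mincost_Node step. Here `d` maps each arc (i, j) to its travel time function dij(t); the dictionary-of-functions representation is our assumption, not the thesis's.

```python
import heapq

def fifo_one_to_all(nodes, adj, d, s, td):
    """Minimum-time one-to-all dynamic shortest paths in a FIFO network:
    Dijkstra's algorithm with the static arc cost replaced by the
    time-dependent travel time d[(i, j)](t), evaluated at the current
    earliest arrival time. Returns (EA, PN): earliest arrival times
    and predecessor nodes."""
    EA = {i: float("inf") for i in nodes}
    PN = {i: None for i in nodes}
    EA[s] = td
    heap = [(td, s)]
    done = set()
    while heap:
        t, i = heapq.heappop(heap)
        if i in done:
            continue
        done.add(i)
        for j in adj.get(i, []):
            arrival = t + d[(i, j)](t)      # a_ij(t) = t + d_ij(t)
            if arrival < EA[j]:
                EA[j] = arrival
                PN[j] = i
                heapq.heappush(heap, (arrival, j))
    return EA, PN
```

The FIFO property is what makes this correct: once a node is permanently labeled, a later departure along any other path cannot yield an earlier arrival, exactly as in the static case.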

Using the transformation given in Lemma 4.11, one may use dynamically-adapted static

solution algorithms to solve the maximum departure time problem described in Section 4.4

in a FIFO network. Additionally, by employing the transformation given in Lemma 4.6,

one can use dynamically-adapted static solution algorithms to solve minimum-time one-

to-all problems in non-FIFO networks if unbounded waiting is allowed at all nodes. In


[15], Orda and Rom describe a further extension of this approach which efficiently solves

non-FIFO minimum-time one-to-all problems in which waiting is allowed only at the

source node. We will show in Chapters 6 and 7 methods for solving the minimum-time

all-to-one problem, which due to Proposition 4.3 may also be used to solve the minimum-

time one-to-all problem for all departure times in a FIFO network.

5.2 Parallel Algorithms for Problems in FIFO Networks

One particular advantage of FIFO networks is that it is possible to partition any

minimum-time FIFO all-to-one dynamic shortest path problem into completely disjoint

pieces, so that distributed computation may be performed on these pieces without any

overhead due to message passing between parallel processing units. Since by Proposition

4.3, the minimum-time FIFO one-to-all problem for all departure times is

computationally equivalent to the FIFO minimum-time all-to-one problem, either one of

these problems may be solved in parallel by trivially adapting any serial solution

algorithm for the minimum-time all-to-one problem. Chapters 6 and 7 present different

solution algorithms for the all-to-one problem; either of these may be easily converted to

run in a parallel environment. The parallel decomposition methods we describe below

are suitable for either a distributed-memory or a shared-memory parallel system; we will

for simplicity assume a distributed-memory system for the following discussion.

Our parallel approach to solving the minimum-time all-to-one problem actually involves

solving instead the closely-related maximum departure time problem for all arrival times,

described in Section 4.4. By Lemma 4.12, a solution to this problem can be easily

transformed into a solution to the all-to-one problem – this transformation may be

performed very efficiently either in serial or in parallel, using only a single linear sweep

through the linear pieces of each solution function. Our objective is therefore to compute the solution functions LDi(t) and Ni(t) for all nodes i ∈ N, as defined in Section 4.4.

The optimality conditions we must satisfy for the maximum departure time problem for all arrival times are given by (4.23) and (4.24). These conditions are somewhat

similar in form to the minimum-time all-to-one optimality conditions, except the


computation of the solution functions at a particular arrival time ta is no longer dependent

on the value of the solution functions at any other point in time except time ta. Based on

this observation, we propose a method of partitioning a problem up into disjoint sub-

problems by arrival time. Suppose we have K processing units available. We will assign

the kth processor, 1 ≤ k ≤ K, to an interval of arrival times (tk(min), tk(max)], such that all of these intervals are disjoint and such that their union is the entirety of time. It will be the

responsibility of the kth processor to compute the values of the solution functions LDi(ta)

for ta ∈ (tk(min), tk(max)] and for all nodes i ∈ N. Since there is no dependence between

solution functions at different values of time, the tasks of each of the parallel processing

units will be completely disjoint, and each may therefore be performed independently

with no inter-processor communication.
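As a concrete illustration of this assignment of disjoint arrival-time intervals to K processors, consider the following sketch (the function name and the finite-horizon simplification are ours, not the thesis's; in the thesis the outermost intervals extend to ±∞):

```python
def partition_time_horizon(t_min, t_max, K):
    """Split the horizon into K disjoint, contiguous arrival-time
    intervals (t_k_min, t_k_max]; their union covers (t_min, t_max].
    Each interval is handed to one processor, which computes LDi(ta)
    for ta in its interval, for every node i, with no communication."""
    width = (t_max - t_min) / K
    bounds = [t_min + k * width for k in range(K + 1)]
    return [(bounds[k], bounds[k + 1]) for k in range(K)]

intervals = partition_time_horizon(0.0, 100.0, 4)
```

Because the intervals are disjoint and cover the horizon, every arrival time belongs to exactly one processor's sub-problem.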

Initially, the network G and a specification of the FIFO inverse arc arrival time functions

aij-1(t) for all arcs (i, j) ∈ A are to be broadcast to all processing units. Due to the FIFO

property, the solution functions LDi(ta) will be non-decreasing for all nodes i ∈ N. Using

the transformation given in Lemma 4.11, and the solution techniques of the preceding

subsection, it is possible to solve the maximum departure time problem for a particular

arrival time ta very efficiently using a modified static shortest path algorithm. Each

processor is to perform two such computations, in order to determine the respective

values of LDi(tk(min)) and LDi(tk(max)) for all i ∈ N. Since these solution functions are non-

decreasing, we will then have for the kth processor:

LDi(tk(min)) ≤ LDi(ta) ≤ LDi(tk(max)), where ta ∈ (tk(min), tk(max)], ∀i ∈ N. (5.1)

Since we know bounds on the values of the solution functions assigned to each processor,

each processor can “clip” the inverse arc arrival time functions aij-1(t) so that they remain

static for all values of time not within the interval of possible values of the solution

function LDi(ta) for that processor. This operation has no effect on the optimal solution

computed by each processor, since at optimality the conditions (4.23) and (4.24) will

never reference any inverse arc arrival time function aij-1(t) at any value of time t which

was clipped away. For the kth processor, the clipping operation utilizes the following

transformation:


aij-1(t) ← { aij-1(LDi(tk(min)))    if t ≤ LDi(tk(min))
          { aij-1(t)               if LDi(tk(min)) < t ≤ LDi(tk(max))
          { aij-1(LDi(tk(max)))    if t > LDi(tk(max))
                                                      ∀(i, j) ∈ A. (5.2)
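The clipping transformation (5.2) simply holds each inverse arc arrival time function constant outside the interval of values relevant to a processor. A minimal sketch, treating the function as an arbitrary callable (the names here are illustrative, not the thesis's):

```python
def clip(f, lo, hi):
    """Return a version of f held constant outside (lo, hi], mirroring
    transformation (5.2): times at or below lo evaluate to f(lo), and
    times above hi evaluate to f(hi)."""
    def clipped(t):
        if t <= lo:
            return f(lo)
        if t > hi:
            return f(hi)
        return f(t)
    return clipped

# A processor whose solution values lie in (10, 20] clips each
# inverse arc arrival-time function to that interval.
inv_arrival = clip(lambda t: 0.5 * t + 1.0, 10.0, 20.0)
```

In the piece-wise linear representation used in the thesis, clipping also discards every linear piece lying outside the interval, which is the source of the speedup.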

The resulting clipped inverse arc arrival time functions will in general, for each

processor, consist of far fewer linear pieces than the original inverse arc arrival time

functions. By employing the transformations given in Lemma 4.12, each processing unit

can solve this simplified maximum departure time problem for all arrival times as a

minimum-time all-to-one problem, using any all-to-one solution algorithm. This

computation is likely to proceed quickly due to the smaller number of pieces present in

the clipped problem.

We have therefore devised a generic method of partitioning up any minimum-time all-to-

one problem into several disjoint all-to-one problems, each of which is simpler to solve in

the sense that it contains only a small portion of the linear pieces of the arc travel time

functions provided as input to the original problem. One can adopt one of many

reasonable policies for determining the boundaries of the intervals of time to assign to

each parallel processor. These intervals of time should in general be assigned such that

the number of solution LNIs computed by each processor is expected to be equal. A

perfectly equal partition may not be possible, but intelligent heuristic partitioning

schemes should be able to balance the load reasonably well.
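One simple heuristic of this kind, sketched here under the assumption (ours, for illustration) that the breakpoints of the input travel time functions are a reasonable proxy for where solution LNIs will fall, places interval boundaries at quantiles of those breakpoints:

```python
def balanced_intervals(breakpoints, K):
    """Choose K contiguous time intervals whose boundaries sit at
    quantiles of the supplied breakpoint times, so each processor
    receives roughly the same number of breakpoints (a heuristic
    balance, not an exact one)."""
    pts = sorted(breakpoints)
    bounds = [float('-inf')]
    bounds += [pts[(k * len(pts)) // K] for k in range(1, K)]
    bounds.append(float('inf'))
    return [(bounds[k], bounds[k + 1]) for k in range(K)]
```

With uniformly spread breakpoints this reduces to an even split of the horizon; with clustered breakpoints it shifts boundaries toward the clusters, where more work is expected.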

5.2.1 Parallel Techniques for Discrete-Time Problems

It is interesting to note that the approach outlined above for development of parallel

algorithms applies not only to continuous-time problems, but to discrete-time problems as

well. Ganugapati [11] has investigated different strategies of parallel decomposition

based on network topology applied to discrete-time problems. Topology-based

decomposition methods were shown to work well on shared-memory parallel platforms.

However, their performance on distributed-memory platforms suffers from high inter-

processor communication overhead, since the parallel tasks are highly dependent on

each other. The only method proposed to date for partitioning a single discrete-time

dynamic shortest path problem into completely disjoint sub-problems is due to Chabini,


Florian, and Tremblay [6]. This method solves the all-to-one minimum-time problem in

a FIFO network by solving the equivalent latest departure time problem for all arrival

times. Each parallel processor is assigned a set of arrival times, and applies a modified

static algorithm (as described in Section 5.1) to solve the latest departure time problem

for each of its assigned arrival times in sequence. We propose an alternative

parallelization technique for FIFO problems, based on the method developed in the

previous subsection, which gives a stronger theoretical running time.

Based on the discussion in Section 2.2, we see that all discrete-time problems can be

viewed as the application of a static shortest path algorithm to compute shortest paths

within a static time-expanded network. For a minimum-time FIFO problem, the time-

expanded network can be partitioned into completely disjoint pieces along the time

dimension as follows. Consider the all-to-one problem with a destination node d ∈ N,

and consider a particular arrival time ta. By solving a single latest departure time

problem, described in Section 4.4, one can compute for each node i ∈ N the latest departure

time LDi(ta) for which arrival at the destination is possible by time ta. These departure

times may be efficiently computed using a modified static algorithm, as described in

Section 5.1. The time-space network may then be partitioned into two disjoint pieces,

where one processor is assigned the computation of optimal paths leaving from the set of node-time pairs S1 = {(i, t) | i ∈ N, t ≤ LDi(ta)}, and a second processor is assigned the remaining node-time pairs S2 = {(i, t) | i ∈ N, t > LDi(ta)}. Any arc in the time-expanded

network crossing from S1 to S2 will never be part of an optimal path, because from any

node-time pair in S1, it is possible to reach the destination by time ta, whereas any optimal

path departing from any node-time pair in S2 must necessarily reach the destination after

time ta. Therefore, this partition effectively divides the time-expanded network into two

completely disjoint pieces; if two processors are assigned the computation of optimal

paths departing from each region, then no communication will be necessary between the

processors during the process of computation. Within each partition, one may apply

discrete-time algorithms with optimal running time, such as those in [5], leading to

stronger running times for each processor than the approach proposed by Chabini,

Florian, and Tremblay.
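The latest-departure cut described above can be sketched directly (illustrative names; LD maps each node to its latest departure time allowing arrival by the chosen arrival time):

```python
def split_time_expanded(nodes, times, LD):
    """Partition the node-time pairs of a time-expanded network by the
    latest-departure cut: S1 holds pairs that can still reach the
    destination by the chosen arrival time, S2 holds the rest.  No arc
    from S1 to S2 lies on an optimal path, so the two pieces can be
    processed independently with no inter-processor communication."""
    S1 = {(i, t) for i in nodes for t in times if t <= LD[i]}
    S2 = {(i, t) for i in nodes for t in times if t > LD[i]}
    return S1, S2
```

For more than two processors, the same split is applied recursively (or at several arrival times at once), yielding one disjoint region per processor.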


This technique generalizes to any number of processors. In the case of multiple

processors, one should choose a set of arrival times which partitions the time-expanded

network into pieces of relatively equal size. Similarly, one may solve the minimum-time

one-to-all for all departure times problem in a discrete-time FIFO network using this

technique, only the partitions are generated in this case by solving minimum arrival time

problems. Time-based parallelization techniques should enable the efficient computation

of both discrete-time and continuous-time dynamic shortest paths in FIFO networks.


Chapter 6

Chronological and Reverse-Chronological Scan Algorithms

In this chapter, we introduce a new class of continuous-time dynamic shortest path

algorithms, which we refer to as chronological scan algorithms. These algorithms will in

general compute a solution by scanning either forward or backward through time, in a

similar fashion to optimal discrete-time algorithms such as those described in [5].

Although some of these algorithms utilize a reverse-chronological scan through time, we

will often refer to all algorithms which scan sequentially through time inclusively as

chronological scan algorithms, regardless of the direction one moves through time.

In order to provide a gradual introduction to the methodology underlying these

approaches, we first consider the simple, yet extremely common case of computing

minimum-time all-to-one paths through a FIFO network. For this problem, we will

present a solution algorithm and prove that its running time is strongly polynomial.

Using the result of Proposition 4.3, this algorithm can be used as well to solve the

minimum-time one-to-all problem for all departure times in a FIFO network, and it can

be made to run in parallel using the techniques given in Section 5.2.

We then proceed to develop extended chronological scan algorithms for solving general

minimum-cost all-to-one and one-to-all dynamic shortest path problems in FIFO or non-

FIFO networks. Although these problems are shown to be NP-Hard, it is argued that for

most reasonable problem instances, the running time should be quite efficient. Finally,

we discuss the possibility of hybrid continuous-discrete variants of these algorithms

which compute an approximate solution in a shorter amount of time.

6.1 FIFO Minimum-Time All-to-One Solution Algorithm

As the minimum-time one-to-all problem in a FIFO network has been optimally solved,

as discussed in Section 5.1, we devote our attention to solving the minimum-time all-to-one problem in a FIFO network. We will introduce a reverse-chronological scan

algorithm for solving this problem, and prove a polynomial bound on its worst-case

running time. The all-to-one algorithm which we give for the minimum-time FIFO case

will actually be a subset of the more complicated general minimum-cost all-to-one

algorithm presented in Section 6.2. For minimum-time FIFO problems, the computation

performed by the general algorithm will actually be equivalent to that of the simplified

FIFO algorithm; we will therefore sometimes refer to these two algorithms together simply as “the all-to-one chronological scan algorithm”. Due to Proposition 4.3, this

solution algorithm may be used to solve a minimum-time one-to-all problem for all

departure times in a FIFO network. Furthermore, the techniques given in Section 5.2

may be used to perform parallel computation.

Due to Lemma 4.4, we find that all waiting costs and constraints may be ignored when

computing minimum-time paths through a FIFO network, as waiting is never beneficial

in FIFO networks. This fact allows us to considerably simplify the formulation of the all-

to-one problem. In the minimum-time FIFO case, we will have Ci(w)(t) = Ci(nw)(t) and Wi(t) = 0, so it is only necessary to specify the Ci(nw)(t) and Ni(t) functions in order to

characterize a solution to the all-to-one problem. A linear node interval in this case

therefore reduces to a region of time during which Ni(t) does not change and during

which only Ci(nw)(t) changes as a simple linear function of time. The optimality

conditions for the FIFO minimum-time all-to-one problem are simplified accordingly to:

Ci(nw)(t) = { 0                                                if i = d
           { min_{j ∈ Ai} [ dij(t) + Cj(nw)(t + dij(t)) ]     if i ≠ d       ∀i ∈ N, (6.1)

Ni(t) = arg min_{j ∈ Ai} [ dij(t) + Cj(nw)(t + dij(t)) ]      ∀i ∈ N – {d}. (6.2)

To specify a solution, we must therefore enumerate for each node i ∈ N the complete set

of LNIs which determine the piece-wise behavior of the functions Ci(nw)(t) and Ni(t) for

that node over all departure times.
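Conditions (6.1) and (6.2) can be spot-checked numerically for a candidate solution. The sketch below treats the travel time and cost functions as plain callables (all names are illustrative, not from the thesis):

```python
def check_optimality(nodes, out_arcs, d, C, N, dest, times, eps=1e-9):
    """Verify conditions (6.1)/(6.2) at the sampled departure times:
    C[dest] is identically zero; every other C[i](t) equals the minimum
    over outgoing arcs of d[(i, j)](t) + C[j](t + d[(i, j)](t)); and the
    next-node function N[i](t) attains that minimum."""
    for t in times:
        assert abs(C[dest](t)) <= eps
        for i in nodes:
            if i == dest:
                continue
            best = min(d[i, j](t) + C[j](t + d[i, j](t)) for j in out_arcs[i])
            assert abs(C[i](t) - best) <= eps
            j = N[i](t)
            assert abs(d[i, j](t) + C[j](t + d[i, j](t)) - best) <= eps
    return True
```

Note that the check at departure time t references downstream costs at the later time t + dij(t), which is exactly the coupling across time that the chronological scan exploits.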

The all-to-one chronological scan solution algorithm proceeds in two phases, which we

briefly outline. Due to Lemma 4.1, as time approaches +∞ eventually all nodes will


exhibit linear departure behavior, and the set of optimal paths emanating from all nodes

will form a static shortest path tree directed into the destination node d. By computing

the structure of this “infinite-time” shortest path tree, one can determine the values of the

solution functions Ci(nw)(t) and Ni(t) for sufficiently large values of t.

Phase two of the algorithm uses this partial solution as a starting point, and analytically

computes how far one can move backward in time, starting from time t = +∞, such that

all nodes maintain their current linear departure behavior, stopping at the first LNI

boundary it encounters. The solution functions are then adjusted to reflect the change in

linear behavior at this boundary, and the algorithm continues its backward scan through

time until it discovers each LNI boundary in sequence, terminating once all LNIs are

discovered.

6.1.1 Computing the Infinite-Time Shortest Path Tree

Eventually, due to Lemma 4.1, any dynamic network with piece-wise linear

characteristics will reach a steady state where all nodes exhibit linear departure behavior

and in which the set of optimal paths to the destination will form a static shortest path

tree directed into the destination node, which we denote by T∞. We will give a modified

static shortest path algorithm which determines the topology of T∞ and the simple linear

behavior of Ci(nw)(t) for all nodes i ∈ N as time approaches infinity.

Proposition 6.1: Commodities following optimal routes to the destination departing from

any node as departure time approaches infinity will travel along a common static

shortest path tree T∞.

Proof: The proposition readily follows from Lemma 4.1 since, if every node i ∈ N

experiences linear departure behavior during the interval of time which extends from

some time t+ to infinity, then by definition the value of all next node functions Ni(t) will

be constant over this region. Furthermore, due to Lemma 4.5, we can find acyclic

shortest paths. Hence, the next node pointers will define a shortest path tree directed into

the destination node which is static over the interval.


In determining T∞, we will only need to focus on the behavior of all arc travel time

functions as time approaches infinity. We may therefore concern ourselves only with the

final linear piece of each arc travel time function. For simplicity of notation, let αij =

α(dij , P(dij)) and βij = β(dij , P(dij)), so the final linear piece of each arc travel time

function dij(t) is given simply by αij t + βij. Since we have linear departure behavior at

each node as time approaches infinity, the optimal path travel time functions Ci(nw)(t)

must in this limiting case be simple linear functions of time, which we denote by Ci(nw)(t)

= αit + βi. We can now rewrite the all-to-one optimality conditions based on these

simple linear path travel time functions, in the limiting case as time approaches infinity.

By substituting into the minimum-time all-to-one problem optimality conditions for a

FIFO network given in (6.1), we obtain

αi t + βi = { 0                                                          if i = d
           { min_{j ∈ Ai} [ αij t + βij + αj(t + αij t + βij) + βj ]    if i ≠ d      ∀i ∈ N. (6.3)

The optimality conditions given in (6.3) can be written as separate optimality conditions

for each arc. For sufficiently large values of t, all simple linear node path cost functions

must satisfy:

αi t + βi ≤ (αij + αj + αij αj) t + (βij + αj βij + βj)   ∀(i, j) ∈ A, i ≠ d, (6.4)

αd t + βd = 0. (6.5)

Based on either of the preceding sets of optimality conditions, one can easily modify any

label-setting or label-correcting static shortest path algorithm so that it computes T∞.

Instead of assigning scalar distance labels to each node, we instead represent the distance

label of each node in this case by a simple linear function. There is a total ordering

imposed on the set of all simple linear functions evaluated at t = +∞, whereby we can

compare any two linear functions by comparing first their linear coefficients, and by

breaking ties by comparing constant terms. To illustrate the computation of T∞ and its

simple linear node labels, Algorithm 2 shows pseudo-code for an adapted generic label-

correcting algorithm which performs this task. Other algorithms, such as Dijkstra’s label-

setting algorithm, may also be adapted for this purpose.


Initialization:
   ∀i ∈ N : αi ← ∞, βi ← ∞, Ni ← ∅
   αd ← 0, βd ← 0

Main Loop:
   Repeat
      Locate an arc (i, j) ∈ A such that either:
         (i)  αi > αij + αjαij + αj, or
         (ii) αi = αij + αjαij + αj and βi > βij + αjβij + βj
      If such an arc exists, then
         αi ← αij + αjαij + αj
         βi ← βij + αjβij + βj
         Ni ← j
      If no such arc exists, then terminate the algorithm

Algorithm 2: Adaptation of a generic static label-correcting algorithm to compute T∞. After termination, the topology of T∞ is given by the next node pointers stored in the Ni values, and the simple linear travel cost from each node i ∈ N to the destination as time approaches infinity is given by Ci(nw)(t) = αit + βi.
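As a sketch of how Algorithm 2 might look in executable form (assuming, for illustration, that each arc's final linear piece is supplied as a pair (αij, βij), and that the steady-state network admits a shortest path tree so the correction loop terminates), labels are linear functions compared lexicographically by (α, β), i.e. by their values as t → +∞:

```python
INF = float('inf')

def infinite_time_tree(nodes, arcs, dest):
    """Generic label-correcting computation of the infinite-time tree.
    `arcs` maps (i, j) to (a_ij, b_ij), the final linear piece of
    d_ij(t).  Each node label (alpha_i, beta_i) represents the linear
    cost alpha_i*t + beta_i; composing arc (i, j) with the downstream
    label gives slope a_ij + alpha_j*(1 + a_ij) and intercept
    b_ij*(1 + alpha_j) + beta_j, as in Algorithm 2's update."""
    alpha = {i: INF for i in nodes}
    beta = {i: INF for i in nodes}
    nxt = {i: None for i in nodes}
    alpha[dest], beta[dest] = 0.0, 0.0
    improved = True
    while improved:
        improved = False
        for (i, j), (a_ij, b_ij) in arcs.items():
            if alpha[j] == INF or i == dest:
                continue
            cand = (a_ij + alpha[j] * (1 + a_ij),
                    b_ij * (1 + alpha[j]) + beta[j])
            # Lexicographic (alpha, beta) comparison = comparison of the
            # two linear functions evaluated at t -> +infinity.
            if cand < (alpha[i], beta[i]):
                alpha[i], beta[i], nxt[i] = cand[0], cand[1], j
                improved = True
    return alpha, beta, nxt
```

A label-setting (Dijkstra-like) variant would extract nodes in increasing (α, β) order instead of sweeping all arcs repeatedly.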

6.1.2 Computing Linear Node Intervals

We proceed to show how to compute all LNIs comprising a solution using a backward

scan through time. Suppose that, for some time tcurrent, we know the value of Ci(nw)(t) and

Ni(t) for all times t > tcurrent, and furthermore we know the simple linear behavior of

Ci(nw)(t) for all times t ∈ (tcurrent - δ, tcurrent] , where δ is taken to be small enough such that

all nodes exhibit linear departure behavior over the region (tcurrent - δ, tcurrent] . We know

due to Proposition 4.2 that all LNIs for the all-to-one problem will span a non-zero extent

of time, so it is always possible to find some such range of time (tcurrent - δ, tcurrent] during

which the solution functions exhibit linear departure behavior. Using the algorithmic

result of the preceding section, we can compute the simple linear behavior of functions

Ci(nw)(t) and the constant behavior of Ni(t) for sufficiently large values of t, so we initially

set tcurrent to +∞ .

The first goal of this section will be to analytically determine an earlier time tnew < tcurrent

to which one may scan backward in time, such that all nodes are guaranteed to maintain


their current linear departure behavior over the region of time t ∈ (tnew, tcurrent] . The time

tnew will often mark the piece-wise boundary of one or more of the functions Ci(nw)(t) or

Ni(t), and therefore also the boundary of one or more LNIs. For each node with an LNI

that is closed off at time tnew, we will record this LNI and compute the simple linear

behavior of the next LNI beginning at that node (having an upper boundary at time tnew).

After this computation, we will know the value of Ci(nw)(t) and Ni(t) for all t > tnew, and

the simple linear behavior of these functions for all t ∈ (tnew - δ, tnew] , and we can

therefore then set tcurrent to the value of tnew and repeat this process until tnew reaches -∞, at

which point we will have generated the complete set of LNIs characterizing an optimal

solution to the minimum-time all-to-one dynamic shortest path problem in a FIFO

network.

We turn our attention now to the calculation of tnew, given the value of tcurrent and the

simple linear behavior of Ci(nw)(t) for all t ∈ (tcurrent - δ, tcurrent] . Based on the optimality

conditions given in Equations (6.1) and (6.2), we see that there are three possible factors

which can determine the value of tnew. All Ci(nw)(t) functions will retain their simple

linear behavior while reducing the value of t from tcurrent as long as:

(i) the value of t remains within the same linear piece of dij(t), for each arc (i, j) ∈ A,

(ii) the value of t + dij(t) remains within the same linear piece of Cj(nw)(t), for each arc (i, j) ∈ A, and

(iii) the same optimal outgoing arc is selected for each node i ∈ N by the minimization expression in Equations (6.1) and (6.2).

In order to write these conditions mathematically, our algorithm will maintain as part of

its current state several variables containing indices of the current linear pieces of the

network data and solution functions at time t = tcurrent. These index variables are:

CPi : Index of the current linear piece of Ci(nw)(t), at t = tcurrent,

CPij : Index of the current linear piece of dij(t), at t = tcurrent,

CDPij : Index of the current “downstream” linear piece of Cj(nw)(t), at t = aij(tcurrent).

In terms of these index variables, we will have over the region t ∈ (tnew, tcurrent] :

Ci(nw)(t) = α(Ci(nw), CPi) t + β(Ci(nw), CPi)   ∀i ∈ N (6.6)


dij(t) = α(dij, CPij) t + β(dij, CPij)   ∀(i, j) ∈ A (6.7)

Cj(nw)(aij(t)) = α(Cj(nw), CDPij) aij(t) + β(Cj(nw), CDPij)   ∀(i, j) ∈ A (6.8)

One can then write mathematical bounds on tnew based on conditions (i), (ii), and (iii) as:

(i)   tnew ≥ B(dij, CPij – 1)                                       ∀(i, j) ∈ A, i ≠ d (6.9)

(ii)  tnew ≥ (1/ρij) [ B(Cj(nw), CDPij – 1) – β(dij, CPij) ]        ∀(i, j) ∈ A, i ≠ d, and ρij > 0 (6.10)
      with ρij = 1 + α(dij, CPij)

(iii) tnew ≥ (1/σij) [ β(Ci(nw), CPi) – β(Cj(nw), CDPij) – β(dij, CPij)(1 + α(Cj(nw), CDPij)) ]
                                                                    ∀(i, j) ∈ A, i ≠ d, and σij > 0 (6.11)
      with σij = α(dij, CPij) – α(Ci(nw), CPi) + α(Cj(nw), CDPij)(1 + α(dij, CPij))

Each of the bounds given in (i), (ii), and (iii) above is a linear inequality relating the

value of t to known quantities. The denominators ρij and σij of (ii) and (iii) can

potentially be zero or negative for some arcs; if this is the case, then these conditions will

impose no lower bounds on tnew for these arcs. A simple example of such a situation is

the case where the entire network is static, in which case σij will be zero for all arcs. If

we let Γij(i), Γij

(ii), and Γij(iii) represent the lower bound obtainable from each of the

respective conditions (i), (ii) , and (iii) above for a particular arc (i, j) ∈ A, and we let Γij =

maxΓij(i), Γij

(ii), Γij(iii) , then can write the value of tnew as

max),(

ijAji

newt Γ=∈

. (6.12)

We therefore have a method for computing tnew. In order to facilitate rapid computation,

we store the Γij value associated with each arc in a heap data structure, so the

determination of tnew using Equation (6.12) requires only O(1) time.
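The heap of Γ values can be sketched as follows: a max-heap with lazy deletion built on Python's min-heap `heapq` (one possible realization, not the thesis's prescribed data structure), so that reading tnew off the top is O(1) and each update is O(log |A|):

```python
import heapq

class GammaHeap:
    """Maintains the arc bounds Gamma_ij so that t_new = max Gamma_ij
    (Equation (6.12)) can be read off the top.  An update pushes a
    fresh entry; superseded entries are discarded lazily whenever they
    surface at the top of the heap."""
    def __init__(self):
        self._heap = []     # entries (-gamma, arc); negated for max-heap
        self._latest = {}   # arc -> most recent gamma

    def update(self, arc, gamma):
        self._latest[arc] = gamma
        heapq.heappush(self._heap, (-gamma, arc))

    def max_gamma(self):
        while self._heap:
            neg, arc = self._heap[0]
            if self._latest.get(arc) == -neg:
                return -neg
            heapq.heappop(self._heap)  # stale entry, discard
        return float('-inf')
```

Only the arcs touched at an event (those entering or leaving nodes in S) need their Γ values recomputed and pushed, matching the per-iteration work of the algorithm.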

After the determination of tnew, the behavior of the solution functions Ci(nw)(t) and Ni(t)

will be known for the region of time t > tnew. At time tnew, we may encounter the closure

of existing LNIs at some nodes and the subsequent creation of new LNIs at these nodes to

reflect the changes in the structure and linear cost of optimal paths departing at times t ∈


(tnew - δ, tnew] . The problem we now consider is that of computing the piece-wise changes

to Ci(nw)(t) and Ni(t) at time tnew.

Only nodes in the set S = {i ∈ N | ∃ j ∈ Ai such that Γij = tnew} are possible candidates for

piece-wise linear changes at time tnew, since conditions imposed on arcs emanating from

only these nodes were responsible for the value of tnew. For each node i ∈ S, we will re-

compute the simple behavior of the functions Ni(t) and Ci(nw)(t) over the interval (tnew - δ,

tnew] , and also update the linear piece indices for arcs emanating from these nodes to

reflect the decrease in time to tnew. This computation requires three steps:

1. We first update the values of the current indices CPij, and CDPij for each arc (i, j) ∈ A

where i ∈ S, so that these quantities give the index of the current linear pieces of dij(t) and

Cj(nw)(t + dij(t)) respectively, only at time t = tnew rather than at t = tcurrent. Due to the

FIFO property, these indices will not increase as t decreases from tcurrent to tnew. We

therefore successively decrement CPij and CDPij until the desired pieces of dij(t) and

Cj(nw)(t + dij(t)) containing time tnew are found.

2. For all nodes i ∈ S we compute the optimal outgoing arc Ni(t) for t ∈ (tnew - δ, tnew]

using the optimality conditions of Equation (6.2) and the fact that the optimal solution

function Ci(nw)(t) is already known for all nodes i ∈ N and all future times t > tnew due to

our backward traversal through time. Equation (6.2) cannot be directly applied,

however, since a degenerate situation arises if there is a tie among several arcs emanating

from the same node for the optimal outgoing arc at exactly time tnew. In this case, one

must break the tie by selecting an arc which is also optimal at time tnew - δ, where δ is

infinitesimally small, so that this outgoing arc is then optimal for the entire region of time

(tnew - δ, tnew] . This method is equivalent to breaking ties based on the linear coefficients

of the simple linear path travel times experienced by following each potential outgoing

arc. The following equation performs this computation (we denote by arg min* the set of

all arguments which minimize an expression):


Ni(t) = arg max_{j ∈ γi} [ α(dij, CPij) + α(Cj(nw), CDPij)(1 + α(dij, CPij)) ],
where γi = arg min*_{j ∈ Ai} [ dij(tnew) + Cj(nw)(tnew + dij(tnew)) ]       ∀i ∈ S. (6.13)

3. After the determination of Ni(t) for the interval (tnew - δ, tnew] , optimal path costs may

be easily obtained for this interval of time as follows:

Ci(nw)(t) = dij(t) + Cj(nw)(t + dij(t)), where j = Ni(t)   ∀i ∈ S. (6.14)

We have therefore devised a method for computing Ni(t) and Ci(nw)(t) for all nodes over

the time interval (tnew - δ, tnew] . For any node i ∈ S which transitions to a new LNI at time

tnew, our algorithm must record the completed LNI and initialize the new LNI at node i.

Only a small amount of overhead is required for this task, as LNIs are generated in

reverse chronological order for each node, and therefore may be stored and updated

efficiently in either a linked list or an array data structure. Additionally, the Γij values,

maintained in a heap, must be re-computed for all (i, j) ∈ A such that either i ∈ S or j ∈ S,

so that a value of tnew may be determined on the next iteration of the algorithm.

In the event that tnew reaches -∞, we may terminate our algorithm since the functions

Ci(nw)(t) and Ni(t) will have been determined for all values of time, giving us a complete

solution to the all-to-one problem. As long as tnew is finite, however, we compute the

necessary piece-wise changes of the Ci(nw)(t) and Ni(t) functions at time tnew, then set

tcurrent to the value of tnew and repeat our backward chronological scan until a complete

solution is determined. Algorithm 3 gives the pseudo-code for the complete FIFO all-to-

one algorithm.

Function Compute-Γ(i, j):
   Γij(i) ← { −∞ if i = d
            { B(dij, CPij – 1) otherwise
   ρij ← 1 + α(dij, CPij)
   Γij(ii) ← { −∞ if i = d or ρij ≤ 0
             { (1/ρij)[B(Cj(nw), CDPij – 1) – β(dij, CPij)] otherwise
   σij ← α(dij, CPij) – α(Ci(nw), CPi) + α(Cj(nw), CDPij)(1 + α(dij, CPij))
   Γij(iii) ← { −∞ if i = d or σij ≤ 0
              { (1/σij)[β(Ci(nw), CPi) – β(Cj(nw), CDPij) – β(dij, CPij)(1 + α(Cj(nw), CDPij))] otherwise
   Γij ← max{Γij(i), Γij(ii), Γij(iii)}

Function Update-Piece-Indices(i, j):
   While (tnew ∉ (B(dij, CPij – 1), B(dij, CPij)]) Decrement CPij
   While ((1 + α(dij, CPij)) tnew + β(dij, CPij) ∉ (B(Cj(nw), CDPij – 1), B(Cj(nw), CDPij)]) Decrement CDPij

Function Compute-New-LNI(i):
   γi ← arg min*_{j ∈ Ai} [ dij(tnew) + Cj(nw)(tnew + dij(tnew)) ],
      where dij(t) = α(dij, CPij) t + β(dij, CPij), and Cj(nw)(t) = α(Cj(nw), CDPij) t + β(Cj(nw), CDPij)
   j ← arg max_{j ∈ γi} [ α(dij, CPij) + α(Cj(nw), CDPij)(1 + α(dij, CPij)) ]
   a ← α(dij, CPij) + α(Cj(nw), CDPij)(1 + α(dij, CPij))
   b ← (1 + α(Cj(nw), CDPij)) β(dij, CPij) + β(Cj(nw), CDPij)
   If j ≠ Ni(B(Ci(nw), CPi)) or a ≠ α(Ci(nw), CPi) or b ≠ β(Ci(nw), CPi) then
      Decrement CPi
      B(Ci(nw), CPi) ← tnew, B(Ci(nw), CPi – 1) ← −∞
   Ni(tnew < t ≤ tcurrent) ← j, α(Ci(nw), CPi) ← a, β(Ci(nw), CPi) ← b
   For all i′ ∈ Bi : Compute-Γ(i′, i), Update-Heap(H, Γi′i)
   For all j ∈ Ai : Compute-Γ(i, j), Update-Heap(H, Γij)

Initialization:
   ∀ i ∈ N : CPi ← 0
   ∀ (i, j) ∈ A : CDPij ← 0, CPij ← P(dij)
   α(C(nw), 0), β(C(nw), 0), N(t = +∞) ← Algorithm 2
   ∀ i ∈ N : B(Ci(nw), –1) ← −∞, B(Ci(nw), 0) ← +∞
   ∀ (i, j) ∈ A : Compute-Γ(i, j)
   H ← Create-Heap(Γ)
   tcurrent ← +∞

Main Loop:
   While (tcurrent ≠ −∞)
      tnew ← Heap-Max(H)
      S ← {i ∈ N | ∃ j ∈ Ai such that Γij = tnew}
      For all i ∈ S, j ∈ Ai : Update-Piece-Indices(i, j)
      For all i ∈ S : Compute-New-LNI(i)
      tcurrent ← tnew

Termination:
   For all i ∈ N
      P(Ci(nw)) ← 1 – CPi
      Renumber the pieces of Ci(nw) from 1 .. P(Ci(nw))

Algorithm 3: Reverse-chronological scan algorithm which solves the minimum-time all-to-one dynamic shortest path problem in a FIFO network.

During the execution of Algorithm 3, the linear pieces of the Ci(nw)(t) functions are indexed by the decreasing integers 0, -1, -2, … as they are generated, and renumbered from 1 through P(Ci(nw)) at the end of the algorithm once P(Ci(nw)) has finally been computed.

Standard auxiliary heap functions are assumed to exist, in order to maintain a heap

containing the values of the Γij bounds.

Proposition 6.2: The all-to-one chronological scan algorithm correctly solves the

minimum-time all-to-one dynamic shortest path problem in a FIFO network.

Proof: To argue correctness, we assert that the algorithm, by design, produces only pieces

of the solution functions Ci(nw)(t) and Ni(t) which satisfy the FIFO optimality conditions

given in Equations (6.1) and (6.2). The algorithm maintains the invariant that, for t >

tcurrent, the optimal values of both of these solution functions have already been computed.

Each iteration of the main loop decreases tcurrent by some positive quantity (due to

Proposition 4.2), until tcurrent reaches -∞ and a complete solution has been determined.


An analysis of the worst-case and average-case running time of the FIFO minimum-time

all-to-one chronological scan algorithm appears in Section 6.4, and the results of

computational testing of this algorithm may be found in Chapter 8.

6.2 General All-to-One Solution Algorithm

In this section and the next we develop generalizations of the reverse-chronological scan

algorithm which solve any all-to-one or one-to-all dynamic shortest path problem within

the framework under consideration in this thesis. These more general algorithms are

based on the same methodology as the simpler FIFO all-to-one algorithm, and

consequently, we will simplify the discussion of these new algorithms by referring back

at times to results presented in the previous subsection.

We begin by developing a general all-to-one chronological scan algorithm which solves

any all-to-one dynamic shortest path problem. The algorithm proceeds in two phases,

exactly as its simpler FIFO counterpart. Initially, an “infinite-time” tree is computed

which gives the structure and cost of optimal paths departing at sufficiently large values

of t. Next, a backward scan through time is performed in order to compute the LNIs

comprising the solution functions – it is this backward scan through time that is

substantially more complicated for the general algorithm. The full set of solution functions Ci(w)(t), Ci(nw)(t), Ni(t), and Wi(t) will be computed as output of this algorithm.

In Section 6.4, we will analyze the worst-case and expected performance of the general

all-to-one chronological scan algorithm. If the general all-to-one chronological scan

algorithm is used to compute minimum-time paths through a FIFO network, then the

operations of the algorithm will reduce exactly to those performed by the simpler FIFO variant of the algorithm, since the simpler algorithm is merely a subset of the general

algorithm. The performance of the general algorithm applied to find minimum-time

paths in a FIFO network will therefore be the same as that of the minimum-time FIFO

all-to-one algorithm.

6.2.1 Simplification of Waiting Constraints

We briefly discuss a network transformation which will simplify the computation of

optimal paths in the presence of waiting constraints and costs. This subsection only


applies to problems which involve waiting. Recall that in this case we assume that all

network data functions are continuous. By re-writing the “waiting” optimality condition

(4.1) of the all-to-one problem, we have

Ci(w)(t) = min_{τ ∈ WTi(t)} [ cwci(t + τ) + Ci(nw)(t + τ) ] − cwci(t)    ∀i ∈ N. (6.15)

If we let WCFi(t) denote the waiting cost function cwci(t) + Ci(nw)(t), we can write (6.15) as

Ci(w)(t) = min_{0 ≤ τ ≤ ubwi(t)} WCFi(t + τ) − cwci(t)    ∀i ∈ N. (6.16)

Since the waiting cost function of each node is continuous and piece-wise linear, the

minimum of this function in Equation (6.16) will occur either:

(i) at one of the endpoints of the region of minimization, where τ = 0 or τ = ubwi(t), or

(ii) at a time t + τ which is a piece-wise boundary of WCFi(t + τ), where t < t + τ < t + ubwi(t).

In order to eliminate the possibility of option (i), we perform the following network

transformation: replace each node i at which waiting is allowed with 3 virtual nodes inw,

imw, and iw, which represent respectively a “non-waiting node”, a “maximum waiting

node”, and a “waiting node” corresponding to node i. Each of these virtual nodes is to

be linked to the same neighboring nodes as the original node i. We want the non-waiting

node inw to behave like node i, except that commodities traveling through this virtual

node are prohibited from waiting and must immediately depart along an outgoing arc.

Similarly, commodities traveling through the maximum waiting node imw are required to

wait for the maximum possible allowable length of time before departure on an outgoing

arc – this restriction is enforced by prohibiting waiting at the node imw and by adding

ubwi(t) to the travel time function of each outgoing arc. Commodities traveling through

the waiting node iw are required to wait and depart only at times t + τ which are piece-

wise boundaries of the function WCFi(t + τ), as in option (ii) above. For each waiting

node, we only allow departures from times which are within the closed interval [t, t + ubwi(t)] following an arrival at time t. Although option (ii) specifies an open interval, the fact that we allow departures from waiting nodes from a closed interval of time will not

affect the final solution produced by the algorithm.

This transformation maps each node i at which waiting is allowed into three nodes, two

of which no longer permit waiting, and one of which permits a restricted form of waiting;

however, for the virtual waiting node iw which permits waiting, only option (ii) above

needs to be considered when selecting a minimum waiting time, since the two other

virtual nodes account for the possibilities comprising option (i). The optimal path travel

cost function for node i as a whole, Ci(w)(t), will be given by the minimum of Cinw(nw)(t), Cimw(nw)(t), and Ciw(w)(t). We can therefore simplify the computation of Ci(w)(t), since these three virtual optimal path functions will be easier to compute individually.
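As a concrete illustration, the node-splitting step described above can be sketched as follows. The graph representation (a set of arc pairs) and the tag names are hypothetical, and the accompanying adjustments to the network data functions (such as folding ubwi(t) into the travel times of arcs leaving the maximum-waiting copy) are omitted.

```python
def split_waiting_nodes(arcs, waiting_nodes):
    """Replace each node i at which waiting is allowed by three virtual
    nodes (i, 'nw'), (i, 'mw'), (i, 'w').  Every arc incident to i is
    duplicated for each copy, so each virtual node is linked to the same
    neighboring nodes as the original node i.  The data-function
    adjustments are not shown in this sketch."""
    TAGS = ("nw", "mw", "w")

    def copies(v):
        return [(v, tag) for tag in TAGS] if v in waiting_nodes else [v]

    return {(u2, v2) for (u, v) in arcs
                     for u2 in copies(u) for v2 in copies(v)}
```

For example, splitting node 2 in a two-arc network 1 → 2 → 3 yields three copies of node 2, each with the same incoming and outgoing neighbors.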

After the transformation, our network will contain two types of nodes: nodes i for which

waiting is prohibited, for which Ci(w)(t) = Ci(nw)(t), and nodes j where waiting is allowed,

but for which Cj(w)(t) can be computed in a simple fashion based on option (ii) above.

We will call the set of transformed nodes where waiting is prohibited non-waiting nodes,

and denote this set by Nnw. Similarly, we call the set of nodes at which special restricted

waiting is allowed waiting nodes, and we denote this set by Nw. Additionally, we

describe each piece-wise boundary time of the function WCFi(t + τ) where τ ∈ [0,

ubwi(t)] as a feasible waiting departure time, and we let FWDTi(t) denote the set of all

feasible waiting departure times from a waiting node i, where waiting begins at time t.

The optimal path cost Ci(w)(t) for each waiting node i ∈ Nw is obtained by selecting the

best feasible waiting departure time:

Ci(w)(t) = min_{τ ∈ FWDTi(t)} WCFi(τ) − cwci(t)    ∀i ∈ Nw. (6.17)

It will be of importance to our chronological scan approaches that the minimization

expression in (6.17) yields the same minimum value as the value of t increases or

decreases slightly, provided that the set of feasible waiting departure times FWDTi(t)

does not change.


We henceforth assume that we are dealing with a transformed network, in which each

node is either a non-waiting node or a waiting node. From an implementation standpoint,

the transformation may be performed either explicitly or implicitly. The latter of these

choices may be preferable since it will reduce storage requirements. After the execution

of the algorithms described in the next two sections, it will be necessary to perform a

straightforward reverse transformation to recover the solution functions Ci(w)(t), Ci(nw)(t), Ni(t), and Wi(t) for the original network.

6.2.2 Computing the Infinite-Time Shortest Path Tree

Using Lemma 4.1 and the same reasoning as in Section 6.1.1, one may argue that as time

approaches infinity, the union of optimal paths departing from each node i ∈ N will form

a static shortest path tree, which we denote T∞. We will develop an algorithm for

computing T∞. This algorithm will be very similar to the analogous algorithm presented

in Section 6.1.1.

Since we may restrict our focus to only the final linear pieces of each arc travel time and

travel cost function, for simplicity of notation let αij(c) = α(cij, P(cij)) and βij(c) = β(cij, P(cij)), and also let αij(d) = α(dij, P(dij)) and βij(d) = β(dij, P(dij)). As t approaches infinity we will then have cij(t) = αij(c) t + βij(c) and dij(t) = αij(d) t + βij(d). Since waiting is never

beneficial along paths departing at times larger than t+ (given by Lemma 4.1), we will

have Wi(t) = 0 and Ci(w)(t) = Ci(nw)(t) for all times t > t+. Furthermore, since each node will exhibit linear departure behavior over this interval, we can write the optimal path travel time functions for t > t+ as simple linear functions of time, which we denote by Ci(w)(t) = Ci(nw)(t) = αi t + βi. Substituting into the general all-to-one optimality

conditions given in Equation (4.2), we obtain the following set of new optimality

conditions for determining the topology and simple linear node costs of T∞:

αi t + βi =
    { 0,                                                                          if i = d
    { min_{j ∈ Ai} [ αij(c) t + βij(c) + αj (t + αij(d) t + βij(d)) + βj ],       if i ≠ d
∀i ∈ N. (6.18)


The optimality conditions given in (6.18) can be written as separate optimality conditions

for each arc. For sufficiently large values of t, all simple linear node path cost functions

must satisfy:

αi t + βi ≤ αij(c) t + βij(c) + αj (t + αij(d) t + βij(d)) + βj    ∀(i, j) ∈ A, i ≠ d (6.19)

αd t + βd = 0 (6.20)

Based on either of the preceding sets of optimality conditions, one can easily modify any

label-setting or label-correcting static shortest path algorithm so that it computes T∞.

The pseudo-code of Algorithm 2 from Section 6.1.1 may be trivially modified to perform

this computation.
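One way to realize such a modification can be sketched as follows, under assumed input conventions: each arc carries only the final linear pieces (αij(c), βij(c), αij(d), βij(d)), and, for sufficiently large t, a linear function αt + β is smaller than α't + β' exactly when (α, β) precedes (α', β') lexicographically. A Bellman-Ford-style label-correcting loop over these pairs then computes T∞, assuming (as in a FIFO-like setting) that 1 + αij(d) > 0 so the relaxation step is monotone. This is an illustrative sketch, not the thesis's Algorithm 2.

```python
from math import inf

def infinite_time_tree(nodes, arcs, dest):
    """Compute T-infinity: for each node i, the linear label (alpha_i, beta_i)
    with C_i(t) = alpha_i * t + beta_i for sufficiently large t, plus the
    optimal outgoing arc.  arcs maps (i, j) -> (ac, bc, ad, bd), the final
    linear pieces of c_ij and d_ij (hypothetical input format)."""
    label = {i: (inf, inf) for i in nodes}
    label[dest] = (0.0, 0.0)
    succ = {i: None for i in nodes}
    for _ in range(len(nodes) - 1):          # Bellman-Ford passes
        changed = False
        for (i, j), (ac, bc, ad, bd) in arcs.items():
            if i == dest or label[j] == (inf, inf):
                continue
            aj, bj = label[j]
            # cost of departing i at time t: c_ij(t) + C_j(t + d_ij(t)),
            # expanded into a slope/intercept pair per Equation (6.18)
            cand = (ac + aj * (1 + ad), bc + aj * bd + bj)
            if cand < label[i]:              # lexicographic comparison
                label[i] = cand
                succ[i] = j
                changed = True
        if not changed:
            break
    return label, succ
```

On a small example with constant (slope-zero) data, the loop correctly prefers a cheaper two-arc path over a direct arc.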

6.2.3 Computing Linear Node Intervals

As in the FIFO all-to-one algorithm, we proceed to compute all LNIs using a backward

chronological scan through time. Suppose that, for some time tcurrent, we know the value of Ci(w)(t), Ci(nw)(t), Ni(t), and Wi(t) for all values of time t > tcurrent. Furthermore suppose that we know the simple behavior of these functions for all times t ∈ (tcurrent - δ, tcurrent], where δ is taken to be small enough such that all nodes exhibit linear departure behavior over the region (tcurrent - δ, tcurrent]. Initially, we can take tcurrent to be +∞, and we can set the initial values of all solution functions using the topology and linear node costs obtained from the computation of T∞ in the previous section.

As in the FIFO case, we proceed to compute an earlier time tnew < tcurrent to which one can

move in time, such that all nodes are guaranteed to maintain their current linear departure

behavior over the region of time t ∈ (tnew, tcurrent] . For any node with an LNI that is

closed off at time tnew, we will record this LNI and compute the next LNI beginning at

that node. After this computation, we will know the value of the solution functions for all

t > tnew, and their simple behavior for all t ∈ (tnew - δ, tnew] , and we can then set tcurrent to

the value of tnew. Repeating this process until tnew = -∞ will generate the complete set of

LNIs characterizing an optimal solution.


Based on the optimality conditions (4.1) and (4.2) of the all-to-one problem, and the

discussion of waiting cost simplification given in Section 6.2.1, we can show that there

are seven conditions which determine bounds on the value of tnew. All solution functions

will retain their simple linear behavior while reducing the value of t from tcurrent as long

as:

Basic Conditions:

(i) the value of t remains within the same linear piece of dij(t), for each arc (i, j) ∈ A,

(ii) the value of t remains within the same linear piece of cij(t), for each arc (i, j) ∈ A,

(iii) the value of t + dij(t) remains within the same linear piece of Cj(w)(t), for each arc (i, j) ∈ A,

(iv) the optimal immediate departure outgoing arc (i, Ni(tcurrent)) is selected for each node i ∈ N by the minimization expressions in Equation (4.2),

Waiting Conditions:

(v) the value of t remains within the same linear piece of ubwi(t) for all waiting nodes,

(vi) the value of t remains within the same linear piece of cwci(t) for all waiting nodes, and

(vii) for all waiting nodes, the interval of feasible waiting at that node must contain the same set of feasible waiting departure times as FWDTi(tcurrent), so Equation (6.17) will produce a simple linear function.

As with the FIFO all-to-one algorithm, we maintain a set of variables for each node and

arc which will hold indices into the current linear pieces of various network data and

solution functions. These index variables are listed below; the set of such variables we

must maintain in the general case is quite a bit more extensive than that in the FIFO case.


Node Index Variables:

CPi(w) : Index of the current linear piece of Ci(w)(t), at t = tcurrent.
CPi(nw) : Index of the current linear piece of Ci(nw)(t), at t = tcurrent.
CPi(ubw) : Index of the current linear piece of ubwi(t), at t = tcurrent.
CPi(cwc) : Index of the current linear piece of cwci(t), at t = tcurrent.
CWPi(C) : Index of the current “maximal waiting” piece of Ci(nw)(t + ubwi(t)) at t = tcurrent.
CWPi(cwc) : Index of the current “maximal waiting” piece of cwci(t + ubwi(t)) at t = tcurrent.

Arc Index Variables:

CPij(d) : Index of the current linear piece of dij(t), at t = tcurrent.
CPij(c) : Index of the current linear piece of cij(t), at t = tcurrent.
CDPij : Index of the current “downstream” linear piece of Cj(w)(t) at t = tcurrent + dij(tcurrent).

In addition to these indices, we will maintain for each node i the set of feasible waiting

departure times FWDTi(t) at time tcurrent. This set of waiting times is implemented as a

queue for the following reason: the set FWDTi(t) represents all piece-wise boundary

points of the waiting cost function Ci(nw)(τ) + cwci(τ) over the contiguous window of time

τ ∈ [t, t + ubwi(t)] . As we decrease t over the course of the algorithm, the lower

endpoint of this window will decrease, and the upper endpoint will either remain constant

or decrease in time; as the algorithm progresses, we will therefore add new piece-wise

boundaries to FWDTi(t) as they enter the lower end of this window, and we will remove

these boundaries when they exit from the upper end of the window. In addition to

feasible waiting departure times, the value at each boundary point of the waiting cost

function Ci(nw)(τ) + cwci(τ) is stored along with its corresponding waiting departure time

τ in the queue FWDTi(t).

We can write the following mathematical bounds on tnew based on conditions (i)…(vii)

above.

(i) tnew ≥ B(dij, CPij(d) − 1)    ∀(i, j) ∈ A, i ≠ d (6.21)

(ii) tnew ≥ B(cij, CPij(c) − 1)    ∀(i, j) ∈ A, i ≠ d (6.22)


(iii) tnew ≥ (1/ρij) [ B(Cj(w), CDPij − 1) − β(dij, CPij(d)) ]    if ρij > 0
      tnew ≥ (1/ρij) [ B(Cj(w), CDPij) − β(dij, CPij(d)) ]        if ρij < 0
      ∀(i, j) ∈ A, i ≠ d, and ρij ≠ 0, (6.23)
      with ρij = 1 + α(dij, CPij(d))

(iv) tnew ≥ (1/σij) [ β(Ci(nw), CPi(nw)) − β(Cj(w), CDPij) − α(Cj(w), CDPij) β(dij, CPij(d)) − β(cij, CPij(c)) ]
     ∀(i, j) ∈ A, i ≠ d, and σij > 0, (6.24)
     with σij = α(cij, CPij(c)) − α(Ci(nw), CPi(nw)) + α(Cj(w), CDPij)(1 + α(dij, CPij(d)))

(v) tnew ≥ B(ubwi, CPi(ubw) − 1)    ∀i ∈ Nw (6.25)

(vi) tnew ≥ B(cwci, CPi(cwc) − 1)    ∀i ∈ Nw (6.26)

(vii) tnew ≥ (1/µi) [ max{ B(Ci(nw), CWPi(C) − 1), B(cwci, CWPi(cwc) − 1) } − β(ubwi, CPi(ubw)) ]
      ∀i ∈ Nw, and µi > 0, (6.27)
      with µi = 1 + α(ubwi, CPi(ubw))

Each of the bounds given in (i)…(vii) above is a linear inequality relating the value of tnew

to known quantities. If the denominator ρij in (iii) is zero or if the denominator σij in (iv)

is non-positive for some arcs, then these arcs impose no lower bound on tnew. Similarly,

if the denominator µi in (vii) is zero for some nodes, then these nodes will impose no

lower bound on tnew. We denote by Γij(i), …, Γij(iv) the tightest lower bounds obtainable from each of the respective conditions (i)…(iv) above for a particular arc (i, j) ∈ A, and we let Γij be equal to max{Γij(i), …, Γij(iv)}. Similarly, we denote by Γi(v), …, Γi(vii) the tightest lower bounds obtainable from each of the respective conditions (v)…(vii) above for a particular waiting node i ∈ Nw, and we let Γi denote the expression max{Γi(v), …, Γi(vii)}. We can then write the value of tnew as

tnew = max{ max_{(i, j) ∈ A} Γij , max_{i ∈ Nw} Γi }. (6.28)

In order to facilitate rapid computation, we store the Γij and Γi values associated with

each arc and waiting node in a heap data structure, so the determination of tnew requires

only O(1) time. After the determination of tnew, the behavior of the solution functions

Ci(w)(t), Ci(nw)(t), Ni(t), and Wi(t) will be known for the region of time t > tnew. At time


tnew, we may encounter the closure of existing LNIs at some nodes and the subsequent

creation of new LNIs at these nodes to reflect the changes in the structure and linear cost

of optimal paths departing at times t ∈ (tnew - δ, tnew] , for δ sufficiently small. We now

consider the problem of computing the piece-wise changes to the solution functions at

time tnew.
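One convenient realization of the heap of Γ bounds, sketched here with lazy deletion (an implementation choice not specified by the text): each re-computation of a Γ value simply re-posts it, and stale entries are discarded when the maximum is requested.

```python
import heapq

class BoundHeap:
    """Lazy max-heap over per-arc / per-node lower bounds (the Gamma
    values).  update() re-posts a bound; peek() returns the key holding
    the largest current bound (i.e. t_new and the arc or node responsible
    for it), skipping stale entries.  A sketch, not the thesis's exact
    data structure."""
    def __init__(self):
        self._heap = []          # entries (-bound, key); heapq is a min-heap
        self._current = {}       # key -> most recent bound
    def update(self, key, bound):
        self._current[key] = bound
        heapq.heappush(self._heap, (-bound, key))
    def peek(self):
        while self._heap:
            neg, key = self._heap[0]
            if self._current.get(key) == -neg:
                return key, -neg           # entry is up to date
            heapq.heappop(self._heap)      # stale entry; discard
        return None
```

With this scheme an update costs O(log n) and peeking the maximum is O(1) amortized over the discarded stale entries.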

Only nodes in the set S = { i ∈ N | Γi = tnew, or ∃ j ∈ Ai such that Γij = tnew } are possible

candidates for piece-wise linear changes at time tnew, since conditions imposed on these

nodes or on arcs emanating from only these nodes were responsible for the value of tnew.

For each node i ∈ S, we therefore re-compute the simple behavior of the solution

functions over the interval (tnew - δ, tnew] using the following five steps:

1. We first update the values of the arc index variables CPij(d), CPij(c), and CDPij for all arcs (i, j) ∈ A where i ∈ S, so that they correspond to time t = tnew rather than to t = tcurrent. The update will involve at most a single decrement operation for the indices CPij(d) and CPij(c); however, updating the index CDPij may involve several successive decrements or increments until the desired linear piece of Cj(w)(t + dij(t)) is located, depending on whether or not dij(t) is discontinuous at time tnew. In a FIFO network, it will only be

necessary to decrement these indices.

2. For all nodes i ∈ S we compute the optimal outgoing arc Ni(t) for t ∈ (tnew - δ, tnew]

using the optimality conditions of Equation (4.2) and the fact that the optimal solution function Ci(w)(t) is already known for all nodes i ∈ N and all future times t > tnew. As in

the FIFO case, a degenerate situation arises if there is a tie between several arcs

emanating from the same node for the optimal outgoing arc at exactly time tnew, and this

tie must be broken by the linear coefficients of the simple linear path travel times

experienced by following each potential outgoing arc:

Ni(t) = argmax_{j ∈ γi} [ α(cij, CPij(c)) + α(Cj(w), CDPij)(1 + α(dij, CPij(d))) ],
where γi = argmin*_{j ∈ Ai} [ cij(tnew) + Cj(w)(tnew + dij(tnew)) ]    ∀i ∈ S. (6.29)


3. After the determination of Ni(t) for the interval (tnew - δ, tnew] , optimal path costs for

immediate departure, Ci(nw)(t), may be easily obtained for this interval of time as follows:

Ci(nw)(t) = cij(t) + Cj(w)(t + dij(t)), where j = Ni(t)    ∀i ∈ S. (6.30)

4. For each waiting node i ∈ S ∩ Nw, we compute changes to the set of feasible waiting departure times FWDTi(t) at time tnew. Due to condition (vii), this set of times must remain unchanged over the interval of departure times (tnew, tcurrent]. There may, however, be changes to this set at time tnew: the time tnew is added as a new feasible waiting departure time if Γi(v) = tnew or Γi(vi) = tnew, and time tnew + ubwi(tnew) is removed if Γi(vii) = tnew. This update takes O(1) computation time since the set FWDTi(t) is implemented as a queue.

5. Finally, for each waiting node i ∈ S ∩ Nw we can determine the optimal path costs where waiting is allowed, Ci(w)(t), for t ∈ (tnew - δ, tnew] and the corresponding optimal waiting times Wi(t) over this interval using Equation (6.17). To do this, for each node i ∈ S ∩ Nw we select the best feasible waiting departure time τi from the set of possibilities given in FWDTi(tnew):

τi = argmin_{τ ∈ FWDTi(tnew)} [ cwci(τ) + Ci(nw)(τ) ]    ∀i ∈ S ∩ Nw (6.31)

This minimization operation can be made to take only O(1) amortized time if each queue

of feasible waiting departure times is implemented as an augmented queue which we call

a min-queue, described in Appendix A. For the interval of time t ∈ (tnew – δ, tnew] , we

will then have:

Wi(t) = τi − t    ∀i ∈ S ∩ Nw, (6.32)

Ci(w)(t) = cwci(t + Wi(t)) − cwci(t) + Ci(nw)(t + Wi(t))    ∀i ∈ S ∩ Nw. (6.33)
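The min-queue of Appendix A can be sketched, for instance, with an auxiliary monotone deque. This is one standard construction giving O(1) amortized push, pop, and minimum; the appendix's actual design may differ.

```python
from collections import deque

class MinQueue:
    """Queue with O(1) amortized minimum, as assumed for the sets of
    feasible waiting departure times.  push() adds at the back, pop()
    removes from the front; entries are (key, value) pairs and min()
    reports the entry with the smallest value."""
    def __init__(self):
        self._q = deque()
        self._mins = deque()     # value-increasing candidates for the min
    def push(self, key, value):
        self._q.append((key, value))
        # drop candidates that can never again be the minimum
        while self._mins and self._mins[-1][1] > value:
            self._mins.pop()
        self._mins.append((key, value))
    def pop(self):
        key, value = self._q.popleft()
        if self._mins and self._mins[0] == (key, value):
            self._mins.popleft()
        return key, value
    def min(self):
        return self._mins[0]     # (key, value) with smallest value
```

Each element enters and leaves the auxiliary deque at most once, which is what makes the per-operation cost O(1) amortized.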

We therefore have a method for computing Ci(w)(t), Ci(nw)(t), Ni(t), and Wi(t) for all nodes over the time interval (tnew - δ, tnew]. For any node i ∈ S which transitions to a new LNI at

time tnew, our algorithm must record the completed LNI and initialize a new LNI at node

i. Only a small amount of overhead is required for this task, as LNIs are generated in

chronological order for each node, and therefore may be stored and updated efficiently in

either a linked list or an array data structure. Additionally, the Γij values must be re-computed for all (i, j) ∈ A such that i ∈ S or j ∈ S, and the Γi values must be re-computed

for all i ∈ S, so that a new value of tnew may be determined on the next iteration of the

algorithm.

In the event that tnew reaches -∞, we may terminate our algorithm. As long as tnew is

finite, however, we compute the necessary piece-wise changes of the solution functions at

time tnew, then set tcurrent to the value of tnew and repeat our backward chronological scan

until a complete solution is determined.

Proposition 6.3: The all-to-one chronological scan algorithm correctly solves the

minimum-cost all-to-one dynamic shortest path problem.

Justification of Proposition 6.3 is identical to that of Proposition 6.2, and is therefore

omitted.

6.3 General One-to-All Solution Algorithm

We next develop a symmetric chronological scan algorithm for solving any general

minimum-cost one-to-all dynamic shortest path problem. In contrast with the all-to-one

algorithm, the one-to-all algorithm works by scanning forward through time, starting

from an initial static shortest path tree computed at time t = –∞. A fundamental

difference in the operation of the algorithm is that it is necessary to compute the solution

functions Ci(w)(t), Ci(nw)(t), PNi(t), PTi(t), and Wi(t) not at the source end of each arc and at

the current time tcurrent maintained by the chronological scan, but at the destination

(downstream) end of each arc and at some time in the future determined by the arc arrival

time functions. This situation adds considerably more complexity to the solution

algorithm, since it allows for LNIs to be generated and modified in an arbitrary order

rather than in chronological order in non-FIFO networks. The general one-to-all problem

also presents the only occasion in which singular LNIs may exist – this fact will further

complicate our solution algorithm.


6.3.1 Simplification of Waiting Constraints

In order to simplify the computation of optimal paths in the case where waiting is allowed,

we assume that the network transformation described in Section 6.2.1 has been

performed, so that there are now two types of nodes: non-waiting nodes, from which

immediate departure is required, and waiting nodes, which permit a restricted form of

waiting. Recall also that all network data functions are assumed to be continuous in

networks for which waiting is allowed. As discussed in Section 6.2.1, when determining

an optimal waiting time at a waiting node for the all-to-one algorithm, one need only

consider a finite set of feasible waiting departure times which constitute the piece-wise

boundaries of the waiting cost function Ci(nw)(t) + cwci(t) within a feasible range of

waiting times. In the one-to-all algorithm, we will instead be minimizing over previous

arrival times, so we maintain a set of feasible waiting arrival times, constructed as

follows: for any particular time tcurrent, the optimal waiting path cost value Ci(w)(t) at the

time t = tcurrent is given by the following minimization expression over previous times:

Ci(w)(t) = min_{τ | 0 ≤ t − τ ≤ ubwi(τ)} [ Ci(nw)(τ) − cwci(τ) ] + cwci(t)    ∀i ∈ Nw. (6.34)

Due to the FIFO property, the set {τ | 0 ≤ t − τ ≤ ubwi(τ)} will be a contiguous window of

time, and the endpoints of this window will be non-decreasing functions of t. For

convenience, we will compute an auxiliary function EWATi(t) for each i ∈ Nw, which

represents the earliest waiting arrival time from which one may start waiting at node i,

such that one may depart by or after time t:

EWATi(t) = min { τ | 0 ≤ t − τ ≤ ubwi(τ) }    ∀i ∈ Nw. (6.35)

Computation of the EWATi(t) functions for all nodes is performed in much the same way

as a FIFO inverse is computed, by a single scan over the linear pieces of the ubwi(t)

functions for each node. After this computation, the set {τ | 0 ≤ t − τ ≤ ubwi(τ)} over which minimization takes place in (6.34) may be expressed as the interval [EWATi(t), t].
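As a numerical illustration of (6.35) (not the piece-by-piece scan the text describes), EWATi(t) can also be obtained by bisection whenever τ + ubwi(τ) is nondecreasing, which holds here since the window endpoints are non-decreasing functions of t. The bracketing interval and the callable ubw are assumptions of this sketch.

```python
def ewat(ubw, t, lo, hi, tol=1e-9):
    """Earliest tau with t - tau <= ubw(tau), i.e. tau + ubw(tau) >= t,
    found by bisection under the assumption that g(tau) = tau + ubw(tau)
    is nondecreasing.  ubw is any callable; [lo, hi] must bracket the
    answer.  An illustrative numeric stand-in for the linear-piece scan."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mid + ubw(mid) >= t:
            hi = mid          # mid is feasible; the earliest is at or below
        else:
            lo = mid
    return min(hi, t)         # the waiting window never starts later than t
```

For a constant waiting bound ubw(τ) = 2, the earliest arrival time from which one may still depart at time 5 is τ = 3.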

Due to continuity of network data functions, the minimum in (6.34) over this interval will

be attained either at an endpoint of this interval or at a piece-wise boundary point within

this interval of the waiting cost function, defined as WCFi(t) = Ci(nw)(t) – cwci(t) for the

one-to-all problem. However, due to the simplifying transformation of Section 6.2.1, it is


only necessary to consider piece-wise boundary points when computing this minimum

for each waiting node. If we denote by FWATi(t) the set of feasible waiting arrival times

for node i and time t, defined as all piece-wise boundaries of WCFi(t) within the interval

of time [EWATi(t), t] , then we can re-write (6.34) as follows:

Ci(w)(t) = min_{τ ∈ FWATi(t)} WCFi(τ) + cwci(t)    ∀i ∈ Nw. (6.36)

It will be important to our chronological scan approach that the minimization expression

in (6.36) evaluates to the same result upon slight increases of the value of t, provided that

the set of feasible waiting arrival times FWATi(t) does not change. This will ensure that

Ci(w)(t) will behave as a simple linear function as we scan over any duration of time

during which FWATi(t) does not change.

6.3.2 Computing the Infinite-Time Shortest Path Tree

As in the all-to-one chronological scan algorithm, we initialize all solution functions by

computing a static shortest path tree as time approaches infinity. In the one-to-all case, however, this tree will be computed at t = –∞, since we will then scan forward through

time to generate the remainder of the solution. Since we assume the network becomes

static as time approaches -∞, we may generate this tree, which we call T-∞, and its

associated constant node cost functions, by simply applying a static shortest path

algorithm using the initial static travel cost β(cij , 1) as the cost of each arc (i, j) ∈ A.

Since waiting is never beneficial along paths arriving at times earlier than t – (given by

Lemma 4.1), we have Wi(t) = 0 and Ci(w)(t) = Ci(nw)(t) for sufficiently small t. The

solution functions PNi(t) and PTi(t) are both easily determined from the structure and

optimal path costs of T-∞. We can therefore determine the behavior of all solution

functions for sufficiently small (negative) values of t.

6.3.3 Computing Linear Node Intervals

The one-to-all chronological scan algorithm computes the LNIs comprising all solution

functions using a forward scan through time. We will keep track of the current time

during the scan, tcurrent, initially -∞, and each iteration we will increase tcurrent by some

positive quantity until it reaches +∞ and a complete optimal solution has been


determined. We will assume prior to each chronological time-step by induction that the

functions Ci(w)(t), Ci(nw)(t), Wi(t), PNi(t), and PTi(t) have been correctly computed for all

times t ≤ tcurrent.

In addition to computing the solution function Ci(nw)(t) for each node i ∈ N, we will

compute additional solution functions for each arc (i, j) ∈ A, denoted by Cij(nw)(t), where

Cij(nw)(t) represents the cost of an optimal path arriving at exactly time t into node j via arc

(i, j). For all nodes we will then have

Cj(nw)(t) = min_{i ∈ Bj} Cij(nw)(t)    ∀j ∈ N. (6.37)

For any node j ∈ N and time t, the value PNj(t), the preceding node along an optimal path

arriving at time t, is therefore given as

PNj(t) = argmin_{i ∈ Bj} Cij(nw)(t)    ∀j ∈ N. (6.38)

Furthermore, we will compute for each arc the solution function PTij(t), where PTij(t)

gives the departure time at node i of the best path from the source which arrives at node j

exactly at time t, traveling via arc (i, j). We will then have:

PTj(t) = PTij(t), where i = PNj(t)    ∀j ∈ N. (6.39)
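Equations (6.37) and (6.38) amount to a pointwise minimum over incoming arcs; for one fixed arrival time t this is simply the following (the dict input format is hypothetical):

```python
def combine_arc_labels(arc_costs):
    """Given, for one fixed arrival time t, the per-arc values
    C_ij^(nw)(t) for all arcs (i, j) into node j as a dict {i: cost},
    return the pair (C_j^(nw)(t), PN_j(t)) per (6.37)-(6.38)."""
    best_i = min(arc_costs, key=arc_costs.get)   # preceding node of min cost
    return arc_costs[best_i], best_i
```

The one-to-all scan maintains exactly these per-arc values so that the node-level minimum and its back-pointer can be read off as the scan reaches each arrival time.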

At a particular given time t, we will say that an arc (i, j) ∈ A is favorable if the cost of an

optimal path from the source node to node i and time t, extended by the cost cij(t) of

traveling along arc (i, j) at time t, provides the least cost of any path we know of so far

from the source to node j, arriving at time t + dij(t). If two or more arcs directed into

node j satisfy this condition, we may arbitrarily designate only one of them as favorable;

if arc (i, j) is designated as favorable, then node i will be the “back-pointer” of node j for

an arrival at time t + dij(t); that is, we will have PNj(t + dij(t)) = i and PTij(t + dij(t)) = t.

The following is a partial restatement of the one-to-all optimality conditions, and must be

satisfied for all arcs (i, j) ∈ A. For any favorable arc, the condition is satisfied as an

equality:

Cij(nw)(t + dij(t)) ≤ Ci(w)(t) + cij(t)    ∀(i, j) ∈ A (6.40)


The primary difference between the one-to-all algorithm and the all-to-one algorithm is

that we will be adjusting the solution functions Cij(nw)(t), PNi(t), and PTij(t) not at time

tcurrent, but at future times which correspond to the arrival times one attains when

departing along favorable arcs at time tcurrent. Since this operation will be performed

continually for all arcs during the forward scan through time, and since we will

inductively assume that the optimal values of all solution functions will have been

determined for all times t ≤ tcurrent, we will maintain the invariant that the solution

functions Cij(nw)(t), PNi(t), and PTij(t) for all values of time t later than tcurrent represent the

best attainable solution obtained by extending an optimal path arriving prior to or at time

tcurrent by a single arc. Therefore, we can ensure that these future solution function values

will be optimal when they are finally reached by the chronological scan.

In a similar fashion as in the all-to-one algorithm, the one-to-all algorithm will maintain

during its execution a set of state variables, most of them linear piece index variables,

described as follows. In general, for any time tcurrent, we will keep track of these state

variables for a time just beyond time tcurrent.


Node Variables:

CPi(w) : Index of the current linear piece of Ci(w)(t), for t just past tcurrent.
CPi(nw) : Index of the current linear piece of Ci(nw)(t), for t just past tcurrent.
CPi(EWAT) : Index of the current linear piece of EWATi(t) (described in Section 6.3.1), for t just past tcurrent.
CPi(cwc) : Index of the current linear piece of cwci(t), for t just past tcurrent.
CWPi(nw) : Index of the current “minimal waiting” piece of Ci(nw)(t), at t = EWATi(tcurrent + ε), where ε is very small.
CWPi(cwc) : Index of the current “minimal waiting” piece of cwci(t), at t = EWATi(tcurrent + ε), where ε is very small.
FWATi : Set of all feasible waiting arrival times (as described in Section 6.3.1) for node i and a time t just past tcurrent, maintained as a min-queue (see Appendix A).
OPNi : Optimal preceding node of node i along an optimal path from the source which arrives just past time tcurrent at node i.
BDRYi : Boolean variable which is true if time tcurrent marks the boundary between LNIs for node i.

Arc Variables:

CPij(d) : Index of the current linear piece of dij(t), for t just past tcurrent.
CPij(c) : Index of the current linear piece of cij(t), for t just past tcurrent.
CDPij : Index of the current “downstream” linear piece of Cj(w)(t) at t = aij(tcurrent + ε), where ε is very small.
CPij(nw) : Index of the current linear piece of Cij(nw)(t), for t just past tcurrent.
CPij(PT) : Index of the current linear piece of PTij(t), for t just past tcurrent.
Fij : Boolean variable which is true if (i, j) is a favorable arc at a departure time t just past tcurrent.
INITij : If Fij is true, then this variable holds the value of aij(t0), where t0 is the earliest point in the current interval of time for which arc (i, j) is favorable, as explained below.

For the purposes of the one-to-all algorithm, we will say that an arc (i, j) is favorable over

an interval of departure times if the arc travel time and cost functions dij(t) and cij(t) are

simple linear functions over this interval. Therefore, any arc which is favorable over an interval of time between two times t0 and t1 will cause an adjustment

which improves the solution functions Cij(nw)(t), PTij(t), and PNj(t) over the contiguous

interval of arrival times between aij(t0) and aij(t1). If, while advancing through time, we

cross into a new linear piece of either dij(t) or cij(t) at some time t0 but the arc (i, j)

remains favorable, we say that in this case the initial point at which (i, j) became

favorable, INITij, is reset to t0. Under this convention, INITij always gives the starting


endpoint of a contiguous region of arrival times during which solution functions are

being modified by a favorable arc (i, j).
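The bookkeeping for Fij and INITij described above can be sketched as follows. This is a hypothetical fragment: the class name, the driver that detects linear piece boundaries of dij(t) and cij(t), and the arrival-time function aij are all assumptions for illustration, not the thesis implementation.

```python
# Hypothetical sketch of the favorable-arc bookkeeping; the driver that detects
# linear piece boundaries of d_ij(t) and c_ij(t), and the arrival-time function
# a_ij, are assumed to exist elsewhere.

class ArcState:
    def __init__(self):
        self.favorable = False   # F_ij
        self.init = None         # INIT_ij, i.e. a_ij(t0) for the current regime

    def on_piece_boundary(self, t0, still_favorable, a_ij):
        """Called when the scan crosses into a new linear piece of d_ij or c_ij
        at time t0; if the arc stays favorable, the favorable interval restarts."""
        if still_favorable:
            self.favorable = True
            # INIT_ij always marks the start of the current contiguous window
            # of arrival times being adjusted by this favorable arc
            self.init = a_ij(t0)
        else:
            self.favorable = False
            self.init = None
```

Under this convention, comparing INITij against tcurrent (as in conditions (i)-(iii) below) tells us whether the current value of t can lie inside a window of modification.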

Assume now that for some time tcurrent, we know the correct optimal values of all solution

functions for all times t ≤ tcurrent. Furthermore, suppose all state variables are determined

so as to be valid for a time which is infinitesimally greater than tcurrent. We try to

determine a time tnew > tcurrent to which one can advance such that all solution functions

maintain linear departure behavior and such that the set of favorable arcs remains

unchanged during the interval of time (tcurrent, tnew).

By examining equations (6.37), (6.38), and (6.39), we see that the solution functions

Ci(nw)(t) and PTi(t) will maintain simple linear behavior and the function PNi(t) will

maintain simple behavior while increasing the value of t over the region of time (tcurrent,

tnew) as long as the following conditions are satisfied:

(i) the value of t remains within the same linear piece of Cij(nw)(t) for each arc (i, j) ∈ A.

(ii) the same optimal incoming arc is selected by the minimization expression in (6.38).

(iii) the value of t remains within the same linear piece of PTij(t) for each arc (i, j) ∈ A.

Further explanation is necessary to describe some subtle details related to conditions (i), (ii), and (iii). If an arc (i, j) is not a favorable arc (that is, if Fij is false), then these

conditions pose no difficulty. However, if the arc is favorable, then as one advances the

value of a time t during the interval (tcurrent, tnew) , we know that the solution functions

Cij(nw)(t), PTij(t), and PNj(t) are being simultaneously adjusted at future points in time

given by aij (t), in order to reflect the presence of a better path to node j via a departure

along (i, j) at time t. However, we will actually perform this adjustment of these

functions after the value of tnew is discovered. Hence, for the purposes of evaluating

conditions (i)…(iii), we must take into account the fact that these functions are

supposedly being simultaneously modified during the scan through time. In the event

that (i, j) is favorable over an interval of departure times between two times t0 and t1,

where t0 ≤ t1, we will be able to lower the solution function Cij(nw)(t) over the interval of


arrival times aij(t0) to aij (t1). Using the state variable INITij, we keep track of the endpoint

aij(t0). In a non-FIFO network, this may be either the upper or lower endpoint of the

window of time over which Cij(nw)(t) is modified. When evaluating the conditions above,

we must therefore ensure that for times t during this window, we use the anticipated

modified values of the functions Cij(nw)(t) and PTij(t). Fortunately, due to our assumption

of a positive lower bound on all arc travel time functions, the only scenario for which we

will need to use anticipated values of Cij(nw)(t) and PTij(t) is the case where INITij is less

than or equal to tcurrent. In this situation, the value of t will be contained within the

window of modification of these solution functions. Otherwise, if INITij is greater than

tcurrent, then this will not be the case, and the remaining conditions which we impose on

the computation of tnew will prevent the value of t from advancing into a window of

modification of the solution functions. In short, if for any favorable arc (i, j) we have

INITij ≤ tcurrent, then we must use the “anticipated” values of Cij(nw)(t) and PTij(t) rather than the actual values of these functions when evaluating conditions (i), (ii), and (iii).

Next, for the solution functions which involve waiting, we will have over the interval of

time (tcurrent, tnew):

Ci(w)(t) = min over τ ∈ FWATi(t) of [ Ci(nw)(τ) − cwci(τ) ] + cwci(t)   ∀i ∈ N, (6.41)

Wi(t) = argmin over τ ∈ FWATi(t) of [ Ci(nw)(τ) − cwci(τ) ]   ∀i ∈ N. (6.42)

Equations (6.41) and (6.42) indicate that the solution functions Ci(w)(t) and Wi(t) will

exhibit simple linear behavior while increasing t within the interval (tcurrent, tnew) as long

as Ci(nw)(t) remains a simple linear function, which is covered by the above conditions,

and as long as:

(iv) the value of t remains within the same linear piece of cwci(t) for each waiting node i ∈ Nw.

(v) no new entries are added to the feasible set of waiting arrival times FWATi for any i ∈ Nw. (Removals from this set are covered by the preceding cases.) No new entries are added to this set as long as t remains within the same linear piece of EWATi(t), and as long as EWATi(t) remains within the same linear pieces of cwci(t) and Ci(nw)(t).
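The amortized O(1) behavior assumed for the set FWATi rests on the min-queue of Appendix A. One standard realization of such a structure is a monotone deque, sketched here for illustration; the exact structure described in Appendix A may differ in its details.

```python
from collections import deque

class MinQueue:
    """FIFO queue reporting its minimum in amortized O(1) time per operation.
    A standard monotone-deque realization, assumed here to be of the same
    flavor as the min-queue of Appendix A."""
    def __init__(self):
        self.q = deque()      # all (key, value) entries in arrival order
        self.mins = deque()   # entries that can still become the minimum

    def push(self, key, value):
        self.q.append((key, value))
        while self.mins and self.mins[-1][1] > value:
            self.mins.pop()   # dominated entries can never be the minimum
        self.mins.append((key, value))

    def pop_front(self):
        key, value = self.q.popleft()
        if self.mins and self.mins[0][0] == key:
            self.mins.popleft()
        return key, value

    def min(self):
        return self.mins[0][1]
```

Each entry enters and leaves each deque at most once, which is the source of the amortized O(1) bound used in the running time analysis of Section 6.4.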


Finally, Equation (6.40) must be satisfied by all arcs, and satisfied as an equality for all

favorable arcs. The same set of arcs will therefore remain favorable while increasing t

within the interval (tcurrent, tnew) as long as:

(vi) the value of t remains within the same linear piece of dij(t) for each arc (i, j) ∈ A.

(vii) the value of t remains within the same linear piece of cij(t) for each arc (i, j) ∈ A.

(viii) the value of t + dij(t) remains within the same linear piece of Cj(w)(t) for each arc (i, j) ∈ A.

(ix) Equation (6.40) is satisfied by all arcs (i, j) ∈ A, and satisfied as an equality for all favorable arcs.

We proceed to write the above conditions mathematically.

(Recall that B(f, k), α(f, k), and β(f, k) denote respectively the upper time boundary, the slope, and the intercept of linear piece k of a piece-wise linear function f.)

(i) tnew ≤ B(Cij(nw), CPij(nw))   ∀(i, j) ∈ A, j ≠ s, with Fij false or INITij > tcurrent (6.43)

(ii) tnew ≤ (1/ρij)[β(Cij(nw), CPij(nw)) − β~]   ∀(i, j) ∈ A, j ≠ s, where ρij > 0 (6.44)

with ρij = α~ − α(Cij(nw), CPij(nw)), where α~ and β~ are the slope and intercept of the line currently defining the solution at node j, obtained from the optimal incoming arc (ĩ, j), ĩ = OPNj:

α~ = [α(Cĩ(w), CPĩ(w)) + α(cĩj, CPĩj(c))] / [1 + α(dĩj, CPĩj(d))] if Fĩj is true and INITĩj ≤ tcurrent, and α~ = α(Cĩj(nw), CPĩj(nw)) otherwise;

β~ = β(Cĩ(w), CPĩ(w)) + β(cĩj, CPĩj(c)) − α~ β(dĩj, CPĩj(d)) if Fĩj is true and INITĩj ≤ tcurrent, and β~ = β(Cĩj(nw), CPĩj(nw)) otherwise.

(iii) tnew ≤ B(PTij, CPij(PT))   ∀(i, j) ∈ A, j ≠ s, with Fij false or INITij > tcurrent (6.45)

(iv) tnew ≤ B(cwci, CPi(cwc))   ∀i ∈ Nw (6.46)

(v.a) tnew ≤ B(EWATi, CPi(EWAT))   ∀i ∈ Nw (6.47)

(v.b) tnew ≤ [B(Ci(nw), CWPi(nw)) − β(EWATi, CPi(EWAT))] / α(EWATi, CPi(EWAT))   ∀i ∈ Nw with α(EWATi, CPi(EWAT)) > 0 (6.48)

(v.c) tnew ≤ [B(cwci, CWPi(cwc)) − β(EWATi, CPi(EWAT))] / α(EWATi, CPi(EWAT))   ∀i ∈ Nw with α(EWATi, CPi(EWAT)) > 0 (6.49)

(vi) tnew ≤ B(dij, CPij(d))   ∀(i, j) ∈ A, j ≠ s (6.50)

(vii) tnew ≤ B(cij, CPij(c))   ∀(i, j) ∈ A, j ≠ s (6.51)

(viii) tnew ≤ (1/σij)[B(Cj(w), CDPij) − β(dij, CPij(d))] if σij > 0, and tnew ≤ (1/σij)[B(Cj(w), CDPij − 1) − β(dij, CPij(d))] if σij < 0,   ∀(i, j) ∈ A, j ≠ s, where σij ≠ 0, with σij = 1 + α(dij, CPij(d)) (6.52)

(ix.a) tnew ≤ (1/μij)[β(Cj(w), CDPij) + α(Cj(w), CDPij) β(dij, CPij(d)) − β(Ci(w), CPi(w)) − β(cij, CPij(c))]   ∀(i, j) ∈ A, j ≠ s, Fij true, μij > 0 (6.53)

(ix.b) tnew ≤ (1/μij)[β(Cj(w), CDPij) + α(Cj(w), CDPij) β(dij, CPij(d)) − β(Ci(w), CPi(w)) − β(cij, CPij(c))]   ∀(i, j) ∈ A, j ≠ s, Fij false, μij < 0 (6.54)

with μij = α(Ci(w), CPi(w)) + α(cij, CPij(c)) − α(Cj(w), CDPij)[1 + α(dij, CPij(d))].

Each of the above conditions is a linear inequality relating the value of tnew to known quantities. Let Γij(i), Γij(ii), Γij(iii), Γij(vi), Γij(vii), Γij(viii), Γij(ix.a), and Γij(ix.b) be the smallest upper bounds respectively given by conditions (i), (ii), (iii), (vi), (vii), (viii), (ix.a), and (ix.b) above for a particular arc (i, j) ∈ A, and let Γij denote the minimum of all of these bounds for a particular arc. Similarly, let Γi(iv), Γi(v.a), Γi(v.b), and Γi(v.c) be the smallest upper bounds given for each waiting node i ∈ Nw by the respective conditions (iv), (v.a), (v.b), and (v.c), and let Γi denote the minimum of all of these bounds. We then have the following means of computing tnew:

tnew = min{ min over (i, j) ∈ A of Γij , min over i ∈ Nw of Γi }. (6.55)

As in the all-to-one algorithm, we store the Γij and Γi values in a heap, so this

computation can be performed in O(1) time.
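The heap of Γ values can be sketched as follows. This is an illustrative fragment: the lazy-invalidation scheme shown here is an assumption for the sketch, not necessarily the heap implementation used in the thesis.

```python
import heapq

class BoundHeap:
    """Keeps the current upper bound Gamma for each arc or node, and returns
    their minimum - the next event time t_new of equation (6.55). Stale heap
    entries are discarded lazily (an assumed implementation strategy)."""
    def __init__(self):
        self.heap = []          # (bound, key) pairs, possibly stale
        self.current = {}       # key -> latest bound

    def update(self, key, bound):
        """O(log n) per update, as counted in the running time analysis."""
        self.current[key] = bound
        heapq.heappush(self.heap, (bound, key))

    def next_event(self):
        """t_new = minimum over all valid bounds; the minimum itself is read
        from the top of the heap once stale entries are skipped."""
        while self.heap and self.current.get(self.heap[0][1]) != self.heap[0][0]:
            heapq.heappop(self.heap)    # discard superseded entries
        return self.heap[0][0] if self.heap else float('inf')
```

A returned value of +∞ corresponds to the termination condition of the scan, when no further bound constrains tnew.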

Recall that the invariant we maintain is that all solution functions are optimally

determined for t ≤ tcurrent and that the current or “anticipated” functions Cij(nw)(t), PNi(t),

and PTij(t) for all values of time t later than tcurrent represent the best attainable solution

obtained by extending an optimal path arriving prior to or at time tcurrent by a single arc.

Since all arc travel times are by assumption greater than some positive constant ε, it

follows from the invariant that we will know the optimal values of Cij(nw)(t), PNi(t), and


PTij(t) for all values of time t ∈ (tcurrent, tcurrent + ε). However, since all solution functions

are guaranteed to exhibit linear departure behavior over the interval (tcurrent, tnew), we will

therefore know the optimal values of Cij(nw)(t), PNi(t), and PTij(t) for times in the entire

interval t ∈ (tcurrent, tnew), and therefore for all times t < tnew. In order to advance the value

of tcurrent while maintaining the invariant above and while maintaining the necessary set of

node and arc state variables, we perform the following 3 steps:

1. Compute the values of all solution functions Ci(nw)(t), Ci(w)(t), PNi(t), PTi(t), and Wi(t) for the interval of time t ∈ (tcurrent, tnew). The time tcurrent may mark the boundary between

LNIs at one or more nodes; however, only nodes i ∈ N for which BDRYi is true will

transition to a new LNI at time tcurrent. All other nodes will carry over their previous

simple linear behavior from t = tcurrent into the region of time (tcurrent, tnew), so we therefore

only need to re-compute solution functions for nodes for which BDRYi is true. We

denote this set of nodes by NB. For any arc (i, j), where j ∈ NB, which is favorable over

the interval (tcurrent, tnew), we must first update the solution functions Cij(nw)(t) and PTij(t) so

as to satisfy Equation (6.40) over this interval. Hence, these functions reflect the

presence of the better path to node j via arc (i, j). The following set of equations is then

used to compute all solution functions over the interval. Each of these equations takes

only O(1) amortized time to apply, due to the amortized performance of the min-queue

data structure representing the set FWATi for each node:

Cj(nw)(t) = Cij(nw)(t), where i = OPNj   ∀j ∈ NB (6.56)

PTj(t) = PTij(t), where i = OPNj   ∀j ∈ NB (6.57)

PNj(t) = OPNj   ∀j ∈ NB (6.58)

Cj(w)(t) = min over τ ∈ FWATj(t) of [ Cj(nw)(τ) − cwcj(τ) ] + cwcj(t)   ∀j ∈ NB (6.59)

Wj(t) = argmin over τ ∈ FWATj(t) of [ Cj(nw)(τ) − cwcj(τ) ]   ∀j ∈ NB (6.60)

Before Equations (6.59) and (6.60) are evaluated, the time tcurrent, together with the corresponding waiting cost value Cj(nw)(tcurrent) − cwcj(tcurrent), should for all j ∈ NB be added to the set of feasible waiting arrival times, since time tcurrent will be a piece-wise boundary point of the waiting cost function.


2. Compute the values of all solution functions Ci(nw)(t), Ci(w)(t), PNi(t), PTi(t), and Wi(t)

at exactly time tnew, and adjust the values of all future solution functions Cij(nw)(t), PNi(t),

and PTij(t) so that they represent the best attainable solution obtained by extending an

optimal path arriving exactly at time tnew by a single arc. This computation need only be

performed for nodes within the set S = i ∈ N | Γi = tnew, or ∃ j ∈ Ai such that Γij = tnew or

∃ i′ ∈ Bi such that Γi′i = tnew, since only nodes within this set were responsible for the

current value of tnew. After this computation, the state variables BDRYi for all nodes will

be set to true only for nodes within the set S, since these nodes will have potentially

undergone a transition to a new LNI at time tnew. The same set of equations given in

(6.56)…(6.60) is used to perform the solution function update at exactly time tnew. It

may be necessary, however, to adjust some of the linear piece index variables prior to this

computation.

3. Update all state variables so that they reflect a time just after tnew. This step is

straightforward, and performed in the same manner as the corresponding update step in

the all-to-one algorithm. Linear piece index variables are successively incremented or

decremented until they are found to contain a desired point in time.

After the completion of step 3, we may set the value of tcurrent to tnew and repeat the entire

sequence of steps again – the invariant described above will be maintained for the new

value of tcurrent. In the event that tnew reaches +∞, we may terminate the algorithm, having

computed a complete optimal solution.
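The overall structure of one advance of the scan can be sketched as follows. This is a hypothetical driver: the three step functions stand for Steps 1-3 above and are assumed to be supplied by the rest of the implementation.

```python
# Skeleton of the one-to-all chronological scan loop (hypothetical driver):
# compute_t_new implements equation (6.55); step1, step2, and step3 stand for
# the three update steps described in the text.

def one_to_all_scan(t_start, compute_t_new, step1, step2, step3,
                    horizon=float('inf')):
    t_current = t_start
    while t_current < horizon:
        t_new = compute_t_new(t_current)
        if t_new == float('inf'):
            break                       # complete optimal solution computed
        step1(t_current, t_new)   # extend solution over (t_current, t_new)
        step2(t_new)              # recompute solution functions at exactly t_new
        step3(t_new)              # advance state variables just past t_new
        t_current = t_new         # invariant holds for the new t_current
    return t_current
```

The loop terminates either at a supplied finite horizon or when no condition bounds tnew, corresponding to tnew reaching +∞ in the text.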

Proposition 6.4: The one-to-all chronological scan algorithm correctly solves the

minimum-cost one-to-all dynamic shortest path problem.

Proof: We assert that the algorithm maintains the invariant that for any time tcurrent, all

node and arc state variables are correctly maintained, all solution functions are optimally

determined for t ≤ tcurrent, and that the functions Cij(nw)(t), PNi(t), and PTij(t) for all values

of time t later than tcurrent represent the best attainable solution obtained by extending an

optimal path arriving prior to or at time tcurrent by a single arc. Each iteration of the


algorithm increases tcurrent by some positive amount, and since by Proposition 4.2 the

solution to the one-to-all problem always consists of a finite number of LNIs, the

algorithm will terminate, and it will terminate with an optimal solution.

6.4 Running Time Analysis

In this section we analyze the worst-case and average-case running time of the previous

chronological scan algorithms. For the case of computing minimum-time all-to-one paths

through a FIFO network, we will prove a strongly polynomial bound of O(mP*log n) on

worst-case running time. As with many other network optimization algorithms, this

polynomial bound is expected to be very loose – we will argue that the average-case

running time for sparse planar networks, such as transportation networks, should be

O(n^1.5 log n) under certain reasonable assumptions. By a planar network, in this thesis we

mean a network model which represents a problem that has a two-dimensional character

with respect to the spatial orientation of nodes. Arcs may conceivably cross each other,

but it is assumed that arc travel times are generally highly-correlated with Euclidean

distance.

The general all-to-one and one-to-all problems are shown to be NP-Hard, although we

will argue that for most practical cases running time should be quite reasonable. This

theoretical analysis may be compared with the results of computational testing, presented

in Chapter 8.

It is often reasonable to make the assumption that each arc travel time function has at

most a constant number of pieces, since for most applications the complexity of these

functions should not necessarily scale with the size of the network. Under this

assumption, we will be able to show strong polynomial bounds on running time for the

minimum-time all-to-one algorithm in a FIFO network.

Proposition 6.5: The worst-case asymptotic running time of the all-to-one chronological

scan algorithm, when used to compute minimum-time paths in a FIFO network, is

O(mP*log n).


Corollary: We have the following:

• If P(dij) is O(1) for all (i, j) ∈ A, then the worst-case running time is O(m^2 log n).

• If G is a sparse, bounded-degree network and if P(dij) = O(1) for all (i, j) ∈ A, then the worst-case running time is O(n^2 log n).

• Since the all-to-one algorithm can be applied to solve the minimum-time one-to-all problem for all departure times by Proposition 4.3, the above results hold for this problem as well.

Proof: Initialization, including the computation of T∞, requires the same amount of

running time as a static shortest path algorithm, which is absorbed by the total

O(mP*log n) running time bound as long as a suitable polynomial-time modified static shortest path algorithm, such as Dijkstra's algorithm based on a heap, is used. During

each subsequent iteration of the main loop, O(1) time is required to determine a new

value of tnew, and then O(log n) time is required to update in the heap the Γij value for any

arc (i, j) directed into or out of a node for which an LNI boundary or arc travel time

boundary occurs at time tnew. Since, by Lemma 4.10, there is a bound of O(P*) on the

number of LNIs at each node, there will be a bound of O(P*) on the number of heap

updates required for the values of Γij corresponding to each arc (i, j) ∈ A. The total

number of arc updates is therefore O(mP*), each requiring O(log n) time, for a total

running time of O(mP*log n) in the worst case. If the transformation techniques given by

Proposition 4.3 are used to solve a one-to-all problem for all departure times as an all-to-

one problem, then the cost of transformation will always be dominated by the running

time of the all-to-one solution algorithm.

Proposition 6.6: For sparse, bounded-degree FIFO networks, the total running time of

the all-to-one and one-to-all chronological scan algorithms is O((I + S)log n), where I

gives the number of linear pieces present among all network data functions provided as

input, and where S gives the total number of LNIs among all solution functions produced

as output.


Corollary: For sparse, bounded-degree FIFO networks, the running time of the all-to-

one and one-to-all chronological scan algorithms is no more than a factor of log n larger

than the best possible achievable running time for any solution algorithm.

Proof: For a FIFO network, LNIs for each node will be chronologically generated, and

linear pieces of each network data function will be examined chronologically. Upon the

generation of each LNI or the switch to a new linear piece of any network data function,

at most O(1) of the Γi and Γij values will be updated since the degree of the network is

bounded. Since these values are stored in a heap, there is a cost of O(log n) in running

time for each update, for a total running time of O((I + S)log n). The presence of waiting

constraints and costs will not influence this asymptotic running time, due to the amortized

performance of the min-queue data structure.

Since any solution algorithm must examine all I linear pieces of the network data

functions provided as input, and compute a total of S LNIs as output, we have a lower

bound of Ω(I + S) on running time, and we therefore conclude that if there were a better

algorithm for sparse bounded-degree FIFO networks, its asymptotic running time could

be no more than a factor of log n faster than that of the chronological scan algorithms.

Proposition 6.7: The minimum-cost all-to-one and one-to-all problems, and the

minimum-time all-to-one and one-to-all problems in a non-FIFO network, are all NP-

Hard.

Proof: The reader is referred to a well-written proof by Sherali, Ozbay, and Subramanian in [18], in which a reduction is given from the subset-sum problem, known to be NP-Hard.

The subset-sum problem is stated thus: given a set of N positive integers A1…AN, partition

these integers (if possible) into two subsets such that the sums of the elements in both

subsets are equal. Sherali, Ozbay, and Subramanian construct a “ladder network”

consisting of a series of N+1 primary nodes and a destination node, such that two

possible feasible routes connect primary node i to primary node i + 1: one route with a

static travel time of one, the other with a static travel time of 1 + Ai. Waiting in this


network is forbidden. The final primary node is connected to the destination node with a

time-dependent arc, the travel time of which is given by:

d(t) = S + 2 if t ≤ N + S − 1 or t ≥ N + S + 1, and d(t) = 1 otherwise, where S = (1/2) Σ(i = 1 to N) Ai. (6.61)

If a subset exists with sum equal to exactly S, then a path through the ladder network corresponding to this subset (i.e. following the routes corresponding to the elements in the subset), departing at time t = 0, will arrive at exactly the right moment at the last node

so that a travel time of d(t) = 1 is available to continue to the destination, for an arrival

time of t = N + S + 1. Any other subset sum will result in an arrival to the last node at a

time such that the remaining travel time to the destination is S + 2, yielding an arrival

time later than N + S + 1. Therefore, a solution to the dynamic shortest path problem

through this ladder network is equivalent to a solution to the subset-sum problem.
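The reduction can be checked by brute force on small instances. In the sketch below, the exact boundary form of the final arc's travel time is an assumption taken from the reconstruction of equation (6.61); only the property that d = 1 precisely when the arrival time at the last primary node equals N + S matters for the argument.

```python
from itertools import product

def ladder_arrival(A, choice):
    """Destination arrival time for one path through the ladder network:
    choice[i] = True takes the rung of travel time 1 + A[i], else travel time 1."""
    N, S = len(A), sum(A) // 2
    t = 0
    for i, c in enumerate(choice):
        t += 1 + (A[i] if c else 0)      # arrive at primary node i + 1
    d = 1 if t == N + S else S + 2       # final time-dependent arc, cf. (6.61)
    return t + d

def has_equal_partition(A):
    """An equal-sum partition exists iff the shortest path through the ladder
    network arrives at the destination at exactly time N + S + 1."""
    if sum(A) % 2:
        return False
    N, S = len(A), sum(A) // 2
    best = min(ladder_arrival(A, ch) for ch in product([False, True], repeat=N))
    return best == N + S + 1
```

Any subset with sum s ≠ S arrives at the last node at time N + s and incurs the travel time S + 2, for a destination arrival of at least N + S + 2 > N + S + 1, matching the argument in the proof.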

We now argue that the average-case running time for a broad class of problems, including many transportation problems, should be quite reasonable. If G is a sparse, bounded-degree

planar network, if P(dij) = O(1) for all (i, j) ∈ A, and if arc travel times are highly-

correlated with Euclidean distance, then the expected running time of the one-to-all and

all-to-one chronological scan algorithms is O(n^1.5 log n). This result is expected to hold

even for most minimum-cost problems and for problems in non-FIFO networks.

The number of LNI boundaries at a particular node i ∈ N is equal to the number of times

the optimal path from i to the destination d either transitions to a new path or undergoes a

piece-wise change in path travel time. This number is expected to be approximately equal to the product of the average number of arcs in any such optimal path and

the number of linear pieces in the travel time functions of each of these arcs. For a planar

network in which arc travel times are highly correlated with Euclidean distance, the

average number of arcs in an optimal path is expected to be on the order of n^0.5, giving O(n^0.5) expected LNIs at each node in the network. Using the reasoning from the proof of Proposition 6.5, we therefore arrive at an expected running time of O(n^1.5 log n).


Although in the non-FIFO case, there may be more overhead required to maintain the

CDPij arc index variables (in the FIFO case, these index variables decrease monotonically

over the course of the algorithm, but in the non-FIFO case these variables may be both

incremented and decremented substantially more over the course of the algorithm), for

most “reasonable” problems this extra overhead is expected to be negligible.

Since transportation networks, one of the main areas of application for dynamic network

algorithms, almost always satisfy the conditions of the previous claim, the all-to-one

chronological scan algorithm is expected to run in O(n^1.5 log n) time for these networks. Since the best-known static shortest path algorithms applied to sparse, bounded-degree networks run in O(n log n) time, we conclude that our chronological scan algorithms should for many problems be a factor of only n^0.5 slower asymptotically than a static

shortest path algorithm. Although there is typically a larger hidden constant in the

running time of the dynamic algorithm, these results are very encouraging for the

prospect of scaling these algorithms to large networks. The parallel techniques given in

Section 5.2 may be employed to further reduce computation time for minimum-time

problems in FIFO networks.

6.5 Hybrid Continuous-Discrete Methods

The performance of chronological scan algorithms, and in fact of any continuous-time

dynamic shortest path algorithm, is highly sensitive to the complexity of the dynamics

represented by the time-dependent network data functions. For very large networks with

time-dependent data functions containing many linear pieces, it is conceivable that the

solution functions may consist of very many linear pieces, leading to slower computation

times and greater memory demands. However, the factor one should blame for this

decline in performance is not that the solution algorithms are slow, but rather the fact that

these algorithms are required to compute the exact solution to a problem for which the

solution is simply very complex. Since algorithmic performance is primarily determined

by solution complexity, there are intrinsic bounds imposed on the performance of any

continuous-time dynamic shortest path algorithm which computes an exact solution.


Aside from parallel computation, the only remedy for this difficulty is to devise

algorithms which compute only approximate solutions. Discrete-time algorithms can be

viewed as one such avenue for computing approximate solutions – these algorithms

effectively ignore dynamic solution behavior which takes place at a finer time scale than

the granularity of the global discretization applied to time. In this subsection, we

consider a slight modification of chronological scan algorithms such that they assume the

behavior of discrete-time algorithms in the event that solution dynamics reach such a fine

time scale. We will call such approaches hybrid continuous-discrete algorithms.

The general idea behind these algorithms is to try to force all solution LNIs to be longer

than a certain minimum duration in time. This effect is not difficult to achieve – during the

chronological scan through time during which tcurrent either increases or decreases

between positive and negative infinity, we will impose a minimum step-size ∆ so that

tcurrent must either increase or decrease by at least ∆; that is, we require for each step

through time that |tnew – tcurrent| ≥ ∆. This requirement is imposed by adjusting the value

of tnew appropriately as it is computed. For example, consider the case of the all-to-one

chronological scan algorithm. If at any iteration of the algorithm we compute a value of

tnew where tnew > tcurrent – ∆, then we set the value of tnew to tcurrent – ∆ in order to enforce

the minimum step-size requirement. The set of nodes to update after a ∆-step will be

given by the set S = { i ∈ N | Γi ≥ tnew, or ∃ j ∈ Ai such that Γij ≥ tnew }. In principle, no

other changes should be necessary to the remainder of the algorithm. One must ensure

that this is the case, however, since for any particular implementation there are certain

assumptions which one may make in the exact case which are no longer valid in the

hybrid case. For example, in the exact algorithm, the linear piece indices CPij(d), CPij

(c),

CPi(cwc), and CPi

(ubw) are guaranteed to decrease by at most one during any particular

iteration of the algorithm. In the hybrid case, it may be necessary to decrement these

variables several times until they reach their desired values. Accordingly, several values

may enter the queue of feasible waiting departure times FWDTi per iteration in the hybrid

case, whereas at most one value may enter per iteration in the exact case.
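The ∆-step adjustment itself is a one-line clamp. The sketch below is illustrative; the direction argument is an assumption used to distinguish the backward all-to-one scan (time decreasing) from the forward one-to-all scan (time increasing).

```python
def enforce_min_step(t_current, t_new, delta, direction=-1):
    """Clamp a computed t_new so that |t_new - t_current| >= delta.
    direction = -1 for the backward all-to-one scan, +1 for the forward
    one-to-all scan."""
    if direction < 0:
        # backward scan: require t_new <= t_current - delta
        return min(t_new, t_current - delta)
    # forward scan: require t_new >= t_current + delta
    return max(t_new, t_current + delta)
```

A step already larger than ∆ is left unchanged, so the hybrid algorithm coincides with the exact algorithm whenever solution dynamics are coarser than ∆.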


Recall that for discrete-time problems, only integer-valued times within the finite window {0, 1, 2, …, T} are considered during computation of a solution. If this is deemed to be an

acceptable approximation, then the hybrid algorithm discussed above may be further

modified so that each iteration must compute a discrete value of tnew which is in the window {0, 1, 2, …, T}, where the algorithm terminates once tnew is determined to be less

than 0. In this case, a bucket approach can be used to determine the next value of tnew

rather than a heap, so that the factor of log n will be effectively removed from the running

time of the algorithm. The worst-case running time of the hybrid algorithm in a FIFO

network should therefore improve to O(mT + SSP), assuming that the number of linear

pieces present in the network data functions is absorbed by this asymptotic quantity.

However, this worst-case running time is identical to the worst-case and average-case

running time of an optimal discrete-time algorithm for the all-to-one problem. Therefore,

in a FIFO network, approximate chronological scan algorithms can be made to perform asymptotically as well as or better than optimal discrete-time solution algorithms.
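A bucket structure replacing the heap might look as follows. This is an illustrative sketch with an assumed interface; the cursor relies on the scan visiting bucketed times monotonically (here downward, as in the all-to-one case), so moving a bound past the cursor is not supported.

```python
class TimeBuckets:
    """Bucket queue over the discrete window {0, 1, ..., T}, replacing the heap
    when t_new is restricted to integer times; total scan cost over the whole
    algorithm is O(T) plus O(1) per update."""
    def __init__(self, T):
        self.T = T
        self.buckets = [set() for _ in range(T + 1)]
        self.where = {}            # key -> bucket currently holding it
        self.cursor = T            # scan proceeds monotonically downward

    def update(self, key, t):
        old = self.where.get(key)
        if old is not None:
            self.buckets[old].discard(key)
        self.where[key] = t
        self.buckets[t].add(key)

    def next_event(self):
        """Largest bucketed time at or below the cursor; -1 signals that the
        scan has passed time 0 and the algorithm terminates."""
        while self.cursor >= 0 and not self.buckets[self.cursor]:
            self.cursor -= 1
        return self.cursor
```

Because the cursor only moves downward, the log n factor of the heap disappears, which is what yields the O(mT + SSP) bound quoted above.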

It is possible to construct a continuous-discrete hybrid of the one-to-all chronological

scan algorithm as well, using the same type of modifications as described above.


Chapter 7

Label-Correcting Algorithms

In this Chapter, we describe a second class of continuous-time dynamic shortest path

algorithms, which we call label-correcting algorithms. These algorithms have the

advantage of being simple to describe, due to their similarity to well-known existing

static label-correcting algorithms. Although their worst-case theoretical running time is

worse than that of chronological scan algorithms, these algorithms perform quite well in

practice for almost all reasonable problem instances.

The label-correcting methods we present in this Chapter are all based on the underlying

methodology of algorithms proposed by Halpern [12] and Orda and Rom [16] for

respectively solving the minimum-time and minimum-cost one-to-all problem, in which

waiting is allowed during certain fixed regions of time. The algorithm of Halpern solves

only the minimum-time problem, so we focus below on the more general algorithm of

Orda and Rom. Both papers consider general continuous-time arc travel time and travel

cost functions, and propose algorithms which produce an optimal solution within a finite

amount of time. However, as we shall see in this chapter, implementation of these

algorithms can be a non-trivial task due to the fact that they are based on operations on

general continuous-time functions. Under the piece-wise linearity assumptions of this

thesis, we will be able both to implement these algorithms and to prove strong bounds on

their worst-case and average-case running times, including polynomial worst-case

running time bounds for minimum-time algorithms in FIFO networks.

The results of computational testing of label-correcting algorithms, and a comparison

between their performance and that of chronological scan algorithms, appear in Chapter

8. The all-to-one and one-to-all label-correcting algorithms are described in Sections 7.1

and 7.2 respectively, and an analysis of their theoretical performance is contained in

Section 7.3.


7.1 All-to-One Solution Algorithm

We proceed to develop a label-correcting solution algorithm for the minimum-cost all-to-

one continuous-time dynamic shortest path problem. Although the performance of this

algorithm is sensitive to whether or not the FIFO condition holds, the statement of the

algorithm itself is not simplified if the FIFO condition holds.

Dynamic label-correcting algorithms are essentially identical in operation to static label-correcting algorithms, except that node labels are in this case given as functions of time rather

than as simple scalars. Using the notation developed in this thesis, these node labels are

the solution functions Ci(w)(t) and Ci(nw)(t) for each node i ∈ N. For simplicity of

discussion, we first assume that waiting is forbidden, so that only the function Ci(nw)(t) is

necessary to specify optimal path costs. The optimality conditions for the general

minimum-cost all-to-one problem where waiting is forbidden are:

Ci(nw)(t) ≤ cij(t) + Cj(nw)(t + dij(t))   ∀(i, j) ∈ A, i ≠ d, (7.1)

Cd(nw)(t) = 0. (7.2)

Initially, we can set the node label functions to zero for the destination node, or infinity

otherwise. If the node label function Ci(nw)(t) for some arc (i, j) ∈ A violates one of the

optimality conditions given in (7.1), we can “correct” the label by assigning the label the

function min{ Ci(nw)(t), cij(t) + Cj(nw)(t + dij(t)) }. This correction process replaces the

parts of the solution function Ci(nw)(t) corresponding to values of time which violate

optimality. Each iteration of the algorithm involves locating an arc (i, j) ∈ A which

violates (7.1) and performing a correction of the node cost label of node i. If at a certain

point all optimality conditions are satisfied, then the algorithm terminates with a

complete solution.

If waiting is allowed, the same principle applies, only the process of correcting labels is

more complicated. In this case, we take the solution function Ci(w)(t) to be the label of

each node i ∈ N. At optimality the following conditions must be satisfied for every arc

(i, j) ∈ A, obtained from the all-to-one optimality conditions (4.1) and (4.2):


Ci(w)(t) ≤ min over τ ∈ [t, t + ubwi(t)] of { cwci(τ) + cij(τ) + Cj(w)(aij(τ)) } − cwci(t)   ∀(i, j) ∈ A, i ≠ d, (7.3)

Cd(w)(t) = 0. (7.4)

Again, we initialize the node label functions to zero for the destination node and infinity

for all other nodes. If the optimality conditions above are satisfied for all arcs (i, j) ∈ A,

then we have an optimal solution. Otherwise, we must identify an arc (i, j) for which (7.3)

does not hold, and correct the label Ci(w)(t) by assigning to it the function min{ Ci(w)(t), fij(t) − cwci(t) }, where fij(t) represents the function given by the result of the minimization

expression in (7.3). Computing fij (t) is not a trivial matter, but under our assumptions of

piece-wise linearity and continuity of all network data functions, this computation is

feasible. The minimum given by fij (t) is in this case achieved at either a value of τ which

is an endpoint of the window [t, t + ubwi(t)] , or at a value of τ which is a boundary

between two linear pieces of the piece-wise linear function cwci(t) + cij(t) + Cj(w)(aij(t))

within this window.

The all-to-one label-correcting algorithm is easy to state, and straightforward to

implement, provided that the following fundamental routines are available:

• Addition or subtraction of two functions,
• Computation of the minimum of two functions,
• Computation of the composition of two functions,
• Determination of whether a function is less than or equal to another function at all points in time,
• Computation of a minimum of the form given in (7.3), if waiting is allowed.

In the absence of piece-wise linearity assumptions, some of these operations, in particular

the composition operation and the minimization operation, may be prohibitively

complicated to implement. Moreover, even if implementation is possible, these

operations may require excessive amounts of computation time to perform if the

functions under consideration are not sufficiently simple.

The worst-case and average-case theoretical running time of the all-to-one label-

correcting algorithm are discussed in Section 7.3. We now assert the correctness and

termination of the algorithm.


Proposition 7.1: The all-to-one label-correcting algorithm correctly solves the minimum-

cost all-to-one dynamic shortest path problem and terminates within a finite amount of

time.

Proof: Proof is given for the case where the “scan-eligible” list of the label-correcting

algorithm is stored as a FIFO queue; it should be possible to extend this proof to cover

arbitrary variants for utilization of the scan-eligible list. Since optimality is required for

termination, we must simply prove that the algorithm terminates in a finite amount of

time. To do this, we employ the same reasoning as in a well-known proof of termination

for static label-correcting algorithms. Suppose that the algorithm proceeds in stages,

where in each stage a pass is made over all arcs (i, j) ∈ A, and any arc violating the

optimality conditions is used to correct the label of the node at the upstream end of the

arc. After the first stage, the algorithm will correctly identify all optimal paths which

consist of at most a single arc. After two stages, the algorithm will discover all optimal

paths consisting of two or fewer arcs, and in general, after N stages, all optimal paths of N or fewer arcs will have been discovered. We then claim, using the same reasoning

as the proof of Proposition 4.2, that all optimal paths consist of at most a finite number of

arcs, so that the all-to-one algorithm should terminate within a finite number of stages.

7.2 One-to-All Solution Algorithm

We attribute the one-to-all label-correcting algorithm to Orda and Rom [16]. This

algorithm proceeds in much the same fashion as the all-to-one label-correcting algorithm

discussed in the preceding subsection. The cost label of each node i ∈ N is taken to be

the function Ci(w)(t), and at optimality we must satisfy the following conditions for each

arc (i, j) ∈ A, which are derived from the one-to-all optimality conditions (4.5), (4.6), and

(4.7):

Cj(w)(aij(t)) ≤ min over τ ≤ t of { Ci(w)(τ) − cwci(τ) } + cwci(t) + cij(t)   ∀(i, j) ∈ A, j ≠ s, (7.5)

Cs(w)(t) = 0. (7.6)


In order to simplify (7.5), we let fi(t) be the result of the minimization expression over τ.

Given our assumptions of piece-wise linearity (and continuity, in the event that waiting is

allowed), the computation of fi(t) is difficult, but feasible. One possible method of

performing this computation involves building fi(t) using a forward scan through the

linear pieces of Ci(w)(t) – cwci(t), in a similar manner as that of the one-to-all

chronological scan algorithm.

To begin with, we will initialize the node label functions of all nodes to infinity, except

for that of the source node, which we set to zero. If at some point in time these node

label functions satisfy the optimality conditions (7.5) and (7.6), we terminate the

algorithm. Otherwise, we locate an arc (i, j) ∈ A for which (7.5) is not satisfied, and we

correct the node label function Cj(w)(t) by performing the assignment:

Cj(w)(aij(t)) ← min{ Cj(w)(aij(t)), fi(t) + cwci(t) + cij(t) }. (7.7)

If aij(t) is strictly increasing (i.e. if the arc travel time dij(t) satisfies the so-called strict

FIFO condition), then we have the following closed-form expression for the assignment:

Cj(w)(t) ← min{ Cj(w)(t), fi(aij−1(t)) + cwci(aij−1(t)) + cij(aij−1(t)) }. (7.8)

In this case aij-1(t) denotes the inverse of the function aij(t), which will be equivalent to

the FIFO inverse of aij(t) under the strict FIFO condition. If the strict FIFO condition is

not satisfied, however, then it is still possible to perform the assignment given in (7.7),

only in this case the function Cj(w)(t) must be constructed by scanning over the linear

pieces of the piece-wise linear function min{ Cj(w)(aij(t)), fi(t) + cwci(t) + cij(t) }, in much the

same fashion as the one-to-all chronological scan algorithm. Under piece-wise linearity

assumptions of network data, implementation of the one-to-all label-correcting algorithm

is straightforward, provided that the following operations are available:

• Addition or subtraction of two piece-wise linear functions,
• Computation of the minimum of two piece-wise linear functions,
• Computation of the composition of two piece-wise linear functions,
• Determination of whether a piece-wise linear function is less than or equal to another piece-wise linear function,
• Solution of the minimization of the form given in (7.5), if waiting is allowed,
• Determination of the result of the “composed assignment” operation given in (7.7).


Proposition 7.2: The one-to-all label-correcting algorithm correctly solves the minimum-

cost one-to-all dynamic shortest path problem and terminates within a finite amount of

time.

Proof of the proposition follows the same line of reasoning as the proof of Proposition

7.1, and is omitted.

7.3 Running Time Analysis

We proceed to analyze the theoretical worst-case and average-case running time of the

one-to-all and all-to-one label-correcting algorithms. This analysis will make extensive

use of the running time analysis performed for chronological scan algorithms. Due to

Proposition 6.5, the minimum-cost and non-FIFO minimum-time problems are NP-Hard.

We will therefore focus only on the average-case running time for label-correcting

algorithms applied to these problems. For minimum-time FIFO problems, however, we

are able to establish strong polynomial bounds.

Proposition 7.3: The worst-case asymptotic running time of the all-to-one label-

correcting algorithm, when used to compute all-to-one minimum-time paths in a FIFO

network, is O(nmP*).

Corollary: We have the following:

• If P(dij) is O(1) for all (i, j) ∈ A, then the worst-case running time is O(m²n).
• If G is a sparse, bounded-degree network and if P(dij) = O(1) for all (i, j) ∈ A, then the worst-case running time is O(n³).
• Since the all-to-one label-correcting algorithm can be applied to solve the minimum-time one-to-all problem for all departure times by Proposition 4.3, the above results hold for this problem as well.

Proof: The strongest known polynomial asymptotic running time for a label-correcting

algorithm is achieved when its list of “scan-eligible” nodes is maintained as a FIFO

queue. In this case, the running time of the algorithm will be equal to the number of arcs

times the maximum number of possible corrections of each node label times the amount


of time required to correct a single label. Since by Lemma 4.5 minimum-time paths in a

FIFO network will contain at most n – 1 arcs, each label will be corrected at most n – 1

times, and since by Lemma 4.10 there are O(P*) LNIs generated for each node, the time

required to correct each label will be O(P*), since any of the operations performed by the

algorithm in a label correction may be performed in time proportional to the number of

linear pieces in the node label functions. The total running time of the minimum-time all-

to-one algorithm for computing optimal paths in a FIFO network is therefore bounded in

the worst case by O(nmP*). As with most label-correcting algorithms, this bound is

expected to be very loose.

We proceed to argue the expected running time of label-correcting algorithms for simple

problems. If G is a sparse, bounded-degree planar (defined as in Section 6.4) network, if

P(dij ) = O(1) for all (i, j) ∈ A, and if arc travel times are highly-correlated with Euclidean

distance, then the expected running time of the one-to-all and all-to-one label-correcting

algorithms is O(n0.5LC(n, m)), where LC(n, m) denotes the expected running time of a

static label-correcting algorithm applied to a network with n nodes and m arcs. This

result is expected to hold even for most minimum-cost problems and for problems in non-

FIFO networks. The reasoning behind this claim is similar to the reasoning behind the

average-case running time analysis for the chronological scan algorithms performed in

Section 6.4. Since we expect on average O(n0.5) LNIs comprising the solution for a

particular node, it will take O(n0.5) time on average to correct each node label, and

therefore we have a total expected running time of O(n0.5LC(n, m)). This expected

running time is very close to the expected running time of chronological scan algorithms,

and indeed, the results of computational testing show that for most problem instances the

two algorithms are very similar in running time.

In a FIFO network, parallel adaptations of the minimum-time all-to-one algorithm may

be constructed using the techniques outlined in Section 5.2, and these adaptations may be

used to efficiently solve either the minimum-time all-to-one problem or the minimum-time one-to-all problem for all departure times. Unfortunately, it is not possible to apply the same

methods as in Section 6.5 to develop hybrid continuous-discrete approximation


algorithms from label-correcting algorithms, so the development of label-correcting

approximation algorithms is an area open to future research.


Chapter 8

Computational Results

This chapter contains the results of extensive computational testing of the continuous-

time dynamic shortest path algorithms presented in this thesis. In order to fully assess the

performance of these algorithms as a feasible solution technique, we compare their

performance with that of the best known discrete-time dynamic shortest path algorithm.

Appendix B contains all of the raw data collected from computational testing which is

interpreted in this chapter.

8.1 Network Generation

Since transportation networks constitute one of the primary areas of application for

dynamic shortest path algorithms, we have selected most of the networks used for

computational testing to be networks with a high degree of resemblance to transportation

networks. Two real networks are considered: the Amsterdam A10 outer beltway

network, a large ring-shaped network with 196 nodes and 310 arcs, and an urban network

model of the city of Montreal, consisting of 6906 nodes and 17157 arcs. Additionally,

random networks of two types have been generated. The first type of these random

networks, which we refer to as random planar networks, were generated by placing

random points in a two-dimensional plane, and by randomly adding arcs until a desired

density is achieved. As explained in Section 6.4, a planar network in this thesis is

allowed to have arcs which cross, as long as the network has a general two-dimensional

character in which arcs primarily connect spatially close nodes. In keeping with this

notion, the probability that an arc is added between two randomly-selected nodes in a

random planar network was taken to be proportional to 1/d, where d is the distance

between the two nodes, since arcs in planar networks typically connect pairs of nodes

which are in close proximity. The second type of random network, which we refer to as a

completely random network, contains arcs which connect between randomly chosen pairs

of nodes, where the “length” of each arc is chosen randomly from a uniform distribution


from 1.0 to 100.0. All random networks are built from an initial spanning tree of arcs in

order to ensure strong connectivity.

Time-dependent arc travel times were generated for each network in several ways, in

order to test the effects of varying types of dynamic situations. In each testing scenario, a

certain designated percentage of the arcs are assigned time-varying characteristics, since

many dynamic networks are expected to have a significant number of arcs whose

behavior does not change substantially over time. For testing, we consider the entire

spectrum of networks from completely static networks up through networks in which

100% of the arcs exhibit dynamic characteristics. All time-varying arc travel time

functions are generated such that they are dynamic for a two-hour window of time, and

static for all times before this window. These functions are generated by one of two

possible methods. The first method, which we say generates triangular arc travel time

functions, attempts to simulate the tendency of many networks (in particular,

transportation networks) to experience rising congestion which peaks at some sort of

“rush-hour”, and then recedes. To generate a triangular arc travel time function, a

random hour-long interval is selected within the two-hour dynamic window of time. The

arc travel time function is then assigned a triangle-shaped profile over this hour-long

interval, whereby it rises linearly for 30 minutes, and then falls linearly for the remaining

30 minutes. We make certain that the linear slopes involved are shallow enough such

that the FIFO condition is satisfied. Triangular arc travel time functions therefore always

have exactly four linear pieces. The second method we use to generate arc travel time

functions is to simply generate a fixed number of totally random linear pieces within the

two-hour window of dynamic behavior. We say this methods generates random arc

travel-time functions; these functions will always be continuous, and consist of a

designated number of linear pieces. Both FIFO and non-FIFO sets of random arc travel

time functions were generated and tested using this method.

8.2 Description of Implementations

We have constructed implementations of the all-to-one chronological scan and label-

correcting algorithms, both written in C++, and run on a 333 megahertz Pentium-based


system with 128Mb of memory. The performance of the one-to-all algorithms is

expected to be nearly identical to that of the corresponding all-to-one algorithms. We

consider only the computation of minimum-time paths, in either a FIFO or non-FIFO

network. Solution algorithms which compute minimum-cost paths are expected to

perform similarly. For simplicity, waiting at nodes was not considered for any testing

scenario. The presence of waiting constraints is expected to scale the algorithmic running

time by a small constant. All execution times (except for a few involving very large

problems) were obtained by averaging the execution times of 10 different all-to-one

computations, each with a new random destination. Correctness of algorithms was

ensured by verifying small test networks by hand, and by cross-checking the output of the

chronological scan and label-correcting algorithms for larger problems. For all problem

instances checked, the output of the two methods was identical.

The scan-eligible list of nodes for the all-to-one label-correcting algorithm was

implemented both as a FIFO queue and as a double-ended queue. Results are reported

for both implementations.

Furthermore, the best-known all-to-one discrete-time dynamic shortest path algorithm for

the case where waiting at nodes is not permitted, described in [5], was implemented in

order to make a comparison between continuous-time and discrete-time methods. For the

discrete-time algorithm, we consistently use a discretization of time into 1000 levels; for

a two hour total period of analysis, this corresponds to a discretization of time into

intervals of 7.2 seconds in duration. For the Montreal network, only 100 levels were used

due to memory constraints – the running time for 1000 levels should be 10 times the

running time measured with 100 levels, since the discrete-time solution algorithm has a

running time which is linear in the number of time intervals present. Dynamic arc travel

time data are always stored as piece-wise linear continuous-time functions. In

order to apply discrete-time solution algorithms, these functions are sampled at 1000

evenly-spaced points in time. We sometimes report the running time of this

discretization process separately from the running time of the discrete-time solution

algorithm; in general, this discretization process often requires as much or more


processing time compared to the discrete-time solution algorithm. Whether or not these

two running times should be considered jointly depends on the underlying means of

storing time-dependent network data – if network data is stored in the space-efficient

form of piece-wise linear functions, then extra processing time will be required for

sampling this data. Alternatively, network data can conceivably be stored in discrete

sampled form, but this in turn can require a great deal of memory.

8.3 Assessment of Algorithmic Performance

There are many parameters one may vary when experimentally evaluating the

performance of dynamic shortest path algorithms. In the analysis that follows, we will

illustrate the effects upon computation time and memory of varying each significant

parameter associated with the problem in turn, while holding the remaining parameters

constant. From these results, one may extrapolate with reasonable certainty the

performance of the algorithms under consideration for many common problem instances.

Figure 1 displays the relative performance of each algorithm tested as a function of

network size. The chronological scan (CS) algorithm, the label-correcting algorithm

based on a FIFO queue and on a double-ended queue (LC-FIFO and LC-DEQUEUE

respectively), and the best-known discrete-time all-to-one algorithm are compared. The

DISC data series gives the running time of only the discrete-time algorithm, whereas the

DISC-TOTAL series gives this running time plus the time required to sample the piece-

wise linear travel time functions provided as input.

Each of the tests shown in Figure 1 was performed in a random planar network, where

25% of the arc travel time functions in the network varied with time. For these arcs,

travel time functions consisting of 10 random linear pieces satisfying the FIFO property

were generated. The number of nodes and arcs is scaled uniformly in the figure, so that

average node degree remains constant, and only network size changes between sample

executions.

For each of the network sizes tested, the continuous-time algorithms displayed similar

performance, and outperformed the discrete-time algorithm by a significant factor in


speed, especially when the time required to sample data is considered. Additionally, the

average number of LNIs per node produced as output was observed to be no higher than

16, leading to more than an order of magnitude of savings in memory space compared

with the vectors of 1000 distance labels produced for each node by the discrete-time

algorithm. It is important to bear in mind, however, that these results for the continuous-

time algorithms are very sensitive to the remaining input conditions, such as the number

of linear pieces used as input, the percentage of the network which is dynamic, and on

factors such as the FIFO property and the planar nature of the network. One must

carefully consider all results presented in this section as a whole in order to be able to

predict the general performance of the continuous-time algorithms.

Figure 1: Execution Time vs. Network Size (Planar, FIFO, 25% Dynamic, 10 Pieces / Arc). [Chart: execution time in seconds plotted against network size, from 10x30 to 3000x9000 nodes x arcs, for the DISC-TOTAL, DISC, CS, LC-FIFO, and LC-DEQUE algorithms.]

Note that the horizontal axis in Figure 1 and in the remaining figures does not follow a

linear scale. The running time of the DISC-TOTAL algorithm for the large 3000x9000

network was 20.11 seconds; this point was omitted for better scaling of the entire graph.

Continuing with our analysis, Figure 2 shows the relationship between running time and

number of arcs, where the number of nodes is held constant at 100. As before, the


network is planar in nature, and 25% of the arcs have been assigned time-varying travel

time functions with 10 linear pieces which satisfy the FIFO property. According to this

result, the label-correcting algorithms tend to scale better with increasing network

density. The running time of the discrete-time solution algorithm scales linearly with

density, as is expected.

Figure 2: Execution Time vs. Number of Arcs (Planar, FIFO, 100 Nodes, 25% Dynamic, 10 Pieces / Arc). [Chart: execution time in seconds plotted against number of arcs, from 200 to 3200, for the DISC-TOTAL, CS, DISC, LC-DEQUE, and LC-FIFO algorithms.]

The average number of LNIs per node encountered while performing the sample runs for

Figure 2 was observed to fall between 8 and 16, for more than an order of magnitude of

savings in space compared with the discrete-time algorithm.

Figures 3 and 4 demonstrate the negative effects on running time which can occur as a

result of non-FIFO or non-planar networks. In non-FIFO networks, it is possible that

solution functions may consist of a very large number of LNIs due to cycles within

shortest paths. In non-planar networks, the length of a shortest path, in terms of arcs,

may possibly be very large, which will also result in a large number of LNIs within each

solution function. Each network tested in Figures 3 and 4 was randomly generated with

500 nodes, 25% dynamic behavior, and 10 random linear pieces in each dynamic arc

travel time function. Results indicate that the performance of label-correcting algorithms

is degraded far more than that of chronological scan algorithms by non-FIFO and non-

planar network conditions. This behavior is expected, as the cost of correcting labels is


proportional to the number of LNIs present in the solution functions, and if there are

cycles in optimal paths or if optimal paths contain many arcs, then these labels are likely

to be corrected many times, leading to a decrease in performance for label-correcting

algorithms. The running time of the discrete-time algorithm, shown in Figure 3, does not depend on properties of network data, as is expected. Finally, the anomalous shape

of the result curves in Figure 4 is due to the fact that some of the random networks tested

happened to exhibit high numbers of LNIs – there was an average of 72 LNIs per node

over all networks generated with 1000 arcs. This average number was the maximum

observed for any of the sample cases in Figures 3 and 4, and still represents an order of

magnitude in savings in terms of memory as compared to the discrete-time algorithm.

Figure 3: Effect of Non-FIFO and Non-Planar Conditions on Execution Time: Chronological Scan Algorithms (500 Nodes, 25% Dynamic, 10 Pieces / Arc). [Chart: execution time in seconds plotted against number of arcs, from 1000 to 8000, for CS (Non-FIFO), CS (Non-Planar), DISC, and CS (FIFO, Planar).]

In Figures 5 and 6, we examine the impact of complexity of the time-dependent network

data functions on the running time of solution algorithms. In Figure 5, random planar

networks with 500 nodes and 1500 arcs were generated, where each time-dependent arc

travel time function has 10 linear pieces and satisfies the FIFO condition. Running time

is then shown as a function of the percentage of arcs in the network randomly selected to

exhibit dynamic behavior (the remaining arcs have static travel time functions).


Figure 4: Effect of Non-FIFO and Non-Planar Conditions on Execution Time: Label-Correcting Algorithms (500 Nodes, 25% Dynamic, 10 Pieces / Arc). [Chart: execution time in seconds plotted against number of arcs, from 1000 to 8000, for LC-DEQUEUE (Non-FIFO), LC-FIFO (Non-FIFO), LC-DEQUEUE (Non-Planar), LC-FIFO (Non-Planar), LC-FIFO (FIFO, Planar), and LC-DEQUEUE (FIFO, Planar).]

Figure 5: Execution Time vs. Dynamic Fraction of Network (Planar, FIFO, 500 Nodes, 1500 Arcs, 10 Pieces / Arc). [Chart: execution time in seconds plotted against the percentage of the network with dynamic characteristics, from 0 to 100, for LC-FIFO, LC-DEQUE, CS, and DISC-TOTAL.]


The running time of the discrete-time algorithm is shown above not to depend on

complexity of network data functions, as is expected. The performance of the

chronological scan algorithm appears to scale far better than that of both variants of the

label-correcting algorithm, which is expected based on the previous discussion of the

impact of complex solution functions upon the label-correcting algorithms. The average

number of LNIs comprising the solution functions was, as would be expected, greatest

for the case where 100% of the network was dynamic – in this case, there were 70 LNIs

per node on average. There is still an order of magnitude in memory savings, even in this

case, compared with the discrete-time algorithm.

In Figure 6, we generate random planar networks with 500 nodes and 1500 arcs, each

with 25% dynamic, FIFO behavior. The parameter we vary in this case is the number of

linear pieces comprising those arc travel time functions which are chosen to exhibit

dynamic behavior. Again, the chronological scan algorithm scales in running time much

better than either of the label-correcting algorithms. An average of 130 LNIs per solution

function (the maximum observed for this test) was observed for the sample case where

there were 50 pieces in each dynamic arc travel time function. Additionally, the discrete-

time algorithm is shown not to be sensitive to the complexity of input data, as expected.

Figure 6: Execution Time vs. Linear Pieces in Network Data Functions. [Chart: execution time in seconds plotted against the number of linear pieces in each dynamic arc travel time function, from 4 to 100, for LC-FIFO, LC-DEQUE, CS, and DISC-TOTAL.]


In addition to networks with random topology, computational testing was performed on

two models of real networks. The Amsterdam A10 outer beltway is a ring network with

196 nodes and 310 arcs. For this network, we assigned random FIFO dynamic travel

time functions with a 4-piece triangular profile to 25% of the arcs. For this case, each

continuous-time algorithm ran in 0.05 seconds, and the discrete-time algorithm (with

sampling included) required 0.5 seconds. On average, there were 12 LNIs per solution

node. Additionally, we ran the same test with 25% of the arcs in the network assigned

random 10-piece FIFO travel time functions. In this instance, the chronological scan

algorithm required 0.11 seconds, the label-correcting algorithms required 0.22 seconds,

the discrete-time algorithm (with sampling) required 0.44 seconds, and there were on

average 26 LNIs per solution function. Finally, a very large network model of the city of

Montreal, consisting of 6906 nodes and 17157 arcs, was tested, where 5% of arcs in the

network were assigned 4-piece FIFO dynamic travel time functions. For this case, the

discrete-time algorithm required 53.27 seconds, the label-correcting algorithm based on a

FIFO queue required 24.01 seconds, the dequeue-based label-correcting algorithm

required 10.88 seconds, the chronological scan algorithm required 3.35 seconds, and

there were 6 LNIs per solution function on average.

In summary, continuous-time methods appear to offer substantial improvements both in

terms of running time and memory requirements over existing discrete-time algorithms,

provided that dynamic network conditions are sufficiently well-behaved. For FIFO

planar networks of low degree, where a relatively small percentage of the network is

actually dynamic, and where the dynamic part of the network exhibits relatively simple

behavior, continuous-time approaches appear to be a good deal more efficient in both

time and space than their discrete-time counterparts. This analysis supports the

conclusion that continuous-time methods may be preferable to incorporate in

network optimization systems designed around many different types of dynamic network

scenarios, particularly those related to transportation networks.


Chapter 9

Conclusions

Below, we give a summary of the results and contributions developed in this thesis,

followed by suggestions for future research.

9.1 Summary of Results

The developments in this thesis originated from a need for more efficient models and

algorithms for computing shortest paths in dynamic networks than existing discrete-time

methods. In order to escape the inherent restrictions of discrete-time models, we

considered instead the use of continuous-time models, and argued that computation

within this domain was practical only with a simplifying piece-wise linearity assumption

on all time-dependent network data functions. Adopting this assumption, we discussed

the classification of all common problem variants of the dynamic shortest path problem

and formulated these problems mathematically in continuous time.
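To make the piece-wise linearity assumption concrete, the following sketch (an illustration of ours, not code from the thesis) evaluates a piece-wise linear arc travel time function d(t) and numerically checks the FIFO condition, i.e. that the arrival time t + d(t) is non-decreasing in the departure time t. The function names and the sample "triangular" profile are illustrative assumptions.

```python
import bisect

def make_pwl(breakpoints, values):
    """Piece-wise linear function through (breakpoints[k], values[k]),
    with constant extrapolation outside the given range."""
    def d(t):
        if t <= breakpoints[0]:
            return values[0]
        if t >= breakpoints[-1]:
            return values[-1]
        i = bisect.bisect_right(breakpoints, t) - 1
        t0, t1 = breakpoints[i], breakpoints[i + 1]
        v0, v1 = values[i], values[i + 1]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return d

def is_fifo(d, lo, hi, steps=1000):
    """Numerically check the FIFO condition on [lo, hi]: the arrival
    time t + d(t) must be non-decreasing in the departure time t."""
    ts = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    arrivals = [t + d(t) for t in ts]
    return all(b >= a - 1e-12 for a, b in zip(arrivals, arrivals[1:]))

# A travel time profile roughly like the "triangular" ones used in the
# computational tests: flat at 2, rising to a peak of 6, falling back.
d = make_pwl([0.0, 10.0, 20.0, 30.0], [2.0, 2.0, 6.0, 2.0])
print(is_fifo(d, 0.0, 30.0))      # True: the falling slope is -0.4 > -1

# A drop steeper than -1 violates FIFO (later departures arrive earlier).
steep = make_pwl([0.0, 10.0], [20.0, 0.0])
print(is_fifo(steep, 0.0, 10.0))  # False
```

A piece-wise linear travel time function is FIFO exactly when every slope of d(t) is at least -1, which the numerical check above confirms on a sample grid.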

Mathematical properties of dynamic networks and dynamic shortest path problems,

especially those involving FIFO networks, were discussed. Initially, we proved the

existence and finiteness of a solution for all dynamic shortest path problem variants. In

addition to several other important properties of FIFO networks, we showed a strongly

polynomial bound of O(P*) linear pieces in the solution function of each node, and it

was shown that the minimum-time one-to-all problem for all departure times is

computationally equivalent to the minimum-time all-to-one problem in a FIFO network.

The vast majority of the discussion in this thesis is centered around the development of

two broad classes of solution algorithms, either of which can be applied to solve all

variants of the continuous-time dynamic shortest path problem. One of these classes, the

set of label-correcting algorithms presented in Chapter 7, represents a generalization of

previous theoretical algorithmic results by Orda and Rom [16]. These algorithms have

the advantage that they are simple to state and efficient in practice. The second class of


solution algorithms, the chronological scan algorithms presented in Chapter 6, are

slightly more difficult to state, but perform equally well in practice and have stronger

theoretical running times. These algorithms, while designed for dynamic networks, are

well-suited to handling other types of extended static problems. For instance,

static shortest path problems with time windows and time-constrained minimum-cost

static shortest path problems are likely to be solved very efficiently by the methods

outlined in this thesis. In fact, the dynamic algorithms developed in this thesis may well

represent one of the most effective solution methods for these types of extended static

problems.

For the minimum-time all-to-one problem and the minimum-time one-to-all problem for

all departure times in a FIFO network, we derived strongly polynomial worst-case running

time bounds of O(mP* log n) and O(nmP*) respectively for the chronological scan and

label-correcting algorithms. Although we show that the more general minimum-cost and

non-FIFO minimum-time dynamic shortest path problems are NP-Hard, we argue that the

expected running time required to solve most common problems should be quite

reasonable. For instance, for problems in sparse, bounded-degree networks with a planar

character that involve “reasonable” dynamics consisting of at most a constant number

of linear pieces, the expected asymptotic running time for the chronological scan

algorithm and for the label-correcting algorithm should be on the order of only n^0.5 times

greater than that of an efficient static shortest path algorithm applied to the same network.

Furthermore, for any FIFO problem in a bounded-degree network, we show that the

running time of the chronological-scan algorithm is only a factor of log n greater than the

combined size of the network data functions specified as input and the solution functions

produced as output by the algorithm, and is therefore only a factor of log n away from

the best running time for which one could hope. For all minimum-time

problems in a FIFO network, we additionally proposed methods to partition a problem

into disjoint sub-problems for parallel computation, and proposed approximation

techniques for further speeding up the chronological scan algorithms.


The computational study of this thesis showed that the performance of continuous-time

methods is comparable to that of discrete-time methods, especially for cases in which a

relatively fine discretization is applied to time. For certain problems in which the

dynamic nature of the network is not excessively complicated, we have observed as much

as an order of magnitude of savings in running time, and often more than an order of

magnitude of savings in terms of memory requirements by using continuous-time

methods.

9.2 Future Research Directions

There are numerous research directions one may wish to follow as an extension of the

work of this thesis. Although all common variants of the dynamic shortest path problem

are addressed and solved within this thesis, there are several possibilities for future work

relating to this problem. Since the running times of the chronological scan algorithms

developed in this thesis are a factor of log n away from optimality for solving problems in

bounded-degree FIFO networks, and arguably far from optimality for non-FIFO

networks, there exists the possibility that stronger solution algorithms may exist. The

brief treatment devoted to hybrid continuous-discrete chronological scan algorithms can

certainly be expanded into a more rigorous study of approximation

algorithms for continuous-time dynamic shortest path problems, and it may also prove

possible to develop approximation algorithms derived from label-correcting methods. Finally,

the one-to-all problem for all departure times has only been addressed within the context

of FIFO networks; no solution techniques currently exist for this problem variant for non-

FIFO networks or minimum-cost problems.

Another exciting prospect for future research is the possibility of applying the algorithmic

methodology developed in this thesis to other continuous-time dynamic network

optimization problems. The general approach employed by the chronological scan

algorithms of this thesis may be well-suited to solving many other problems in dynamic

networks with piece-wise linear characteristics, such as the maximum-flow problem, the

minimum-cost flow problem, the multi-commodity flow problem, and the traveling

salesman problem.


Finally, continuous-time dynamic shortest path algorithms may be used as a component

within larger network optimization systems. For example, in the rapidly developing field

of Intelligent Transportation Systems (ITS), there is a growing need for algorithms which

can efficiently compute route guidance information for drivers on a dynamic

transportation network. Dynamic shortest path problems are a component within larger

network optimization problems such as dynamic routing and dynamic traffic assignment

problems, and with the advent of continuous-time dynamic shortest path algorithms,

advances may be possible on some of these larger problems. In particular, since there is

no discretization of data, the exact mapping from arc travel times and costs to optimal path

travel times and costs given by continuous-time models preserves the analytical

properties of mathematical solution algorithms, allowing for more robust mathematical

analysis of solution algorithms to these large-scale problems.


Appendices

Appendix A. The Min-Queue Data Structure

The minimum-cost all-to-one and one-to-all chronological scan algorithms presented in

this thesis can achieve greater efficiency if the set of feasible waiting departure times at

each waiting node is represented using an augmented queue data structure, which we call

a min-queue. A min-queue is a queue which supports the following operations, each

requiring O(1) amortized time.

• Insertion of a value at the head of the queue,

• Removal of the value at the tail of the queue,

• Computation of the minimum value and the index of its position in the queue,

• Addition of a constant to every value in the queue.

For simplicity of discussion, we’ll say that the queue contains nodes of two colors: red

and green. Each red node maintains a pointer to the smallest red node ahead of it in the

queue. Green nodes have no such pointer. As elements are inserted into the queue, they

start out green, and eventually may switch to being red. We also keep track of two global

pointers, one to the smallest red element, and the other to the smallest green element. We

can determine the minimum element in the queue using these pointers in O(1) time.

Insertion of an element works the same as with a regular queue, and also takes only O(1) time

(it may be necessary to update the “minimum green element” pointer on an insertion).

Removal of an element is a bit more difficult. Removal of a non-minimal element is

straightforward. If we remove the minimum element in the queue and it is red, then we

simply update the “minimum red element” pointer using the “smallest red element ahead”

pointer of the element we removed. However, if we remove the minimum element and

find that it is green, then we invoke a re-coloring step: scan all the elements in the list

(they all have to be green) and color them red by initializing their pointers. Removal of

an element takes O(1) time unless a re-coloring step is required, in which case it takes

O(n) time. However, re-coloring is free in an amortized sense, since we can “charge”

each element an additional O(1) upon its insertion in order to pay for its eventual re-

coloring sometime in the future.
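As a concrete (and entirely illustrative) companion to the description above, the following Python sketch implements the red/green scheme. The class name, the node representation, and the use of a lazy global offset to realize the add-a-constant operation are our own assumptions rather than details taken from the thesis.

```python
class MinQueue:
    """A queue with O(1) amortized insert, remove, minimum, and
    add-constant, following the red/green pointer scheme described
    above.  Nodes are keyed by insertion sequence number; "ahead"
    means closer to the head (inserted later).  Each node is
    [stored_value, is_red, ptr], where ptr is the sequence number of
    the smallest red node ahead of it (or None)."""

    def __init__(self):
        self.nodes = {}        # seq -> [stored_value, is_red, ptr]
        self.head = 0          # next sequence number to assign
        self.tail = 0          # sequence number at the removal end
        self.offset = 0.0      # lazily applied to every stored value
        self.min_red = None    # seq of the smallest red node
        self.min_green = None  # seq of the smallest green node

    def insert(self, value):
        """Insert at the head; new nodes start out green."""
        v = value - self.offset
        self.nodes[self.head] = [v, False, None]
        if self.min_green is None or v < self.nodes[self.min_green][0]:
            self.min_green = self.head
        self.head += 1

    def add_constant(self, c):
        """Add c to every value in the queue, in O(1) time."""
        self.offset += c

    def minimum(self):
        """Return the minimum value currently in the queue."""
        cands = [self.nodes[s][0] for s in (self.min_red, self.min_green)
                 if s is not None]
        return min(cands) + self.offset if cands else None

    def remove(self):
        """Remove and return the value at the tail."""
        seq = self.tail
        node = self.nodes.pop(seq)
        self.tail += 1
        if seq == self.min_red:
            # The smallest red node ahead of the removed one takes over.
            self.min_red = node[2]
        elif seq == self.min_green:
            # The removed minimum was green, so every remaining node is
            # green: re-color them all red, setting each pointer by a
            # head-to-tail scan.  This O(n) step is paid for by an O(1)
            # charge made at each insertion.
            self.min_green = None
            best = None
            for s in range(self.head - 1, self.tail - 1, -1):
                self.nodes[s][1] = True
                self.nodes[s][2] = best
                if best is None or self.nodes[s][0] < self.nodes[best][0]:
                    best = s
            self.min_red = best
        return node[0] + self.offset

# Example: values 5, 3, 7 enter the queue; then every value grows by 10.
mq = MinQueue()
for v in [5, 3, 7]:
    mq.insert(v)
mq.add_constant(10)
assert mq.minimum() == 13      # 3 + 10
assert mq.remove() == 15       # 5 + 10, removed from the tail
assert mq.minimum() == 13
```

Each re-coloring scan is O(n), but since a node is re-colored at most once between its insertion and removal, charging an extra O(1) per insertion covers the cost, giving O(1) amortized time per operation.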


Appendix B. Raw Computational Results

The following table contains all data collected during computational testing. Each line

contains the result of running all implemented algorithms in turn on a single test network.

The types of networks and exact implementations of the algorithms used for these tests

are described in Chapter 8.

The Network Type column gives the particular network used for testing: A for the

Amsterdam outer beltway, M for the network model of Montreal, RP for a random planar

network, and R for a completely random network. If the Triangular dij(t) Functions?

column is checked, then random arc travel time functions were generated with a

triangular profile, described in Chapter 8; otherwise, random piece-wise linear arc travel

time functions were generated. The % Dynamic column indicates the percentage of the

arcs in the network with time-varying travel time functions; all other arcs are given static

travel times. For those arcs with dynamic travel time functions, the # Linear Pieces

column gives the number of linear pieces they contain. The two discrete-time running

time columns on the far right of the table give running times for discretizing and solving

a discrete-time dynamic shortest path problem using 1000 discrete levels of time; Chapter

8 contains further comments regarding these algorithms. All running times are given in

seconds.

(Column abbreviations: Tri.? = “Triangular” dij(t) Functions?; %Dyn = % Dynamic; Pieces = # Linear Pieces; LNIs = Average # of Solution LNIs per Node; LC-FIFO = (FIFO Queue) Label-Correcting Running Time; LC-DEQUE = (Dequeue) Label-Correcting Running Time; CS = Chronological Scan Running Time; Discr. = Discretization Running Time; Disc-SP = Discrete-Time Dynamic SP Running Time. A bullet marks a checked column; all running times are in seconds.)

Type  Nodes   Arcs  FIFO?  Tri.?  %Dyn  Pieces  LNIs  LC-FIFO  LC-DEQUE      CS   Discr.  Disc-SP
RP       10     30    •            25      10     3     0.000     0.005   0.005    0.000    0.016
RP       30     90    •            25      10     5     0.011     0.011   0.011    0.110    0.055
RP      100    300    •            25      10     9     0.082     0.077   0.071    0.275    0.187
RP      300    900    •            25      10    13     0.401     0.357   0.379    0.769    0.599
RP     1000   3000    •            25      10    16     1.720     1.593   1.676    3.626    2.253
RP     3000   9000    •            25      10     9     5.220     4.450   7.200   12.58     7.530

RP      100    200    •            25      10    16     0.055     0.060   0.066    0.165    0.126
RP      100    400    •            25      10    13     0.159     0.148   0.181    0.330    0.247
RP      100    800    •            25      10     9     0.203     0.214   0.418    0.659    0.473
RP      100   1600    •            25      10     9     0.379     0.451   1.368    1.538    0.907
RP      100   3200    •            25      10     8     0.742     0.835   5.044    3.901    1.720

RP      500   1000    •            25      10    21     0.385     0.407   0.522    0.897    0.709
RP      500   1000                 25      10    72    34.335    33.918   1.560    0.879    0.709
R       500   1000    •            25      10    18     0.324     0.330   0.429    0.879    0.637
RP      500   2000    •            25      10    14     0.962     0.912   1.137    2.033    1.379
RP      500   2000                 25      10    25     2.659     2.989   1.987    2.033    1.374
R       500   2000    •            25      10    21     1.830     1.868   1.588    2.088    1.231
RP      500   4000    •            25      10    14     1.692     1.929   3.610    5.275    2.676
RP      500   4000                 25      10    29     6.346    10.137   7.269    5.220    2.665
R       500   4000    •            25      10    16     3.020     3.630   4.120    5.220    2.360
RP      500   8000    •            25      10     9     2.750     3.190   8.570   11.10     5.270
RP      500   8000                 25      10    38    26.480    77.750  34.070   11.15     5.330
R       500   8000    •            25      10    20     9.180    19.340  18.240   11.21     4.440

A       196    310    •      •     25       4    12     0.050     0.050   0.050    0.275    0.220
A       196    310    •            25      10    26     0.220     0.220   0.110    0.275    0.160
M      6906  17157    •      •      5       4     6    24.010    10.880   3.350   28.57    13.700

RP      500   1500    •             0      10     1     0.050     0.050   0.050    1.319    1.100
RP      500   1500    •            10      10     9     0.490     0.440   0.380    1.374    1.100
RP      500   1500    •            25      10    21     1.590     1.320   0.880    1.429    1.100
RP      500   1500    •            50      10    43     5.270     6.810   1.920    1.538    1.100
RP      500   1500    •            75      10    53     7.800    10.220   2.420    1.538    1.150
RP      500   1500    •           100      10    70    15.050    11.540   3.190    1.648    1.100

RP      500   1500    •      •     25       4     7     0.270     0.220   0.270    1.429    1.100
RP      500   1500    •            25      10    21     1.590     1.320   0.930    1.429    1.150
RP      500   1500    •            25      25    62     7.470     6.430   2.690    1.429    1.100
RP      500   1500    •            25      50   130    25.000    24.450   5.660    1.429    1.150
RP      500   1500    •            25     100   112    29.180    24.840   5.440    1.484    1.100


References

[1] B. H. Ahn, J. Y. Shin (1991). “Vehicle routing with time windows and time-varying congestion”. J. Opl. Res. Soc. 42, 393-400.

[2] R. Ahuja, T. Magnanti, J. Orlin (1993). Network Flows: Theory, Algorithms, and Applications. Prentice Hall, Englewood Cliffs, NJ.

[3] I. Chabini (1997). “A new algorithm for shortest paths in discrete dynamic networks”. Proceedings of the IFAC Symposium on Transportation Systems. Chania, Greece.

[4] I. Chabini (1998). “Discrete Dynamic Shortest Path Problems in Transportation Applications: Complexity and Algorithms with Optimal Run Time”. Transportation Research Record 1645.

[5] I. Chabini and B. Dean (1999). “The Discrete-Time Dynamic Shortest Path Problem: Complexity, Algorithms, and Implementations”. Submitted for publication.

[6] I. Chabini, M. Florian, N. Tremblay (1998). “Parallel Implementations of Time-Dependent Shortest Path Algorithms”. Spring INFORMS National Meeting, Montreal, Quebec, April 1998.

[7] L. Cooke and E. Halsey (1966). “The shortest route through a network with time-dependent internodal transit times”. Journal of Mathematical Analysis and Applications 14, 492-498.

[8] C. Daganzo (1998). “Symmetry Properties of the Time-Dependent Shortest Path Problem With FIFO”. Private communication with Ismail Chabini.

[9] B. Dean (1997). “Optimal Algorithms for Computing Shortest Paths in Dynamic Networks with Bounded Waiting at Vertices”. Internal report.

[10] S. Dreyfus (1969). “An appraisal of some shortest-path algorithms”. Operations Research 17, 395-412.

[11] S. Ganugapati (1998). “Dynamic Shortest Path Algorithms: Parallel Implementations and Application to the Solution of Dynamic Traffic Assignment Models”. Master’s thesis, MIT Center for Transportation Studies.

[12] J. Halpern (1977). “Shortest Route with Time-Dependent Length of Edges and Limited Delay Possibilities in Nodes”. Zeitschrift für Operations Research 21, 117-124.

[13] I. Ioachim, S. Gelinas, F. Soumis, J. Desrosiers (1998). “A Dynamic Programming Algorithm for the Shortest Path Problem with Time Windows and Linear Node Costs”. Networks 31, 193-204.

[14] D. E. Kaufman, R. L. Smith (1993). “Fastest paths in time-dependent networks for intelligent-vehicle-highway systems application”. IVHS Journal 1, 1-11.

[15] A. Orda and R. Rom (1990). “Shortest-path and minimum-delay algorithms in networks with time-dependent edge length”. Journal of the ACM 37 (3), 607-625.

[16] A. Orda and R. Rom (1991). “Minimum weight paths in time-dependent networks”. Networks 21 (3), 295-320.

[17] S. Pallottino and M. Scutella (1998). “Shortest Path Algorithms in Transportation Models: Classical and Innovative Aspects”. In (P. Marcotte and S. Nguyen, Eds.) Equilibrium and Advanced Transportation Modelling. Kluwer, 245-281.

[18] H. Sherali, K. Ozbay, S. Subramanian (1998). “The Time-Dependent Shortest Pair of Disjoint Paths Problem: Complexity, Models, and Algorithms”. Networks 31, 259-272.