

Lattices and maximum flow algorithms in planar graphs

Diplomarbeit
submitted by
Jannik Matuschke, TU Berlin
15 September 2009

Combinatorial Optimization & Graph Algorithms
Institut für Mathematik
Technische Universität Berlin

Advisor: Dr. Britta Peis
Reviewers: Prof. Dr. Martin Skutella, Prof. Dr. Rolf Möhring


I affirm in lieu of oath that this thesis was prepared independently and in my own hand.

Berlin, . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
        (Date)                              (Signature)


I thank everyone who supported me in writing this thesis, be it as a motivating advisor, as eager proofreaders, or as interested coffee-room research partners, as well as the two reviewers, who now have to fight their way through many pages that are hopefully pleasant to read. I especially thank my parents, who have now been there for me for a full 25 years. I thank all colleagues at COGA for the very enjoyable time so far and look forward to what is to come.


Abstract

This thesis provides a comprehensive analysis of the structure of the left/right relation on the set of simple s-t-paths in a plane graph. As a major result, we prove that this relation induces a lattice on the path set. This lattice fulfills a special notion of “submodularity” and is furthermore even “consecutive” if the embedding of the graph is s-t-planar, implying that Ford and Fulkerson's uppermost path algorithm for maximum flow in s-t-planar graphs [10] is in fact a special case of Frank's general two-phase greedy algorithm for packing problems on lattice polyhedra [13].

We also show that it is not possible to achieve consecutivity for a submodular lattice in general planar graphs. Further results include a fast implementation of the two-phase greedy algorithm using a technique of succinct encoding of chains in consecutive structures, and a short discussion of a weighted version of the maximum flow problem, which can be solved by the uppermost path algorithm if the weight function meets certain supermodularity and monotonicity conditions.

Zusammenfassung

This thesis offers a comprehensive analysis of the left/right relation on the set of simple s-t-paths of a planarly embedded graph. As a central result, it is shown that this relation defines a so-called “submodular” lattice, which is even “consecutive” in the case of an s-t-planar embedding of the graph. A direct consequence of this result is that Ford and Fulkerson's uppermost path algorithm for the maximum flow problem on s-t-planar graphs [10] is a special case of Frank's two-phase greedy algorithm for packing problems on lattice polyhedra [13].

It is also shown that it is not possible to define a lattice on general planar graphs that is both consecutive and submodular. Further results include a fast implementation of the two-phase greedy algorithm using a compact encoding technique for chains in consecutive orders, and a short discussion of a weighted maximum flow problem. This problem is solved by the uppermost path algorithm on planar graphs provided the weight function is supermodular and monotone increasing.


Contents

1 Introduction
  1.1 Outline
  1.2 Topics and contributions

2 Preliminaries and basic definitions
  2.1 Basic concepts
  2.2 Graphs
    2.2.1 The model
    2.2.2 Walks, connectedness, trees and cuts
    2.2.3 The arc space and its subspaces
  2.3 Planar graphs
    2.3.1 Embedded graphs
    2.3.2 Planar graphs, cycle/cut duality and face potentials
    2.3.3 Subgraphs of planar graphs, deletion and contraction
    2.3.4 Algorithmic aspects of planar graphs
  2.4 Flows
    2.4.1 Flow decomposition and the maximum flow problem
    2.4.2 Max-flow/min-cut and the algorithm of Ford and Fulkerson

3 Lattices and the two-phase greedy algorithm
  3.1 Introduction to lattices
    3.1.1 Lattices
    3.1.2 Covering and packing problems
    3.1.3 Total dual integrality of lattice polyhedra
  3.2 The two-phase greedy algorithm
    3.2.1 A simple implementation
    3.2.2 Correctness of the algorithm
    3.2.3 An implementation with improved running time

4 The path lattice and maximum flow in planar graphs
  4.1 The left/right relation and the path lattice of a plane graph
    4.1.1 The left/right relation
    4.1.2 The uppermost path and the path lattice of an s-t-plane graph
    4.1.3 The path lattice of a general plane graph
    4.1.4 Negative results on non-s-t-planar graphs
  4.2 Algorithms for maximum flow in planar graphs
    4.2.1 The uppermost path algorithm for s-t-planar graphs
    4.2.2 The duality of shortest path and minimum cut in s-t-plane graphs
    4.2.3 The leftmost path algorithm of Borradaile and Klein
  4.3 Weighted flows

5 Conclusion and outlook

Notation index
Subject index
Bibliography


1 Introduction

The topic of this thesis is divided into three parts, one of which is “planar graphs”, another is “network flows”, and the third is “lattices”. We will show how these three topics are connected by establishing a lattice on the set of s-t-paths in a planar graph, implying that Ford and Fulkerson's uppermost path algorithm for maximum flow in s-t-planar graphs is a special case of a general two-phase greedy algorithm on lattice polyhedra.

1.1 Outline

The thesis contains five chapters, which are briefly outlined here.

The first chapter is the introduction you are just reading. It provides an overview of the main topics, including several references to the literature on each particular topic and also some historical notes. It ends with an overview of the new results contributed by this thesis.

The objective of the second chapter is to familiarize the reader with all basic concepts necessary to understand the rest of the thesis. After a brief “crash course” on basic preliminaries like permutations, running times of algorithms, the heap data structure and linear and integer programming (Section 2.1), a non-standard graph model is presented (Section 2.2) and used to give a comprehensive introduction to planar graphs (Section 2.3).¹ Furthermore, a short introduction to network flows is given (Section 2.4).

The third chapter provides basic knowledge on lattices and on covering and packing problems, a general framework for many combinatorial optimization problems (Section 3.1). This section also includes Hoffman and Schwartz' total dual integrality result on lattice polyhedra [19]. The second half of the chapter provides a detailed analysis of Frank's two-phase greedy algorithm for solving covering and packing problems on certain lattices [13] and, in particular, a new implementation of this algorithm with improved running time (Section 3.2).

The concepts of planar graphs, flows and lattices are then connected in the fourth chapter by the discussion of a partial order, called the left/right relation, on the set of simple s-t-paths in a plane graph. We provide the definition of this left/right relation and show how it induces a lattice on the paths, also pointing out that this lattice is significantly more powerful if the embedding is s-t-planar (Section 4.1). We apply the result to the maximum flow problem in s-t-planar graphs to show that Ford and Fulkerson's uppermost path algorithm [10] is a special case of the two-phase greedy algorithm, and also briefly discuss two other efficient algorithms for maximum flow in planar graphs (Section 4.2). Somewhat off topic, a path-weighted version of the maximum flow problem is presented as well (Section 4.3).

¹Readers already familiar with graphs are also encouraged to at least have a look at Sections 2.2 and 2.3 to get used to some less well-known concepts and notations. A concise index of notations is also included at the end of the thesis.

Finally, the thesis is concluded by a fifth chapter containing a short summary and an outlook on open problems.

1.2 Topics and contributions

We now want to introduce the reader to the three main topics of this thesis – planar graphs, the maximum flow problem and lattices – and provide references to the literature and a few historical notes. We will then close the introduction by summarizing the new results contributed by the thesis.

Planar graphs

Graphs form the basis of many problems in combinatorial optimization. They can be used intuitively to model real-world structures, e.g., traffic networks with edges corresponding to streets and vertices corresponding to junctions. There is a wealth of algorithms for solving basic problems such as finding a shortest path from one vertex to another in a given graph, and also results showing that certain problems can very likely not be solved in polynomial time, most prominently the traveling salesperson problem, which asks for a tour through a graph visiting every vertex exactly once.

However, one can identify classes of graphs for which certain problems can be solved faster than for arbitrary graphs in all their generality. A very important such class are planar graphs. A planar graph is a graph that can be drawn on, or – in more mathematical terms – embedded in, the plane without any two edges intersecting each other. In our example of a traffic network, this corresponds to the absence of bridges and tunnels. In such an embedding of the graph, called a plane graph, the edges partition the plane into several regions, called faces, one of which is unbounded and thus called the infinite face. Using these faces as vertices and rotating the edges such that they connect the very same faces they used to separate yields the dual of the plane graph.

Planar graphs and their duals have interesting properties, which can be exploited by specially adapted algorithms to achieve faster running times that might not be possible on non-planar graphs. Perhaps the most useful property of planar graphs from the algorithmic point of view is the duality of cycles and cuts proved by Whitney in 1932 [36]. It states that a cycle in the dual graph is a cut in the primal graph and vice versa. A detailed introduction to graphs and planar graphs can be found in Sections 2.2 and 2.3, respectively.


It includes an alternative definition of graphs that is more suitable for modeling planar graphs and is based on a lecture on algorithms for planar graphs by Philip Klein [24].

The notion of planar graphs can be further specialized by requiring two designated vertices s and t to be situated on the boundary of the infinite face. This class of graphs, called s-t-planar graphs, is of particular interest for maximum flow computations.

History of maximum flow computations in planar graphs

Network flow theory is among the most important and well-studied areas of combinatorial optimization. It has many real-world applications, such as transportation of goods, optimization of traffic networks and evacuation of buildings, and its theoretical results also yield useful insights into the structure of graphs. The maximum flow problem asks for a flow of maximum value from a designated source vertex s to a designated sink vertex t in a graph with capacity bounds on the edges.

Comprehensive research on the structure of flows and networks has produced a remarkable variety of efficient algorithms for solving the maximum flow problem (and several other related problems). The most important structural insight in network flow theory is the celebrated max-flow/min-cut theorem, which states that the value of a maximum flow equals the capacity of a minimum cut separating the source from the sink. This pioneering result was first published in 1956 by Ford and Fulkerson [10], who later contributed many other important achievements in network flow theory, including a path augmenting algorithm for maximum flow computations [12] (cf. Section 2.4 for more details on flows).

In many practical applications, the underlying network corresponds to a planar graph. A historic example of such an application is given in Figure 1.1. In fact, Ford and Fulkerson's path augmenting algorithm was originally developed as a special version for s-t-planar graphs, based on a conjecture by George Dantzig, the inventor of the simplex method in linear programming. In their seminal paper of 1956 [10], Ford and Fulkerson showed that what they then called the “top-most chain”, i.e., the path comprising the “upper boundary”² of the graph embedding, crosses every minimum cut that separates the source from the sink exactly once. They then applied their newly established max-flow/min-cut theorem to this insight to obtain the correctness of an efficient algorithm, which is now known as the uppermost path algorithm.

The basic idea of this algorithm is to send as much flow as possible along the uppermost path, saturating the capacity of at least one edge on the path. This edge is then deleted and the procedure is repeated until source and sink are disconnected from one another. Unlike the better-known version of the algorithm for general graphs, which uses a residual network, the uppermost path algorithm never needs to take flow back once it has decided to send it along an edge. Figure 1.2 illustrates a maximum flow computation performed by the uppermost path algorithm.

Figure 1.1: A historic real-world example of a planar³ maximum flow problem is this schematic diagram of the railway network of the Western Soviet Union and Eastern European countries by Harris and Ross [16]. According to Schrijver [31], the definition of the maximum flow problem was motivated by the interest of the US Air Force in disrupting the transportation capabilities of their enemies.

²We will later formulate a combinatorial definition of this notion.
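The saturate-and-delete loop just described can be sketched as follows. The sketch is ours, not the thesis's code: `uppermost_path` stands for an oracle returning the current uppermost s-t-path, whose actual computation requires the planar embedding; here it is only emulated by a hand-ordered list of paths for a toy instance.

```python
def uppermost_path_algorithm(capacity, uppermost_path):
    """Saturate-and-delete loop of the uppermost path algorithm (sketch).

    `capacity` maps edges to capacity bounds; `uppermost_path` is an oracle
    that, given the set of surviving edges, returns the current uppermost
    s-t-path as an edge list, or None once s and t are disconnected."""
    flow = {e: 0 for e in capacity}
    alive = set(capacity)
    while (path := uppermost_path(alive)) is not None:
        bottleneck = min(capacity[e] - flow[e] for e in path)
        for e in path:
            flow[e] += bottleneck
            if flow[e] == capacity[e]:   # saturated edge: delete it; flow
                alive.discard(e)         # sent over it is never taken back
    return flow

# Toy instance with the same capacities as Figure 1.2; the oracle is faked
# by listing the s-t-paths from uppermost to lowermost by hand.
cap = {('s', 'a'): 2, ('a', 't'): 1, ('a', 'b'): 2, ('s', 'b'): 2, ('b', 't'): 3}
paths = [[('s', 'a'), ('a', 't')],
         [('s', 'a'), ('a', 'b'), ('b', 't')],
         [('s', 'b'), ('b', 't')]]
fake_oracle = lambda alive: next((p for p in paths if all(e in alive for e in p)), None)

flow = uppermost_path_algorithm(cap, fake_oracle)
print(sum(flow[e] for e in flow if e[0] == 's'))  # maximum flow value: 4
```

As in the algorithm's description, the loop terminates after three productive iterations, with every s-t-path blocked by a saturated edge.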

Since the development of the uppermost path algorithm, many improving and generalizing results have emerged in planar flow computation. The state of the art in maximum flow computation on s-t-planar networks now consists of a method of Hassin [17], which reduces the maximum flow problem to a shortest path problem, combined with a sophisticated linear time shortest path algorithm by Henzinger, Klein, Rao and Subramanian [18]. In 1994, Weihe [35] proposed an O(|V| log(|V|)) algorithm for maximum flow on general planar graphs, which, however, is rather complicated to understand. Furthermore, the running time analysis given in [35] omits some non-trivial details in a pre-processing step. In 2006, Borradaile and Klein [3] established an intuitive generalization of Ford and Fulkerson's uppermost path algorithm, which provably achieves a running time of O(|V| log(|V|)).

³Note that the network in Figure 1.1 has several sources (marked as “origins”) and sinks (the three outgoing edges on the left), which however can all be connected to a single super source and super sink, respectively. After removing the (redundant) edge between the two sources on the upper right, this yields an s-t-planar graph.

Figure 1.2: An example of a maximum flow computed by the uppermost path algorithm within three iterations. The current uppermost path is colored blue, with the bottleneck edge dotted. For every edge, the numbers indicate the flow that is already traversing the edge, plus the flow added in the current iteration, and the capacity bound of the edge. In the last iteration, deleting the bottleneck edge results in a disconnection of s and t. The corresponding cut (dashed red line) is saturated by the flow and thus minimal.

The algorithm of Borradaile and Klein relies on a partial order on the set of s-t-paths in the graph, called the left/right relation, which is a straightforward generalization of Ford and Fulkerson's notion of uppermost path. The relation goes back to a partial order on circulations by Khuller, Naor and Klein [22]. Weihe [35] generalized it to a partial order on s-t-flows in a plane graph, and Klein [23] finally specialized it again to s-t-paths. The order is based on the fact that the difference vector of two paths is a circulation in the graph and that, by cycle/cut duality, those circulations can be represented by a face potential in the dual graph. This partial order leads us to the notion of lattices.

Lattices and the two-phase greedy algorithm

Covering and packing problems form a very general class of linear programs based on set systems over a finite ground set E. They include LP formulations of the vertex cover problem, the minimum spanning arborescence problem, the shortest path problem and the maximum flow problem. Unfortunately, finding optimal integral solutions of these problems is NP-hard in general, and the number of variables or side constraints can be exponential in the cardinality of the ground set. However, a result by Hoffman and Schwartz [19] published in 1978 suggested that the problem becomes significantly easier if the underlying set system is a lattice with certain additional properties.
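To fix ideas, one common generic formulation of such a pair of problems over a set system $\mathcal{L} \subseteq 2^E$ with capacities/costs $c \in \mathbb{R}_+^E$ and a demand function $r : \mathcal{L} \to \mathbb{R}_+$ is the following; the notation here is schematic and not necessarily the one the thesis fixes in Section 3.1.

```latex
\begin{align*}
\text{(Covering)}\quad & \min \; \sum_{e \in E} c_e x_e
  && \text{s.t.}\;\; \sum_{e \in L} x_e \ge r(L) \quad \forall L \in \mathcal{L},
  \qquad x \ge 0,\\[2pt]
\text{(Packing)}\quad & \max \; \sum_{L \in \mathcal{L}} r(L)\, y_L
  && \text{s.t.}\;\; \sum_{L \in \mathcal{L} :\, e \in L} y_L \le c_e \quad \forall e \in E,
  \qquad y \ge 0.
\end{align*}
```

The two programs are LP duals of each other; the maximum flow problem, for instance, arises as a packing problem with $\mathcal{L}$ the set of s-t-paths, $r \equiv 1$ and $c$ the edge capacities.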

Lattices are partially ordered sets in which every pair of incomparable elements has a least common upper bound, called join, and a greatest common lower bound, called meet. In this thesis, we assume that the partially ordered set is in fact a set system over a finite ground set. In this case, it is often natural to require that the meet and join of two sets consist only of elements of those two sets, a property implied by submodularity. Another helpful property a lattice can fulfill is consecutivity, i.e., any element of the ground set that is included in two comparable sets of the lattice must be included in every set in between. The definition of consecutivity is related to the total unimodularity of interval matrices, a result in integer linear programming. In fact, Hoffman and Schwartz used this result to show that covering and packing problems over submodular and consecutive lattices fulfill total dual integrality. Details on lattices, submodularity and consecutivity can be found in Section 3.1, including a concise proof of Hoffman and Schwartz' result.
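Consecutivity can be made concrete with a small check. The following sketch is illustrative (our own notation, not the thesis's formalism): a chain of lattice elements is given as a list of ground-element sets, listed in the lattice order, and the check verifies that no ground element "skips" an intermediate chain member.

```python
from itertools import combinations

def is_consecutive(chain):
    """Check consecutivity along a chain X_0 < X_1 < ... of lattice
    elements, each given as its set of ground elements: every ground
    element contained in two sets of the chain must be contained in
    all sets lying between them in the order."""
    for i, j in combinations(range(len(chain)), 2):
        for e in chain[i] & chain[j]:               # e occurs in X_i and X_j
            if any(e not in chain[k] for k in range(i + 1, j)):
                return False                        # e skips a set in between
    return True

# Think of these as edge sets of three s-t-paths ordered by some partial
# order (not by inclusion!): edge 'a' lies on the bottom and top path.
print(is_consecutive([{'a', 'b'}, {'a', 'c'}, {'a', 'd'}]))  # True
print(is_consecutive([{'a', 'b'}, {'c', 'd'}, {'a', 'd'}]))  # False: 'a' skips the middle set
```

Note that the order along the chain is abstract; for set systems ordered by inclusion the property would hold trivially, which is why the nontrivial cases in this thesis come from orders such as the left/right relation.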

Based on Hoffman and Schwartz' definition of lattice polyhedra, Frank [13] established a general two-phase greedy algorithm for solving covering and packing problems on submodular and consecutive lattices, assuming that the lattice is given by an oracle. He first introduced his algorithm as a direct generalization of a minimum spanning arborescence algorithm by Fulkerson [14]. In Section 3.2, we discuss the two-phase greedy algorithm based on a succinct framework presented by Faigle and Peis [9], who also extended the algorithm to supermodular lattices.

Contributions

Our discussion of the two-phase greedy algorithm in Section 3.2 will include a fast implementation of the algorithm based on a simple heap data structure. To our knowledge, this is the first presentation of an O(|E| log(|E|)) implementation⁴ of the algorithm. The implementation also yields a general technique for storing chains in consecutive structures in space linearly bounded in the size of the ground set.

In Section 4.1, we show that the left/right relation actually induces a submodular lattice on the set of simple s-t-paths in a plane graph, and furthermore, that this lattice is even consecutive if the underlying embedding is s-t-planar.⁵ We also point out several other properties of the relation that hold for s-t-planar embeddings but not for plane graphs in general, and finally show that it is not possible to achieve consecutivity with any submodular lattice in general planar graphs – no matter how we define the partial order.

⁴The running time stated here ignores the oracle operations, which depend on the particular problem.
⁵Up to this point, the lattice property has only been proven for the left/right relation on circulations [22], where it basically corresponds to a componentwise-min/max lattice on n-tuples of numbers. No results on paths have been known until now, however.


The above result on s-t-planar graphs implies that Ford and Fulkerson's uppermost path algorithm is a special case of the two-phase greedy algorithm. In Section 4.2, we show how to compute a sequence of uppermost paths in overall linear time, yielding an implementation of the oracle needed by the two-phase greedy version of the uppermost path algorithm. Together with the storage technique for chains mentioned above, this makes it possible to store path decompositions of s-t-planar flows in space O(|V|). We also discuss Hassin's method of reducing the maximum flow problem to a shortest path problem in detail and show that the path lattice of an s-t-plane graph is a restriction of a cut lattice in the dual graph to simple cuts. We state the leftmost path algorithm of Borradaile and Klein and their proof of correctness, as it yields a beautiful argument based on cycle/cut duality and interdigitating spanning trees. We amend their results by showing that the paths chosen by the algorithm in fact comprise a chain in the path lattice established in Section 4.1. Moreover, we show that the right-first search algorithm introduced by Ripphausen-Lipa, Wagner and Weihe [29] actually constructs the leftmost (i.e., maximum w.r.t. the left/right relation) simple v-t-path for every vertex v – so far, the proof of the existence of such a unique leftmost path, which is an immediate consequence of the lattice property of the left/right relation, has not been given without premises (as, e.g., the absence of clockwise cycles in [3]).

As the results on lattice polyhedra and the two-phase greedy algorithm allow for a weight function on the lattice elements, we discuss a path-weighted version of the maximum flow problem in Section 4.3. Although our maximum weighted flow problem can be seen as a special case of the much more general weighted abstract flow model by Martens and McCormick [27], our framework operates with a completely different notion of supermodularity. Our results imply that the dual minimum weighted cut problem is totally dual integral on s-t-planar graphs if the weight function is supermodular, and that the uppermost path algorithm solves the problem optimally if the function is additionally monotone increasing. A further result on weighted flows in general shows that the path-weighted value of a flow does not depend on its decomposition if and only if the path weights can be expressed by weights on the arcs.


2 Preliminaries and basic definitions

2.1 Basic concepts

In order to understand all results discussed in this thesis, the reader should be familiar with essential mathematical notions of linear algebra (in particular vector spaces and permutations), as well as the fundamental concepts of algorithms and their running times, in particular the O-notation, and some elementary results of complexity theory. Furthermore, a basic knowledge of linear (and integer) programming is useful, as most problems occurring in this thesis are formulated as linear programs, and at some points results of linear programming theory are used. For the sake of self-containment, we shall however briefly cite most, and even prove some, of the elementary results used in this thesis. In addition, pointers to introductory literature are provided. A reader well-acquainted with the above topics may wish to skip this section.

Equivalence relations, partitions and permutations

Permutations and their orbits play a central role in the definition of combinatorial graph embeddings we are going to present. For this reason, we briefly review the basic facts needed. More detailed information on this topic can be found in any basic linear algebra book [1]. The reader should know that an equivalence relation on a set E is a reflexive, symmetric and transitive relation on E and that such a relation induces a partition of E into disjoint subsets, called equivalence classes. If E is finite, a bijective function π : E → E is called a permutation. It is easy to verify that in this case x ∼ y :⇔ ∃ j ∈ N : π^j(x) = y defines an equivalence relation. The equivalence classes induced by this relation are called orbits of π. In fact, every permutation can be decomposed into the concatenation of cyclic permutations on the orbits, i.e., permutations of the form (e1, . . . , ek) with π(ej) = ej+1 for j < k and π(ek) = e1.
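The orbit decomposition is easy to compute. The following sketch (ours, not from the thesis) represents a permutation of a finite set as a dictionary mapping each element to its image and follows π from each unvisited element until the cycle closes:

```python
def orbits(perm):
    """Decompose the permutation, given as a dict e -> perm(e), into
    its orbits, i.e., the cycles of its cycle decomposition."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:       # follow pi until we return to `start`
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(cycle)
    return cycles

# The permutation (1 2 3)(4 5)(6) on E = {1, ..., 6}:
print(orbits({1: 2, 2: 3, 3: 1, 4: 5, 5: 4, 6: 6}))  # [[1, 2, 3], [4, 5], [6]]
```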

Algorithms, running times and NP -hardness

The design and analysis of algorithms is a main motivation for research in combinatorial optimization. Since algorithms are supposed to solve problems efficiently, the notion of running times plays a crucial role here. We assume that the reader is familiar with the concept of running times and acquainted with the use of the O-notation, and furthermore has basic knowledge of complexity theory, in particular the notion of NP-hardness. A detailed introduction to algorithms, their running times and elementary data structures can be found in [34]. Further details, including basic notions of complexity theory, can be found in [25]. Yet, we also give a very short and naive introduction to these topics.

The running time of an algorithm is usually given as a function f(n) measuring the maximum number of elementary operations performed by the algorithm for any input of size n or less. As constant factors are not interesting from the theoretical (and, up to a certain degree, also from the practical) point of view, we crucially simplify the analysis of running times by using the O-notation, which ignores constant factors.

Definition 2.1. Let f, g : N → R+ be two functions. We write f = O(g) and say f is asymptotically bounded by g if there is a c ∈ N with f(n) ≤ c·g(n) for all n ∈ N.
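As a small worked instance of the definition (our example, not the thesis's): for $f(n) = 3n^2 + 5n$ and $g(n) = n^2$ we may take $c = 8$, since

```latex
f(n) = 3n^2 + 5n \;\le\; 3n^2 + 5n^2 \;=\; 8n^2 \;=\; 8\,g(n)
\qquad \text{for all } n \in \mathbb{N},
```

so $f = O(n^2)$, while no constant $c$ satisfies $n^2 \le c\,n$ for all $n$, so $g \ne O(n)$.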

We usually aim to find algorithms that solve a given problem in time polynomial in the encoding size of the instance, as super-polynomial running times grow so fast that instances of practical size often cannot be solved in reasonable time. A class of special interest are decision problems (problems that can be answered with “yes” or “no”) that have a certificate for every “yes”-instance enabling us to check in polynomial time that the answer is indeed “yes”. The class of these problems is called NP. Very often, a problem can be reduced to another problem, i.e., we can solve the first problem by solving polynomially many instances of the latter problem. If every problem in NP can be reduced to a particular problem A, this problem A is called NP-hard, or NP-complete if additionally it is in NP. Cook [5] proved that the problem of checking the satisfiability of a boolean formula in conjunctive normal form is NP-complete. As the relation “A can be reduced to B” is transitive, reducing any NP-hard problem to another problem implies that the latter problem is also NP-hard. As providing a polynomial time algorithm for an NP-hard problem would imply that all problems in NP could be solved in polynomial time (which seems very unlikely), NP-hard problems are usually considered “intractable”. Among the most famous NP-hard problems are the traveling salesperson problem, the partition problem and the vertex cover problem, which we shall encounter later in this thesis (cf. Karp's famous list of 21 NP-complete problems [20]).

The heap data structure

We loosely describe the heap data structure, which is used by the two-phase greedy algo-

rithm presented later. A heap stores a number of elements and a key, which we assume to

be a rational number, for each element. It supports the following operations:

• getMinimumElement and getMinimumKey return an element with the smallest key

currently in the heap and the value of the smallest key, respectively. They both can

be performed in constant time.


• removeElement(e) removes the element e, which is passed by a pointer, in time

O(log(n)), with n being the number of elements currently in the heap.

• insertElement(e, γ) inserts element e with key γ into the heap in time O(log(n)),

with n being the number of elements currently in the heap.

Our discussion of heaps is based on the implementation presented in [34]. Here, an

array is used to store the actual data. As arrays are initialized with a certain size, i.e., a

maximum number of elements that cannot be exceeded, we will need to specify a maximum

size for the heap at initialization of the heap. This however is no limitation to our purposes

as we will know a bound on the maximum number of elements in the heap a priori. Assume

that the heap contains n elements a1, . . . , an, which are stored at the first n entries in the

array along with their keys (for simplicity, we will from now on identify the elements

with their keys). The operations described above will be designed in such a way that the

following heap property is maintained.

Notation. For i ∈ {1, . . . , n}, we say the heap property at position i is fulfilled if

(2i > n or ai ≤ a2i) and (2i+ 1 > n or ai ≤ a2i+1).

We refer to the elements a2i and a2i+1 (if they exist) as children of ai and conversely to

ai as the parent of a2i and a2i+1.

Note that a heap with at most one element trivially fulfills the heap property at every

position. The notions of child and parent intuitively induce a binary tree structure on the heap. The heap property states that the value of a node in the tree is at most the value of each of its children. Thus the element a1 is a minimum element if the heap property is fulfilled at

every node. We now describe how the heap operations can be implemented. The key idea

is that the depth of the tree, i.e., the maximum length of a parent-children-path from a1

to some other element, is bounded by log(n).

• getMinimumElement and getMinimumKey simply return a1 and its key, respectively.

• removeElement(e): Let k be the position of e in the array. We overwrite e in the

heap by moving the element f = an to the position k. Clearly, the heap property

still holds at every position except for k. If the heap property also holds at k,

everything is fine. Otherwise, we exchange ak with a2k if a2k ≤ a2k+1, and with

a2k+1 otherwise. In this way, the heap property is restored at k. We repeat this

procedure at the new position of f as long as the heap property is violated. The

property is trivially fulfilled if f arrives at a position with index i > n/2. As the index

increases by a factor of 2 in every iteration, the running time of the operation is

bounded by O(log(n)).


• insertElement(e, γ): We insert e at position n + 1. If the key of the parent of

e is at most γ, the heap property is fulfilled at every position. Otherwise

we repeatedly exchange e with the current position of its parent, until the heap

property is fulfilled – this is trivially true if e arrives at a1, which happens after at

most O(log(n)) exchanges, as the index decreases by a factor of 1/2 in every iteration.
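The operations above can be sketched as follows. This is a minimal Python illustration of the array-based scheme just described, not the implementation of [34]: it uses 0-based indexing (so the children of position i are 2i + 1 and 2i + 2), and a position dictionary stands in for the pointers by which elements are passed.

```python
# Illustrative array-based heap supporting the three operations described
# above. 0-based indexing; element positions tracked in a dictionary so that
# remove_element can locate an element in O(1).

class Heap:
    def __init__(self):
        self.a = []        # list of (key, element); a[0] plays the role of a_1
        self.pos = {}      # element -> current index in self.a

    def get_minimum_element(self):
        return self.a[0][1]                      # O(1)

    def get_minimum_key(self):
        return self.a[0][0]                      # O(1)

    def insert_element(self, e, key):
        self.a.append((key, e))
        self.pos[e] = len(self.a) - 1
        self._sift_up(len(self.a) - 1)           # O(log n)

    def remove_element(self, e):
        k = self.pos.pop(e)
        last = self.a.pop()                      # move the last element ...
        if k < len(self.a):
            self.a[k] = last                     # ... to the freed position k
            self.pos[last[1]] = k
            self._sift_down(k)                   # restore the heap property
            self._sift_up(k)                     # (the moved element may also
                                                 # be smaller than its parent)

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.pos[self.a[i][1]] = i
        self.pos[self.a[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self.a[(i - 1) // 2][0] > self.a[i][0]:
            self._swap(i, (i - 1) // 2)          # exchange with parent
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.a)
        while True:
            c = 2 * i + 1                        # left child
            if c >= n:
                return
            if c + 1 < n and self.a[c + 1][0] < self.a[c][0]:
                c += 1                           # pick the smaller child
            if self.a[i][0] <= self.a[c][0]:
                return                           # heap property restored
            self._swap(i, c)
            i = c

h = Heap()
h.insert_element("a", 5); h.insert_element("b", 2); h.insert_element("c", 7)
assert h.get_minimum_element() == "b"
h.remove_element("b")
assert h.get_minimum_key() == 5
```

Unlike the text, the sketch also sifts up after a removal, since in general the element moved to position k may be smaller than its new parent.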

Linear programming, duality and complementary slackness

Linear programs and integer linear programs form a very general class of optimization

problems. We assume that the reader is familiar with the basic concepts and fundamental

results in linear programming theory, the most important of which we will repeat here

briefly. A detailed introduction to linear programming can be found in [2]. A very densely

written and comprehensive work on both linear and integer linear programming is [30].

The task of a linear program is to maximize a linear objective function subject to a

system of linear inequalities.

Problem 2.2 (Linear Programming).

Given: A ∈ Rm×n, b ∈ Rm, c ∈ Rn

Task: Find x∗ ∈ P := {x ∈ Rn : Ax ≤ b, x ≥ 0} with cTx∗ = max{cTx : x ∈ P}, or

decide that the maximum is infinite, or that P = ∅.

The dual of a linear program (P ) max{cTx : Ax ≤ b, x ≥ 0} is the linear program

(D) min{bT y : AT y ≥ c, y ≥ 0}. It is easy to check that the dual of the dual is the primal

program again. The concept of duality is interesting as it yields powerful optimality criteria for feasible solutions of the primal and dual programs.

Theorem 2.3 (Duality theorem of linear programming). If both (P ) and (D) have feasible

solutions, then max{cTx : Ax ≤ b, x ≥ 0} = min{bT y : AT y ≥ c, y ≥ 0}.

Proof. We only prove that the value of the dual is an upper bound on the primal (weak

duality): Let x be a feasible solution of (P ) and y be a feasible solution of (D). Then

cTx ≤ (AT y)Tx = yTAx ≤ yT b, as x, y ≥ 0.
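The weak duality argument can be checked numerically. The following Python fragment, on a small instance of our own choosing, verifies cTx ≤ yT b for one feasible primal/dual pair:

```python
# Weak duality on a small LP of our own choosing:
# for any primal-feasible x and dual-feasible y, c^T x <= y^T b.

A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [4.0, 6.0]
c = [3.0, 2.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mat_vec(M, v):
    return [dot(row, v) for row in M]

def primal_feasible(x):
    # x >= 0 and Ax <= b
    return all(xi >= 0 for xi in x) and \
        all(r <= bi + 1e-9 for r, bi in zip(mat_vec(A, x), b))

def dual_feasible(y):
    # y >= 0 and A^T y >= c
    At = list(map(list, zip(*A)))
    return all(yi >= 0 for yi in y) and \
        all(r >= ci - 1e-9 for r, ci in zip(mat_vec(At, y), c))

x = [1.0, 1.0]                  # feasible: Ax = (3, 4) <= (4, 6)
y = [1.0, 1.0]                  # feasible: A^T y = (4, 3) >= (3, 2)
assert primal_feasible(x) and dual_feasible(y)
assert dot(c, x) <= dot(b, y)   # weak duality: 5 <= 10
```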

A concise proof of the duality theorem (and many other fundamental results of lin-

ear programming) using the fundamental theorem of linear inequalities can be found in

Chapter 7 of [30].


Theorem 2.4 (Complementary slackness theorem). Let x be a feasible solution of (P )

and y be a feasible solution of (D). Then x and y are optimal solutions if and only if

xi > 0 ⇒ (AT y)i = ci for all i ∈ {1, . . . , n},
yj > 0 ⇒ (Ax)j = bj for all j ∈ {1, . . . , m}.

Proof.

yT b − cTx = yT b − yTAx + yTAx − cTx
= yT (b − Ax) + (yTA − cT )x
= ∑_{j=1}^{m} yj(bj − (Ax)j) + ∑_{i=1}^{n} ((yTA)i − ci)xi ≥ 0,

where every summand in the two sums is non-negative by feasibility of x and y,

with equality if and only if each of the summands is 0, which is equivalent to the con-

ditions of complementary slackness. By the duality theorem, cTx = yT b is equivalent to

optimality.¹
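As a small sanity check, the following Python fragment (instance chosen by us, exact arithmetic via fractions) verifies the complementary slackness conditions for a primal/dual pair and confirms that their objective values coincide:

```python
# Complementary slackness certifying optimality, on an LP of our own choosing:
# max 3x1 + 2x2  s.t.  x1 + 2x2 <= 4, 3x1 + x2 <= 6, x >= 0.
from fractions import Fraction as F

A = [[F(1), F(2)], [F(3), F(1)]]
b = [F(4), F(6)]
c = [F(3), F(2)]
x = [F(8, 5), F(6, 5)]   # candidate primal solution (8/5, 6/5)
y = [F(3, 5), F(4, 5)]   # candidate dual solution (3/5, 4/5)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def mat_vec(M, v):
    return [dot(row, v) for row in M]

At = list(map(list, zip(*A)))
Ax, Aty = mat_vec(A, x), mat_vec(At, y)

# the two complementary slackness conditions
assert all(xi == 0 or Aty[i] == c[i] for i, xi in enumerate(x))
assert all(yj == 0 or Ax[j] == b[j] for j, yj in enumerate(y))
# hence both solutions are optimal: the objective values coincide
assert dot(c, x) == dot(b, y)    # both equal 36/5
```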

The set of feasible solutions of a linear program is a polyhedron, i.e., the intersection of

finitely many halfspaces. Using the fundamental theorem of linear inequalities, one can

also show that every polyhedron can be written as the Minkowski sum of the convex hull

of finitely many vertices and a finitely generated convex cone (cf. Chapter 7 of [30]). It

easily follows by convexity of the objective function that whenever a linear program has

an optimal solution, there also is an optimal solution that is a vertex of the underlying polyhedron (provided the polyhedron is pointed).

Integer linear programming and total dual integrality

Sometimes, one is interested only in integer solutions of a linear program, which then be-

comes an integer (linear) program. In contrast to linear programming, which can be solved

by the ellipsoid method in polynomial time (cf. Khachiyan [21]), integer programming can

easily be seen to be NP -hard (cf. Example 3.12). However, in some cases the polyhedron

comprising the set of feasible solutions of a linear program has only integral vertices. In

this case, solving the linear program is equivalent to solving the integer linear program.

An important class of linear programs with this property can be characterized by total

dual integrality. Note that from now on we will restrict ourselves to rational input data,

as otherwise the convex hull of the set of integral solutions is not necessarily a polyhedron.

Definition 2.5 (Total dual integrality). Let A ∈ Qm×n, b ∈ Qm. The system of linear inequalities Ax ≤ b, x ≥ 0 is totally dual integral if the corresponding dual program min{bT y : AT y ≥ c, y ≥ 0} has an integral optimal solution for every integral objective function c ∈ Zn for which the minimum is finite.

¹In this thesis, we will indeed only use the sufficiency of the complementary slackness conditions, which already follows from the weak duality proven above.

The fundamental insight that motivates this definition is the following result due to

Edmonds and Giles [8] (also cf. Chapter 22 of [30] for a proof). It states that the vertices

of a polyhedron P are integral if and only if the maximum value of any integral objective

function over P is integral (as long as it is finite).

Theorem 2.6. Let A ∈ Qm×n, b ∈ Qm and define P := {x ∈ Rn : Ax ≤ b, x ≥ 0}. Then max{cTx : x ∈ P} = max{cTx : x ∈ P ∩ Zn} for every c ∈ Qn if and only if max{cTx : x ∈ P} ∈ Z for every c ∈ Zn for which the maximum is finite.

Corollary 2.7. If Ax ≤ b, x ≥ 0 is totally dual integral and b is integral, then the linear

program max{cTx : Ax ≤ b, x ≥ 0} has an integral optimal solution for every c ∈ Qn.

Another particularly useful class of linear programs with integral optimal solutions is based on totally unimodular matrices.

Definition 2.8 (Total unimodularity). A matrix A ∈ Zm×n is totally unimodular if each

of its subdeterminants has value −1, 0 or 1.

In particular, all entries of a totally unimodular matrix are −1, 0 or 1. As vertices of a

polyhedron fulfill a full column-rank subsystem of the defining system of linear inequalities

with equality, Cramer’s rule implies the following result (cf. [30] for more details).

Theorem 2.9. Let A ∈ {−1, 0, 1}m×n be totally unimodular and b ∈ Zm. Then the polyhedron {x : Ax ≤ b, x ≥ 0} has only integral vertices.

Examples of totally unimodular matrices are interval matrices, i.e., 0-1-matrices for

which in each column all 1-entries are consecutive. They will occur in a later section in

the context of so-called consecutive lattices, and applying the above results yields total

dual integrality of certain linear programs over these lattices.

Example 2.10 (Interval matrices). An interval matrix is a matrix A ∈ {0, 1}m×n such that for all j ∈ {1, . . . , n} and for all i1, i2 ∈ {1, . . . , m} with Ai1j = 1 and Ai2j = 1, we have Akj = 1 for all k with i1 ≤ k ≤ i2. Interval matrices are totally unimodular.

Proof. As any submatrix of an interval matrix is again an interval matrix, it is sufficient

to show that det(A) ∈ {−1, 0, 1} for any square interval matrix A ∈ {0, 1}n×n. We show this by induction, with the case n = 1 being trivial. So let n > 1. If no column of A has a 1-entry in the first row, then det(A) = 0. Otherwise, among all columns of the matrix that have a 1-entry in the first row, choose one that contains a minimum number of 1-entries. Let l be the index of this column and k := max{i : Ail = 1}. Let Ā be the matrix that arises from A by replacing row i with Ai· − A1· for i ∈ {2, . . . , k}. For any column j ∈ {1, . . . , n}, either A1j = 0 and therefore Ā·j = A·j, or Āij = 0 for i ∈ {2, . . . , k}. Thus, the matrix Â ∈ Z(n−1)×(n−1) that arises from Ā by removing the first row and the lth column is an interval matrix. Applying the induction hypothesis yields det(A) = det(Ā) = (−1)1+l det(Â) ∈ {−1, 0, 1}.
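The statement can also be tested by brute force on a small instance. The following Python sketch, with a matrix of our own choosing, enumerates all square submatrices of an interval matrix and checks that their determinants lie in {−1, 0, 1}:

```python
# Brute-force check of total unimodularity for one small interval matrix
# (chosen by us): every square submatrix has determinant -1, 0 or 1.
from itertools import combinations

def det(M):
    # integer determinant via cofactor expansion along the first row
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

# an interval matrix: in each column, the 1-entries are consecutive
A = [[1, 0, 0, 1],
     [1, 1, 0, 1],
     [0, 1, 1, 1],
     [0, 0, 1, 0]]

for k in range(1, len(A) + 1):
    for rows in combinations(range(len(A)), k):
        for cols in combinations(range(len(A[0])), k):
            sub = [[A[i][j] for j in cols] for i in rows]
            assert det(sub) in (-1, 0, 1)
```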

2.2 Graphs

In most common definitions of graphs, be it directed or undirected, the central objects are

the vertices, while the edges are defined in terms of the vertices, i.e., as relations on the

set of vertices. However, there is an alternative way of definition focussing on the edges,

which is more convenient when dealing with planar graphs, as we do throughout large

parts of this thesis.

The definitions presented in this section and the next are mainly taken from or inspired

by a lecture on algorithms for planar graphs given by Philip Klein at Brown University

[24], although many proofs were added and additional results introduced.

2.2.1 The model

In the model, each edge is equipped with two anti-parallel darts that represent the two

possible orientations of the edge. A vertex then is defined as the set of its outgoing darts.

The darts furthermore induce a notion of direction on the graph (which we may respect

or not, however it is convenient for us), by entitling one dart of every edge to be the arc

and the other to be the anti-arc. An example of a graph is given in Figure 2.1.

Definition 2.11 (Graph). A graph G = (V,E) consists of a set of edges E and a partition

V of its dart set←→E := E × {−1, 1} into disjoint subsets, called vertices.

For a dart d = (e, i) ∈←→E , we define rev(d) := (e,−i) to be the reverse of d, tailG(d)

to be the unique subset v ∈ V with d ∈ v and headG(d) := tailG(rev(d)) (we omit the

subscript G if there is no ambiguity). A dart in−→E := E × {1} is referred to as an arc, a

dart in←−E := E×{−1} is referred to as an anti-arc. We will write −→e for an arc (e, 1) and

←−e for an anti-arc (e,−1). For D ⊆←→E we define E(D) := {e ∈ E : −→e ∈ D or ←−e ∈ D}.

A vertex v ∈ V and an edge e ∈ E are incident, if −→e ∈ v or ←−e ∈ v. Two vertices

v, w ∈ V are adjacent, if there is a d ∈ v with rev(d) ∈ w. A dart d ∈ v is referred to as an

outgoing dart of v, while a dart d with rev(d) ∈ v is referred to as an incoming dart of v.

A loop is an edge e ∈ E with tail(−→e ) = head(−→e ). Two darts d1, d2 ∈←→E are parallel

if tail(d1) = tail(d2) and head(d1) = head(d2). They are anti-parallel, if d1 and rev(d2)

are parallel. Two edges e1, e2 ∈ E are parallel if −→e1 and −→e2 are parallel or anti-parallel. A

graph G is simple if it does not contain loops or parallel edges.
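The dart-based model translates directly into code. The following Python sketch uses our own encoding (a dart is a pair of an edge name and ±1, a vertex is a set of outgoing darts) to rebuild the graph of Figure 2.1:

```python
# Sketch of the dart-based graph model: darts are pairs (edge, +1/-1),
# and a vertex is simply the set of its outgoing darts (Definition 2.11).

def rev(d):
    e, i = d
    return (e, -i)

# the partition from Figure 2.1 (vertex = set of outgoing darts)
V = {
    "v1": {("e1", 1), ("e2", 1)},
    "v2": {("e1", -1), ("e3", 1), ("e5", 1), ("e6", -1),
           ("e7", 1), ("e7", -1)},
    "v3": {("e2", -1), ("e4", 1), ("e5", -1), ("e6", 1)},
    "v4": {("e3", -1), ("e4", -1)},
}

def tail(d):
    # the unique vertex containing d
    return next(v for v, darts in V.items() if d in darts)

def head(d):
    return tail(rev(d))

assert tail(("e1", 1)) == "v1" and head(("e1", 1)) == "v2"
# e7 is a loop: both endpoints are v2
assert tail(("e7", 1)) == head(("e7", 1)) == "v2"
# V is a partition of the dart set: 7 edges, 14 darts, no repetitions
all_darts = [d for darts in V.values() for d in darts]
assert len(all_darts) == len(set(all_darts)) == 14
```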

Figure 2.1: The depicted graph corresponds to the partition v1 = {−→e1, −→e2}, v2 = {←−e1, −→e3, −→e5, ←−e6, −→e7, ←−e7}, v3 = {←−e2, −→e4, ←−e5, −→e6}, v4 = {←−e3, ←−e4}. The graph is not simple, as e7 is a loop and e5 and e6 are parallel edges (although their arcs are anti-parallel).

Sometimes we want to consider only a part of a given graph, a subgraph. Note that whenever we restrict to a smaller edge set, the sets defining the vertices change as well. Fortunately, there is an intuitive way to identify the vertices of a graph with those of a subgraph, which becomes clear in the following definition. However, as our model does not allow singletons (isolated vertices without edges), some vertices may vanish when edges are removed from the graph. This does not affect our purposes.

Definition 2.12 (Subgraph). Let G = (V,E) be a graph. A subgraph of G is a graph

G[F ] = (VF , F ), where F ⊆ E and VF contains a corresponding vertex vF := v ∩←→F for

every vertex v ∈ V of the original graph (whenever the intersection is non-empty).

2.2.2 Walks, connectedness, trees and cuts

Many combinatorial optimization problems are based on the fact that in a graph vertices

are connected by sequences of darts, called walks. The most elemental version of a walk

is a simple path or simple cycle.

Definition 2.13 (Walk, path, cycle). A walk or x-y-walk is a non-empty sequence of

darts d1, . . . , dk such that head(di) = tail(di+1) for i ∈ {1, . . . , k− 1} and x = tail(d1) and

y = head(dk). If for all darts of an x-y-walk the underlying edges are pairwise distinct,

then the walk is called a path or x-y-path if x ≠ y, or a cycle if x = y. A path or cycle is

called simple if the heads of all its darts are pairwise distinct. A walk, path or cycle is

directed if it only contains arcs. We say a vertex v ∈ V is on the walk (path, cycle), if

there is a dart d in the walk with head(d) = v or tail(d) = v.

If P = d1, . . . , dk is an x-y-walk and Q = d′1, . . . , d′k′ is a y-z-walk, we denote the x-z-walk d1, . . . , dk, d′1, . . . , d′k′ by P ◦ Q. We denote the y-x-walk rev(dk), . . . , rev(d1) by rev(P ). If P is a simple path or cycle and tail(di) = v, head(dj) = w for i < j, we denote the v-w-path di, . . . , dj by P [v, w].

For convenience, we will sometimes identify a path or cycle with the set of its darts

(instead of a sequence), particularly when the path or cycle is simple and hence the order

of the sequence is already determined by the set. Furthermore, the proof of the following


lemma describes how we can obtain a simple path with same endpoints from a given walk

by skipping certain darts. In the same manner we can obtain a simple cycle from a given

(possibly non-simple) cycle.

Lemma 2.14. Every x-y-walk contains a subsequence that is a simple x-y-path. Any cycle

contains a subsequence that is a simple cycle.

Proof. Let d1, . . . , dk be an x-y-walk. Starting with i_1 := max{j : tail(dj) = x}, define iteratively i_{l+1} := max{j : tail(dj) = head(d_{i_l})} if head(d_{i_l}) ≠ y and stop otherwise. In every iteration, head(d_{i_l}) ≠ y implies i_l < k and thus tail(d_{i_l + 1}) = head(d_{i_l}). So the maximum always exists and is larger than i_l. This implies that i_l < i_{l+1} and hence the procedure must terminate at some point with head(d_{i_L}) = y for some L ≤ k. Moreover, the heads of the darts d_{i_1}, . . . , d_{i_L} must be pairwise distinct, as otherwise head(d_{i_l}) = head(d_{i_{l′}}) implies i_{l+1} = i_{l′+1} by construction, a contradiction to the strict monotonicity of the sequence. Consequently, d_{i_1}, . . . , d_{i_L} is a simple x-y-path.

Now let d1, . . . , dk be a cycle. d1, . . . , dk−1 is a tail(d1)-head(dk−1)-walk, which by the

first part of the lemma contains a simple tail(d1)-head(dk−1)-path P . Appending dk to P

yields a simple cycle, as rev(dk) /∈ P .
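The extraction procedure from the proof can be sketched as follows. This is a Python illustration; for simplicity a dart is encoded here as a (tail, head) pair, which is all the procedure inspects:

```python
# Sketch of the proof of Lemma 2.14: given an x-y-walk, repeatedly jump to
# the LAST dart of the walk leaving the current vertex. The heads of the
# selected darts are then pairwise distinct, so the result is a simple path.
# (Darts encoded as (tail, head) pairs, our own simplification.)

def simple_path(walk):
    tail = lambda d: d[0]
    head = lambda d: d[1]
    x, y = tail(walk[0]), head(walk[-1])
    # i_1 := max{ j : tail(d_j) = x }
    i = max(j for j, d in enumerate(walk) if tail(d) == x)
    path = [walk[i]]
    while head(path[-1]) != y:
        # i_{l+1} := max{ j : tail(d_j) = head(d_{i_l}) }
        i = max(j for j, d in enumerate(walk) if tail(d) == head(path[-1]))
        path.append(walk[i])
    return path

# a walk from a to d that revisits b; the extracted subsequence is simple
walk = [("a", "b"), ("b", "c"), ("c", "b"), ("b", "d")]
assert simple_path(walk) == [("a", "b"), ("b", "d")]
```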

Note that not every x-x-walk necessarily contains a cycle or simple cycle. For example,

the short sequence d, rev(d) is already a tail(d)-tail(d)-walk for any d ∈←→E , but it only

contains a cycle if d is a loop. The following lemma characterizes simple paths and simple

cycles as inclusionwise minimal.

Lemma 2.15. An x-y-walk is a simple x-y-path if and only if it contains no proper

subsequence that is an x-y-walk. A cycle is a simple cycle if and only if it contains no

proper subsequence that is a cycle.

Proof.

“⇒”: Let d1, . . . , dk be a simple x-y-path. Let d_{i_1}, . . . , d_{i_l} be a subsequence that is an x-y-walk as well. As dk is the only dart with head(dk) = y in the simple path, d_{i_l} = dk

must hold. Inductively, dij = dj and k = l follows. Thus, the subsequence is not

proper. The same argument holds for simple cycles.

“⇐”: By Lemma 2.14, every x-y-walk (cycle) contains a simple x-y-path (simple cycle) as

subsequence. If this subsequence is not proper, it must be the walk (cycle) itself.

It is easy to check that the relation “There is a path from v to w or v = w” is transitive

(and trivially reflexive), and that the relation “There is a path from v to w and vice versa,

or v = w” is additionally symmetric and hence an equivalence relation. The following

definition makes use of this fact.


Definition 2.16 (Connected components). For a set of darts D ⊆←→E we define the two

relations

v →D w :⇔ There is a path from v to w using only darts in D or v = w.

v ↔D w :⇔ v →D w and w →D v.

The equivalence classes of the equivalence relation ↔_{←→E} are called the connected components of G, while the equivalence classes of ↔_{−→E} are called the strongly connected components of G. A graph is connected if it consists of one single connected component.

Remark 2.17 (Edge contraction). Given a graph, we may want to contract a certain

edge, i.e., to remove it from the graph and to merge its two endpoints together to a single

vertex that contains the outgoing darts of both of them. Using the ↔-relation described

above we can establish the connection between vertices of a graph and the corresponding

merged vertices, even after several contractions, in the following way. Let EC be the set

of edges that were contracted (the order does not matter). Let (Vi)i∈I be the partition of

V induced by ↔←→EC

. The contracted graph GC contains a vertex vi =(⋃

v∈Viv)\←→EC for

each subset in the partition, which is exactly the vertex that arised from merging all the

vertices in Vi.

Definition 2.18 (Tree). A tree is a connected graph that contains no cycle. If a subgraph

of G is a tree and contains all vertices of G, it is called spanning tree. A set of darts D ⊆←→E

is an arborescence or out-tree if E(D) is the edge set of a tree and for every vertex v in

that tree there is at most one dart d ∈ D with head(d) = v. If rev(D) = {rev(d) : d ∈ D} is an arborescence, D is a root-directed tree or in-tree.

Usually we will simply identify a tree with the set of its edges.

Theorem 2.19 (Characterization of spanning trees). Let G = (V,E) be a graph and

T ⊆ E. The following statements are equivalent.

(1) T is a spanning tree.

(2) |T | = |V | − 1 and T connects all vertices.

(3) |T | = |V | − 1 and T contains no cycle.

(4) For all v, w ∈ V there is a unique simple v-w-path in T .

(5) For all e ∈ E \T there is a unique simple cycle in T ∪{e} containing −→e and a unique

simple cycle containing ←−e .

Proof. See Theorem 2.4 of [25].


Remark 2.20. The fact that spanning trees contain |V | − 1 edges implies that in an ar-

borescence every vertex except one designated root vertex has an incoming dart. Property

(4) then yields that there is a unique path from the root to every vertex in the arbores-

cence. Likewise, in a root-directed tree, there is a unique path from every vertex to the

root. We denote this unique path with T [v], respectively. If w is a vertex on T [v], we say

that v is a descendant of w and w is an ancestor of v.

Theorem 2.21 (Matroid property of spanning trees). Let G = (V,E) be a connected

graph and S ⊆ E be an edge set containing no simple cycles. Then there is a spanning

tree T with S ⊆ T .

Proof. Let κ be the number of S-connected components, i.e., the number of equivalence classes with respect to ↔_{←→S}. We use induction on κ. If κ = 1, then S is a spanning tree. If κ > 1, by the connectedness of G there must be an edge e ∈ E between two S-connected components. Adding e to S does not create a cycle, as such a cycle would have to contain e, but there is no other connection between the two connected components. However, it decreases the number of S-connected components by 1. So the induction hypothesis applied to S ∪ {e} yields the desired result.
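The induction translates into a simple algorithm. The following Python sketch (using a union-find forest, our own choice of data structure; edges encoded as unordered vertex pairs, which suffices for connectivity) extends a cycle-free edge set S to a spanning tree:

```python
# Sketch of Theorem 2.21 as an algorithm: grow S by any edge joining two
# S-connected components until only one component remains.

def extend_to_spanning_tree(vertices, edges, S):
    parent = {v: v for v in vertices}      # union-find forest

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # u and v already connected
        parent[ru] = rv
        return True

    T = set()
    for u, v in S:                         # S contains no simple cycle,
        union(u, v)                        # so all these unions succeed
        T.add((u, v))
    for u, v in edges:                     # add edges joining components
        if union(u, v):
            T.add((u, v))
    return T

verts = {1, 2, 3, 4, 5}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (1, 5), (2, 5)]
S = {(2, 3), (4, 5)}
T = extend_to_spanning_tree(verts, edges, S)
assert S <= T and len(T) == len(verts) - 1   # spanning tree containing S
```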

Definition 2.22 (Cut). A cut is a non-empty set of darts C ⊆←→E such that there is a

set of vertices S ⊂ V with C = Γ+(S) := {d ∈←→E : tail(d) ∈ S, head(d) /∈ S}. A cut is

simple, if it contains no proper subset that is a cut.

The following lemma shows that cuts do as their name promises – they separate vertices

from each other.

Lemma 2.23. Let G = (V,E) be a connected graph, D ⊆ ←→E and S ⊆ V . D contains the cut Γ+(S) if and only if for all x ∈ S, y ∈ V \ S and all x-y-paths P we have P ∩ D ≠ ∅. In particular, the subgraph G[F ] induced by some F ⊆ E is connected and contains all vertices of G if and only if E(C) ∩ F ≠ ∅ for all cuts C.

Proof.

“⇒”: Let D ⊆ ←→E with Γ+(S) ⊆ D. Let x ∈ S and y ∈ V \ S and P be an x-y-path consisting of darts d1, . . . , dk. As tail(d1) ∈ S and head(dk) /∈ S but tail(di+1) = head(di) for i ∈ {1, . . . , k − 1}, there must be at least one dart di with tail(di) ∈ S and head(di) /∈ S, i.e., di ∈ Γ+(S).

“⇐”: Let S ⊂ V such that P ∩ D ≠ ∅ for all x ∈ S, y ∈ V \ S and all x-y-paths P . In particular, D ∩ {d} ≠ ∅ for all d ∈ ←→E with tail(d) ∈ S and head(d) /∈ S. Thus Γ+(S) ⊆ D.

In connected graphs, simple cuts can be characterized by the connectedness of the

inducing vertex set and its complement.


Lemma 2.24. Let G = (V,E) be a connected graph and C ⊆←→E be a cut. C is simple if

and only if there is a set S ⊆ V with C = Γ+(S) such that S and V \ S are the connected

components of G[E \ E(C)].

Proof.

“⇒”: Let C be a simple cut and let S ⊂ V with C = Γ+(S). Assume by contradiction

S is not connected in G[E \ E(C)]. Then let S′ ⊂ S be a connected component of

G[E \ E(C)]. As there is no dart d ∈←→E with tail(d) ∈ S′ and head(d) ∈ S \ S′ the

cut Γ+(S′) is a subset of C. As G is connected and a path from S \ S′ to S′ must

enter V \ S first, there must be a dart d with tail(d) ∈ S \ S′ and head(d) ∈ V \ S.

Hence the subset is proper, a contradiction.

“⇐”: Let C = Γ+(S) for some vertex set S ⊂ V such that S and V \ S are connected

components in G[E\E(C)]. Let C ′ ⊂ C be any proper subset and let d ∈ C \C ′. Let

x, y ∈ V . By the connectedness of S and V \S there is an x-y-path in G[E \E(C ′)],

possibly using d or rev(d). Thus, C ′ is not a cut by Lemma 2.23.

2.2.3 The arc space and its subspaces

Definition 2.25. We call RE the arc space. For d ∈ ←→E we define δd ∈ RE by

δd(e) := 1 if d = (e, 1), δd(e) := −1 if d = (e, −1), and δd(e) := 0 otherwise,

and for any set D ⊆ ←→E we define δD := ∑_{d∈D} δd. Moreover, for a set of vertices S ⊆ V we define δS := ∑_{v∈S} δv. For any vector x ∈ RE and any dart d ∈ ←→E we define x(d) := δdTx, i.e., x(−→e ) = x(e) and x(←−e ) = −x(e) for all e ∈ E. For δ ∈ {−1, 0, 1}E , the set of darts induced by δ is D(δ) := {d ∈ ←→E : δ(d) = 1}.²

Note that in the above definition of δD, anti-parallel darts of the same edge cancel out.

In this case, δD does not induce D. In fact, it is easy to check that D = D(δD) if and only

if D does not contain both darts of any edge, and that furthermore the vector inducing a

set is unique, i.e., if D = D(δ) then δ = δD. Another important observation, concerning

cuts and their inducing vertex sets, is stated in the following lemma.

Lemma 2.26. Let S ⊆ V and C be the cut induced by S. Then C = D(δS).

Proof. Let d ∈←→E with tail(d) 6= head(d). Then δtail(d)(d) = 1, δhead(d)(d) = −1, and

δv(d) = 0 for all v ∈ V \ {tail(d),head(d)}. So δS(d) =∑

v∈S δv(d) = 1 if and only if

tail(d) ∈ S and head(d) /∈ S, i.e., d ∈ C.

²Note the difference between D(δ) and support(δ). The first is a subset of ←→E , while the latter is a subset of E.


Definition 2.27 (Cut space, cycle space). The cut space of G is

Scut(G) := span{δC : C is a cut of G}.

The cycle space of G is

Scycle(G) := span{δC : C is a cycle of G}.

The elements of the cycle space are called circulations.

The following theorem establishes two bases for the cycle and the cut space. It also shows

that the cycle and the cut space are orthogonal complements. The proof is a streamlined

version of the proof given in [24].

Theorem 2.28 (Cycle/cut orthogonality). Let G = (V,E) be a graph with k connected

components V1, . . . , Vk ⊆ V . For i ∈ {1, . . . , k} choose one vertex vi ∈ Vi from every con-

nected component and an edge set Ti inducing a spanning tree in the connected component

Vi. For e ∈ E \ ⋃_{i=1}^k Ti let Ce be the unique cycle consisting of −→e and some darts of the

spanning tree Ti in the connected component of e. Then

Bcut = {δv : v ∈ V \ {v1, . . . , vk}}

is a basis of Scut(G) and

Bcycle = {δCe : e ∈ E \ ⋃_{i=1}^k Ti}

is a basis of Scycle(G).

Furthermore, cut space and cycle space are orthogonal complements, i.e., for all δcut ∈ Scut(G) and all δcycle ∈ Scycle(G) we have δcutT δcycle = 0 and span(Scut(G) ∪ Scycle(G)) = RE.

Proof. It is clear that Bcut is contained in the cut space and so is its span. Furthermore, the vectors in Bcut are linearly independent: Suppose ∑_{v∈V} λv δv = 0 for a vector of coordinates λ ∈ RV with λvi = 0 for all i ∈ {1, . . . , k}. Then for every dart d ∈ ←→E we have 0 = ∑_{v∈V} λv δv(d) = λtail(d) − λhead(d), implying λhead(d) = λtail(d). So the coefficients are constant within every connected component, and, as there is a vi in every connected component with λvi = 0, they are all zero. Hence, dim(Scut(G)) ≥ |Bcut| = |V | − k.

It is also clear that Bcycle is contained in the cycle space and so is its span. Again, the vectors in Bcycle are linearly independent: Suppose ∑_{e∈E\⋃Ti} λe δCe = 0. Then for all e′ ∈ E \ ⋃_{i=1}^k Ti we have 0 = ∑_{e∈E\⋃Ti} λe δCe(−→e′ ) = λe′ , as δCe′ is the only vector in Bcycle with a nonzero entry for −→e′ . So dim(Scycle(G)) ≥ |Bcycle| = |E| − (|V | − k).

Note that dim(Scut(G)) + dim(Scycle(G)) ≥ |V | − k + |E| − (|V | − k) = |E|. We now

show orthogonality of the two spaces, which implies that in fact equality holds for the

dimensions and hence Bcut and Bcycle indeed generate the respective subspaces.


To show orthogonality, it suffices to show that δcutT δcycle = 0 for every δcut that is induced by a cut and every δcycle that is induced by a cycle, as the set of all cuts generates the cut space and the set of all cycles generates the cycle space. So let S ⊆ V and C be a cycle consisting of the darts {d1, . . . , dl} ⊆ ←→E . We have δCT δS = ∑_{i=1}^l δS(di). Note that δS(di) = 1 if tail(di) ∈ S and head(di) /∈ S, that δS(di) = −1 if tail(di) /∈ S and head(di) ∈ S, and that δS(di) = 0 otherwise. As head(di) = tail(di+1) for i ∈ {1, . . . , l − 1} and head(dl) = tail(d1), C must cross the cut from S to V \ S as often as it crosses it from V \ S to S, and hence ∑_{i=1}^l δS(di) = 0.

Corollary 2.29. δ ∈ RE is a circulation in G if and only if δvT δ = 0 for all v ∈ V .

The following corollary gives some important insights on the coefficients representing

any vectors in the cut or cycle space, which were already used in the proof of Theorem

2.28. It also states that every vector of the cycle or cut space in fact contains a cycle or

cut, respectively.

Corollary 2.30. If δ ∈ Scut(G) with δ = ∑_{v∈V} λ(v)δv for some λ ∈ RV , then

δ(d) = λ(tail(d)) − λ(head(d))

for all d ∈ ←→E . In particular, the values λ(v) are constant within every connected component of G[E \ support(δ)] and {d ∈ ←→E : δ(d) > 0} contains a cut if δ ≠ 0.

If δ ∈ Scycle(G) with δ = ∑_{e∈E\⋃Ti} µ(e)δCe , then µ(e) = δ(e) for all e ∈ E \ ⋃_{i=1}^k Ti. Moreover, {d ∈ ←→E : δ(d) > 0} contains a cycle if δ ≠ 0.

Proof. Let δ ∈ Scut(G) and λ ∈ R^V with δ = ∑_{v∈V} λ(v)δv. As δ_{tail(d)}(d) = 1, δ_{head(d)}(d) = −1 and δv(d) = 0 for all v ∈ V \ {tail(d), head(d)}, we have δ(d) = ∑_{v∈V} λ(v)δv(d) = λ(tail(d)) − λ(head(d)) as claimed. If v, w ∈ V belong to the same connected component Vi of G[E \ support(δ)], then there is a simple v-w-path consisting of darts with δ(d) = 0. As δ(d) = 0 implies λ(tail(d)) = λ(head(d)), λ has the same value for all vertices on the path, in particular λ(v) = λ(w). So λ is constant on connected components. If δ ≠ 0, at least one of the vertex sets Si := {v ∈ Vi : λ(v) = max_{v′∈Vi} λ(v′)} is strictly contained in the connected component Vi. Then Γ+(Si) is a cut that is contained in {d ∈ ←→E : δ(d) > 0}.

Let δ ∈ Scycle(G) with δ = ∑_{e ∈ E \ ⋃_{i=1}^k T_i} µ(e)δCe. For any e′ ∈ E \ ⋃_{i=1}^k T_i, we have δCe′(e′) = 1 and δCe(e′) = 0 for all e ≠ e′, as the only non-tree dart contained in Ce is −→e. So δ(e′) = ∑_{e ∈ E \ ⋃_{i=1}^k T_i} µ(e)δCe(e′) = µ(e′). If δ ≠ 0, the set {d ∈ ←→E : δ(d) > 0} contains a cycle by the flow decomposition theorem we shall present in Section 2.4 (Theorem 2.55), as δ can be interpreted as a flow of value 0.


2.3 Planar graphs

An important class of graphs is comprised by so-called planar graphs, i.e., graphs that

can be drawn on the plane without two edges intersecting each other. An alternative

characterization of planar graphs states that they have a dual graph whose simple cycles

are exactly the simple cuts of the primal graph and vice versa. This property gives rise

to many planarity exploiting algorithms, including the uppermost path algorithm for the

maximum flow problem. We will present a combinatorial definition of planar graphs,

introduce cycle/cut duality and several other important properties of planar graphs and

discuss some algorithmic aspects.

2.3.1 Embedded graphs

From a geometric point of view, a planar graph is a graph that can be drawn on the plane

(or, equivalently, on the sphere) without any two edges intersecting each other. This

drawing is called planar embedding of the graph, or plane graph, and can mathematically

be formalized by mapping the vertices to points on the plane and mapping the edges to

non-intersecting curves. However, this representation is not very suitable for being processed on a computer, as it contains much more information than needed by planarity exploiting algorithms, and makes proofs of combinatorial properties of planar graphs unnecessarily complicated. Thus, instead of defining planar graphs by geometric means, we

shall introduce the more abstract notion of combinatorial embeddings.

Definition 2.31 (Combinatorial embedding). Let E be an edge set and π be a permuta-

tion on←→E . We define V (π) to be the set of orbits of π and call π combinatorial embedding

or rotation system of the graph G = (V (π), E). Moreover, (π,E) is called embedded graph.

Define π∗ := π ◦ rev and V ∗ := V (π∗). The dual of an embedded graph is the graph

G∗ = (V ∗, E) and the elements of V ∗ are called faces.
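To make Definition 2.31 concrete, here is a small Python sketch (an illustration of my own; it uses the dart-as-integer convention rev(d) = −d that Section 2.3.4 attributes to [24]): a rotation system is a permutation stored as a dict, its orbits are the vertices, and the orbits of π∗ = π ◦ rev are the faces.

```python
def orbits(perm):
    """Decompose a permutation {dart: dart} into its orbits."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        orbit, d = [], start
        while d not in seen:
            seen.add(d)
            orbit.append(d)
            d = perm[d]
        result.append(tuple(orbit))
    return result

# A triangle a-b-c drawn in the plane: edge 1 = ab, 2 = bc, 3 = ca.
pi = {1: -3, -3: 1,   # orbit of vertex a: outgoing darts 1 and -3
      2: -1, -1: 2,   # vertex b
      3: -2, -2: 3}   # vertex c
pi_star = {d: pi[-d] for d in pi}   # pi* = pi o rev

vertices = orbits(pi)        # 3 orbits, one per vertex
faces = orbits(pi_star)      # 2 orbits: inner face and outer face
print(len(vertices), len(faces))   # 3 2
```

For this triangle, |V(π)| = 3 and |V(π∗)| = 2, matching the drawing of the triangle on the sphere with its two faces.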

It is easy to see that the name dual is justified, as (π ◦ rev)◦ rev = π, i.e., the dual of the

dual graph is the original graph again. There is an intuitive relation between combinatorial

embeddings and drawings of a graph on some orientable surface. For simplicity, we will

from now on restrict ourselves to connected graphs.

• Given a drawing, we can obtain the rotation system π in the following way: For every

vertex drawn on the surface, imagine an observer standing on this vertex rotating

his sight counterclockwise. For every outgoing dart d of the vertex we define π(d) to

be the next outgoing dart that is seen by the observer in the course of his rotation.

In fact, every vertex then defines an orbit of π (and vice versa), and π∗ gives us the

borders of the regions into which the surface is partitioned by the drawing.


Figure 2.2: The drawing of this graph and its dual (dashed edges) on the plane corresponds to the combinatorial embedding π = (−→e1, −→e4, ←−e3)(−→e2, −→e5, ←−e1)(−→e3, −→e6, ←−e2)(←−e4, ←−e5, ←−e6) with π ◦ rev = (−→e1, −→e2, −→e3)(←−e1, −→e4, ←−e5)(←−e2, −→e5, ←−e6)(←−e3, −→e6, ←−e4). Observe how the orbits of π comprise the vertices of the graph and those of π ◦ rev the faces.

• Given a combinatorial embedding, we can use π∗ to construct 2-dimensional polyhedra whose edges consist of the darts of the graph and whose vertices are the vertices

of the graph. Then, by glueing darts belonging to the same edge together, we obtain

an orientable and closed surface, on which G is drawn by the seams of our glueing

operation.

These descriptions already give us an understanding of the dual of a graph. The vertices

of the dual, the faces, are the connected components the surface is partitioned into when

we dissect it along the edges of the drawing. In the dual, the edges connect the same faces

they separate in the drawing of the primal graph, and π∗ traverses the boundary of each face in clockwise order. Figure 2.2 gives an example of an embedded graph and its dual.

When drawing an embedded graph and its dual in one figure, as in the example, an

edge in the dual graph crosses its “alter ego” in the primal graph from right to left.³ This

motivates the following notation.

Notation. For an embedded graph G, we define leftG(d) := headG∗(d) to be the face left

of d and rightG(d) := tailG∗(d) to be the face right of d. If there is no ambiguity, i.e., there

is a clearly identified primal embedded graph G, we ommit the subscript G for head, tail,

left and right.

Note that the clockwise boundary of every face induces a circulation in Scycle(G), as

stated in the following lemma.

Lemma 2.32. Let G = (π,E) be an embedded graph and f ∈ V ∗. Then δf ∈ Scycle(G).

³ Note that [24] and [3] use the opposite orientation for dual darts. However, the orientation used in this thesis seems to be more consistent with the definition of π∗.


Proof. Let v ∈ V. Then δf^T δv = |{d ∈ f : d ∈ v}| − |{d ∈ f : rev(d) ∈ v}|. But as

tail(π∗(d)) = head(d) and hence d ∈ v if and only if rev(π∗(d)) ∈ v, the two sets have

equal cardinality.

Furthermore, we can introduce a notion of adjacency between vertices and faces.

Definition 2.33. Let G = (π,E) be an embedded graph and let v ∈ V (π) and f ∈ V (π∗).

v and f are adjacent if v ∩ f 6= ∅. In this case, v is also said to be on the boundary of f .

2.3.2 Planar graphs, cycle/cut duality and face potentials

As we are particularly interested in planar graphs, we would like to check whether it

is possible to draw a graph on a surface that is homeomorphic to a plane (or sphere,

equivalently). In fact, we can classify combinatorial embeddings by means of the Euler

characteristic χ of the corresponding surfaces. The Euler characteristic can be defined

by χ := |V | + |V ∗| − |E|, where |V | is the number of vertices, |V ∗| the number of faces,

and |E| the number of edges of a partition of the surface into polygonal faces. The Euler characteristic of the sphere is 2, motivating the following definition (cf. [37] for more details on the Euler characteristic).

Definition 2.34 (Planar graph). A connected graph G is called planar graph if there exists an embedding π such that |V| + |V∗| − |E| = 2. In this case π is called planar embedding of G and the embedded graph (π,E) is called plane graph.
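Under the dart-as-integer convention (rev(d) = −d, cf. Section 2.3.4), the planarity condition of Definition 2.34 is a direct count of orbits. The helpers below are a hedged sketch of my own, not code from the thesis.

```python
def count_orbits(perm):
    """Count the orbits of a permutation given as a dict {dart: dart}."""
    seen, n = set(), 0
    for start in perm:
        if start not in seen:
            n += 1
            d = start
            while d not in seen:
                seen.add(d)
                d = perm[d]
    return n

def is_planar_embedding(pi):
    """Test Euler's condition |V| + |V*| - |E| = 2 for a rotation system pi."""
    n_edges = len(pi) // 2               # two darts per edge
    pi_star = {d: pi[-d] for d in pi}    # pi* = pi o rev
    return count_orbits(pi) + count_orbits(pi_star) - n_edges == 2

# The triangle a-b-c drawn in the plane: 3 vertices, 3 edges, 2 faces.
pi = {1: -3, -3: 1, 2: -1, -1: 2, 3: -2, -2: 3}
print(is_planar_embedding(pi))   # True
```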

The following theorem gives very useful characterizations of planar graphs, the most

important of which probably is the duality of simple cycles and simple cuts. It builds

the foundation for many planarity exploiting algorithms and was already established by

Whitney in 1931 [36].

Theorem 2.35 (Characterization of planar graphs). Let G be a connected graph and π

be a combinatorial embedding of G. Then the following statements are equivalent.

(1) π is a planar embedding of G.

(2) Cycle/cut duality (vector space version):

Scycle(G) = Scut(G∗) and Scut(G) = Scycle(G∗).

(3) Cycle/cut duality:
∀ C ⊆ ←→E : C is a simple cycle in G iff C is a simple cut in G∗, and
C is a simple cut in G iff C is a simple cycle in G∗.

(4) Interdigitating spanning trees:

∀ T ⊆ E : T is a spanning tree in G iff E \ T is a spanning tree in G∗.


Proof.⁴

(1)⇒ (2): By Theorem 2.28 we know that Bcut = {δf : f ∈ V ∗\{f∞}} for some arbitrary

f∞ ∈ V ∗ is a basis of Scut(G∗). But as the clockwise boundary of every face induces a

circulation (Lemma 2.32), Bcut ⊆ Scycle(G), and so Scut(G∗) ⊆ Scycle(G). Theorem 2.28 also gives

us a basis Bcycle of Scycle(G), consisting of the cycles induced by the arcs not in some

fixed spanning tree. So dim(Scycle(G)) = |Bcycle| = |E| − (|V | − 1), which by (1) is

equal to |V∗| − 1 = |Bcut| = dim(Scut(G∗)). Hence the two subspaces have the same dimension and must be equal. Scut(G) = Scycle(G∗) follows with the same argument

by using duality.

(2)⇒ (3): Let C be a simple cycle in G. Then δC ∈ Scut(G∗) by (2). Thus, D(δC) = C

contains a simple cut C ′ in G∗ by Corollary 2.30. Again by (2), δC′ ∈ Scycle(G) and so

D(δC′) = C ′ ⊆ C contains a cycle in G. As simple cycles are inclusionwise minimal,

C′ = C. By the same line of arguments, every simple cut in G is a simple cycle in G∗.

The two converse directions follow by duality.

(3)⇒ (4): We only show sufficiency, necessity follows by duality. Let T ⊆ E be an edge

set that induces a spanning tree in G. By contradiction assume that E \ T contains

a cycle C in G∗, which w.l.o.g. is simple. Then, by (3), C is a simple cut in G. So

T cannot connect all vertices of G, a contradiction. Again by contradiction assume

E \ T does not connect all vertices of G∗. Then its complement T contains a simple

cut C in G∗. By (3), C ⊆ T is a simple cycle, a contradiction. So E \T is acyclic and

connects all vertices in G∗, i.e., it is a spanning tree in G∗.

(4)⇒ (1): For any pair of interdigitating spanning trees, we have |T| = |V| − 1 and |E \ T| = |V∗| − 1, implying |E| = |V| + |V∗| − 2.

An example of a pair of interdigitating spanning trees and an example of cycle/cut

duality can be seen in Figure 2.3. The vector space version of cycle/cut duality enables us

to represent any circulation in G by a face potential. The idea of face potentials was first

introduced by Hassin [17] and will play a central role in the definition of a partial order

on the set of paths in a planar graph later on.

Corollary and Definition 2.36. Let G = (π,E) be a connected plane graph. Let f∞ ∈ V∗ be a face of G, called the infinite face. Then {δf : f ∈ V∗ \ {f∞}} is a basis of Scycle(G). In particular, there is a unique linear mapping Φ : Scycle(G) → R^{V∗} such that Φ(δ)(f∞) = 0 and δ = ∑_{f∈V∗} Φ(δ)(f)δf for all δ ∈ Scycle(G). The vector Φ(δ) is called the face potential of δ.

⁴ The proof of the first implication completes the line of argumentation given in Lecture 4 of [24], which was already elaborated in Theorem 2.28. However, the proofs of the other three implications are greatly simplified compared to [24], making use of Corollary 2.30 and minimality of simple cycles and cuts, both of which have not been established there.


Figure 2.3: (a) An example of interdigitating spanning trees in a planar graph. T = {e1, e3, e4} forms a spanning tree in the primal, while its complement E \ T = {e2, e5, e6} forms a spanning tree in the dual graph. (b) An example for cycle/cut duality in planar graphs. {−→e1, −→e5, −→e7, −→e8, −→e2} comprises a simple cut in the primal and a simple cycle in the dual graph.

Notation. From now on, whenever we encounter a plane graph G, we will implicitly assume that some face f∞ ∈ V∗ has been chosen as the infinite face.

Clockwise and counterclockwise, interior and exterior

Cycle/cut duality implies that every simple cycle partitions the set of faces into two subsets, and the set of edges and the set of vertices into three subsets each.

Definition 2.37. Let C be a simple cycle. By cycle/cut duality, G∗[E \ E(C)] consists

of two connected components. We call the component containing f∞ the exterior of C

and the other component the interior of C. Furthermore, we say an edge e ∈ E \ E(C)

is an interior or exterior edge of C if it is incident to a face of the interior or exterior,

respectively. A vertex is an interior or exterior vertex of C if it is incident to an interior

or exterior edge but not incident to any cycle edge e ∈ E(C).

The following lemma shows that this yields a partition of the edges and the vertices, respectively.

Lemma 2.38. Let C be a simple cycle. An edge is either an interior edge of C, an exterior

edge of C, or on the cycle. A vertex is either an interior vertex of C, an exterior vertex

of C, or on the cycle.


Proof. As every face is either interior or exterior, every edge and vertex is incident or adjacent to an interior or exterior face. If an edge is incident to both an interior and to an exterior face, one of its two darts must belong to the dual cut between interior and exterior, i.e., to C. Thus it is on the cycle in that case. If a vertex v is the tail of a dart d1 of an interior edge and of a dart d2 of an exterior edge, the sequence of darts d1, π(d1), . . . , π^k(d1) = d2 must contain a dart whose right face is internal and whose left face is external, i.e., either that dart or its reverse belongs to the cycle and so does v.

This helps to show that a path from the interior to the exterior must cross the cycle –

a fact demanded by any solid intuition.

Lemma 2.39. Let C be a simple cycle, let u be an interior vertex of C and v an exterior vertex of C, and let P be a u-v-path. Then there is a vertex w that is on C and on P.

Proof. As P starts in the interior and ends in the exterior, there must be a dart d ∈ P with tail(d) in the interior, but head(d) not in the interior. Since tail(d) is interior, d can only be a dart of an interior edge. Thus, head(d) cannot be exterior and must be on the cycle.

We can now extend our notion of the boundary of a face being “clockwise” to general

circulations.

Definition 2.40 (Clockwise, counterclockwise). A circulation δ ∈ Scycle(G) is clockwise

if Φ(δ) ≥ 0, and counterclockwise if Φ(δ) ≤ 0.

This is particularly intuitive for simple cycles. The potential of such a cycle is 0 on

the exterior and −1 or 1 on the interior, determining the orientation of the cycle, i.e., on

which side of the cycle the infinite face is situated.

Lemma 2.41. Let C be a simple cycle and φ := Φ(δC). Then φ(f) = 0 for all faces

f of the exterior. If δC is clockwise, then φ(f) = 1 for all faces f of the interior and

right(d) is an interior face and left(d) an exterior face for every dart d ∈ C. If δC is

counterclockwise, then φ(f) = −1 for all faces f of the interior and right(d) is an exterior

face and left(d) an interior face for every dart d ∈ C.

Proof. The potential φ is constant on connected components of G[E \E(C)] by Corollary

2.30. As φ(f∞) = 0 by definition, φ(f) = 0 for all external faces f . The cut is either

directed from the interior to the exterior or from the exterior to the interior. In the first

case, tailG∗(d) = right(d) is internal, headG∗(d) = left(d) is external and, as φ(tailG∗(d)) =

φ(headG∗(d)) + 1, the potential of all internal faces is 1. In the latter case, tailG∗(d) =

right(d) is external, headG∗(d) = left(d) is internal and, as φ(headG∗(d)) = φ(tailG∗(d))−1,

the potential of all internal faces is −1.


Darts entering and leaving paths

The embedding π also induces an intuitive notion of a dart leaving a path to the left at a

certain vertex, i.e., the dart occurs in the counterclockwise sequence of outgoing darts at

that vertex strictly between the two darts belonging to the path.

Definition 2.42. Let d ∈ ←→E be a dart, P be a simple path or cycle, and v ∈ V be a vertex on P such that head(d1) = v = tail(d2) for d1, d2 ∈ P. We say that d leaves P to the left at v if

min{k ∈ N : π^k(d2) = d} < min{k ∈ N : π^k(d2) = rev(d1)},

and d enters P from the left at v if rev(d) leaves P to the left at v. Likewise, d leaves P to the right at v if

min{k ∈ N : π^k(d2) = d} > min{k ∈ N : π^k(d2) = rev(d1)},

and d enters P from the right at v if rev(d) leaves P to the right at v.

2.3.3 Subgraphs of planar graphs, deletion and contraction

Very often, we want to consider subgraphs of planar graphs. As already mentioned,

a subgraph of a graph can be obtained by a sequence of edge deletions. The following

theorem states that the deletion of an edge in an embedded graph is a planarity preserving

operation and corresponds to contraction of the same edge in the dual graph. It also

describes how to obtain an embedding of the subgraph and the contracted dual graph.

Again, we restrict ourselves to connected graphs. For more details see Lecture 3 of [24].

Theorem 2.43 (Duality of deletion and contraction). Let G = (V,E) and π be an embedding of G. Let e ∈ E be an edge that is neither a loop in G nor in G∗. Let GS = (VS, ES) be the subgraph of G obtained by deleting e. Define πS : ←→ES → ←→ES by

πS(d) = π(π(d)) if π(d) ∈ {−→e, ←−e}, and πS(d) = π(d) otherwise.

Then πS is an embedding of GS, and the corresponding dual embedding

π∗S(d) = π∗(rev(π∗(d))) if π∗(d) ∈ {−→e, ←−e}, and π∗S(d) = π∗(d) otherwise,

is obtained by contracting e in G∗. Furthermore, πS is a planar embedding iff π is a planar embedding.

Proof. As e is not a loop, π(−→e) ≠ ←−e and π(←−e) ≠ −→e. Hence, πS indeed maps ←→ES to itself. It also is clear that πS is a permutation, as it can be represented as the concatenation of permutations (−→e, π(−→e)) ◦ (←−e, π(←−e)) ◦ π restricted to ←→ES. In fact, πS is obtained from π by removing −→e and ←−e from their corresponding orbits, leaving all other orbits unchanged. Hence πS is an embedding of GS.

Figure 2.4: This example depicts the duality of deletion and contraction. The edge e4 is deleted from the primal graph. This corresponds to a contraction of e4 in the dual graph, i.e., the faces left and right of e4 are merged.

If π∗(d) ∈ {−→e, ←−e}, we have π∗S(d) = π(π(rev(d))) = π(π∗(d)), and if π∗(d) ∉ {−→e, ←−e} we have π∗S(d) = π(rev(d)) = π∗(d), as claimed. Consider the orbit of π∗S containing a

dart d ∈ ←→E. If d is not in the same orbit of π∗ as −→e or ←−e, then this orbit remains unchanged as an orbit of π∗S. Now let d be in the same orbit of π∗ as −→e, and let d′ be in the same orbit of π∗ as ←−e, i.e., (π∗)^k(d) = −→e and (π∗)^{k′}(←−e) = d′ for some k, k′ ∈ N. Then (π∗S)^k(d) = π∗(rev(−→e)) = π∗(←−e) and hence (π∗S)^{k+k′−1}(d) = d′. This means the orbits containing −→e and ←−e are merged in π∗S, i.e., e is contracted in G∗S.

Note that GS is connected, as {−→e} cannot be a cut in G since e is not a loop in G∗.

Finally, the number of faces decreases by one when contracting e in G∗ to obtain G∗S as e

is no loop in G∗, but the number of vertices does not change – both tailG(−→e ) and tailG(←−e )

must contain other darts or e would be a loop in G∗. Hence, Euler’s formula holds for

(πS , ES) if and only if it holds for (π,E).
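The deletion operation of Theorem 2.43 translates directly into code. The following Python sketch (an illustration of my own under the dart-as-integer convention rev(d) = −d, not code from the thesis) assumes, as the theorem does, that e is not a loop.

```python
def delete_edge(pi, e):
    """Rotation system of G with the (non-loop) edge e deleted.

    Implements pi_S(d) = pi(pi(d)) if pi(d) in {e, -e}, else pi(d),
    restricted to the darts of the remaining edges."""
    pi_s = {}
    for d in pi:
        if d in (e, -e):
            continue                 # the darts of e disappear
        nxt = pi[d]
        if nxt in (e, -e):
            nxt = pi[nxt]            # skip over a removed dart in the rotation
        pi_s[d] = nxt
    return pi_s

# Deleting edge 3 from the plane triangle a-b-c leaves the path a-b-c.
pi = {1: -3, -3: 1, 2: -1, -1: 2, 3: -2, -2: 3}
print(delete_edge(pi, 3))   # {1: 1, 2: -1, -1: 2, -2: -2}
```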

Remark 2.44. The theorem essentially holds for loops in G as well. The only problem

that can occur in the proof is that π(−→e ) = ←−e or π(←−e ) = −→e . But these cases can easily

be fixed by modifying the definition of πS slightly. However, as we most of the time will

be considering simple graphs, we will skip the details.

Remark 2.45. If e is a loop in G∗ and π is a planar embedding, then e is a cut in G by

cycle/cut duality. As we only want to consider connected graphs, we restrict ourselves to

the case where deleting e does not result in GS having more than one connected component.

But this is only the case if one side of the cut consists of one single vertex v, with e being

the only edge incident to v. Then deleting e causes v to vanish, not changing the number

of faces. So planarity is again maintained by Euler’s formula.


Corollary 2.46. Connected subgraphs of planar graphs are planar again.

2.3.4 Algorithmic aspects of planar graphs

We want to close this section by considering some algorithmic aspects of planar graphs.

First, we will show that the number of edges in simple planar graphs is bounded linearly

by the number of vertices. We then present a simple data structure to store planar graphs.

Finally, we discuss the problem of deciding planarity and finding a planar embedding of a

graph with the additional property that certain vertices lie on the boundary of the infinite

face.

A bound on the number of edges of simple planar graphs

When considering combinatorial optimization problems in planar graphs, we can most

of the time restrict ourselves to simple graphs, i.e., graphs that do not contain loops or

parallel edges. Here the number of edges can be bounded linearly in the number of vertices.

Theorem 2.47. Let G = (V,E) be a simple connected planar graph with |V | ≥ 3. Then

|E| ≤ 3|V | − 6.

Proof. Let π be a planar embedding of G. We first show that the clockwise boundary of

every face f ∈ V ∗ contains at least three darts, i.e., |f | ≥ 3.

Let f ∈ V ∗ and d ∈ f . Suppose π∗(d) = d, then d must be a loop, contradicting our

assumption. So let e := π∗(d) and suppose by contradiction π∗(e) = d. Then head(d) =

tail(e) and head(e) = tail(d). Hence, as there are no parallel or anti-parallel arcs, e and d

must belong to the same edge, i.e. e = rev(d). This means π(d) = π◦rev ◦ rev(d) = π∗(e) =

d as well as π(e) = e, implying that both head(d) and tail(d) are incident to one edge

only. As G is connected, these two vertices are the only two of the graph, contradicting

the requirement that there are at least three vertices. Hence d, π∗(d), π∗(π∗(d)) are three

pairwise distinct darts in the clockwise boundary of f .

Thus, as |f| ≥ 3 for all f ∈ V∗ and V∗ is a partition of ←→E, we obtain 3|V∗| ≤ |←→E| = 2|E|. This, together with Euler's formula, yields |E| ≤ 3|V| − 6.
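A standard consequence of this bound (an illustration of my own, not part of the thesis) is a non-planarity proof for the complete graph K5:

```python
# K5 has 5 vertices and 10 edges, but Theorem 2.47 allows at most
# 3*5 - 6 = 9 edges in a simple connected planar graph, so K5 is not planar.
n = 5
edges_K5 = n * (n - 1) // 2   # 10
bound = 3 * n - 6             # 9
print(edges_K5 > bound)       # True: the bound is violated
```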

The bound is tight, as can be seen by the graph in Figure 2.5. If we allow the existence

of anti-parallel arcs, the bound increases by a factor of two. However, the number of edges

becomes unbounded in the number of vertices as soon as we allow multiple parallel arcs

or loops.

A data structure for planar graphs

Whenever we want to perform computations in planar graphs, a data structure is needed

that provides access to an embedding of the graph. A very basic approach using lists


Figure 2.5: This example shows that the bound on the number of edges in a simple planar graph given in Theorem 2.47 is tight. Each of the |V| − 2 vertices in the center has three incoming arcs, thus |E| = 3|V| − 6.

suffices for all our algorithmic needs. We store every vertex and face just by a link to one of

the darts it contains. With every dart d ∈ ←→E we store π(d) and π⁻¹(d), thus obtaining a doubly linked list for every vertex, and, additionally, rev(d), tail(d) and right(d). Using this data structure, we can compute π, π⁻¹, rev, tail in constant time and hence π∗, (π∗)⁻¹, head and left as well. If we can evaluate π in constant time, the data structure

can be constructed in time O(|V |). It also allows for edge deletions and contractions as

well as adding new edges in constant time.
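A minimal Python sketch of this list-based structure might look as follows (the class layout and names are assumptions for illustration, not taken from the thesis); every derived query reduces to the stored fields in O(1):

```python
class Dart:
    """One dart of an embedded graph; pi/pi_inv link the rotation at its tail."""
    __slots__ = ("pi", "pi_inv", "rev", "tail", "right")

    def head(self):
        return self.rev.tail     # head(d) = tail(rev(d))

    def left(self):
        return self.rev.right    # left(d) = right(rev(d))

    def pi_star(self):
        return self.rev.pi       # pi*(d) = pi(rev(d))

# A single edge u-v drawn on the sphere: one face, one dart per vertex.
d1, d2 = Dart(), Dart()
d1.rev, d2.rev = d2, d1
d1.tail, d2.tail = "u", "v"
d1.right = d2.right = "outer"
d1.pi = d1.pi_inv = d1           # the rotation at u contains only d1
d2.pi = d2.pi_inv = d2
print(d1.head(), d1.left())      # v outer
```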

An alternative approach is proposed in [24], identifying the darts with integers fulfilling

rev(i) = −i and storing them in two arrays for π and π⁻¹. The use of an array allows for

instant access to an edge by its index. However, this is usually not needed, as we obtain

edges by traversal of paths or face boundaries in most cases. As a slight drawback, using

an array is also less flexible when it comes to edge additions.

s-t-planar graphs and embedding construction

Given a planar embedding of some graph, we can choose the infinite face f∞ among the

faces of the embedded graph arbitrarily. Therefore, given a vertex t, we can without loss

of generality assume that t is adjacent to the infinite face. However, this does not hold

for multiple vertices at the same time. In fact, there are planar graphs that contain pairs

of vertices which cannot be embedded in such a way that they are adjacent to a common

face (see Figure 2.6). This motivates the following definition.

Definition 2.48 (s-t-planar graph). Let G = (V,E) be a connected graph and let s, t ∈ V .

G is s-t-planar, if there is a planar embedding of G such that s and t are adjacent to f∞.

Such an embedding is called s-t-planar embedding and the corresponding embedded graph

is called s-t-plane graph.

We will encounter s-t-planar graphs in later sections and see that some combinatorial


Figure 2.6: (a) An s-t-planar embedding of a graph. (b) A planar embedding of a graph that is not s-t-planar. In fact, this graph cannot be embedded with s and t adjacent to the infinite face, as adding an edge from s to t yields the complete bipartite graph with three vertices on each side, which is infamously known to be non-planar [26].

problems become significantly easier on this class of graphs (even in comparison to planar

graphs in general). An example of a graph that is s-t-planar and a graph that is planar

but not s-t-planar for two particular vertices s, t is given in Figure 2.6.

A very natural task is to determine whether a graph is planar, and, if so, construct a

planar embedding. There is a wealth of algorithms that deal with this problem and both

tasks can be solved in time linear in the number of edges of the graph (note that for simple

planar graphs this is linear in the number of vertices by Theorem 2.47).

Theorem 2.49. There is an algorithm that, given a simple graph G = (V,E), determines

whether G is planar or not, and, if it is planar, returns a planar embedding of G in O(|V |).

We do not give a proof for this theorem here but refer the reader to [4] for a detailed

description of a linear time planarity checking algorithm by Boyer and Myrvold.

As already mentioned, s-t-planar graphs and s-t-planar embeddings are of particular

interest. In fact, the problem of finding an s-t-planar embedding can be reduced to the

problem of finding a general planar embedding in linear time. The reduction even works

if we require more than two vertices to be on the infinite face. The idea is simple: Just

add an artificial vertex to the graph and connect it with all vertices that are required to

be on the infinite face. The resulting graph is planar if and only if it is possible to embed

the original graph with all the desired vertices on the same face. The following theorem

gives the details.

Theorem 2.50. Let G = (V,E) be a connected graph and S = {v1, . . . , vk} ⊆ V. Let G0 = (V0, E0) with E0 = E ∪ {e1, . . . , ek} and V0 = (V \ S) ∪ {v0, v1 ∪ {−→e1}, . . . , vk ∪ {−→ek}}, where v0 = {←−e1, . . . , ←−ek}. Then G0 is planar if and only if there is a planar embedding of G with v1, . . . , vk adjacent to f∞. Furthermore, any planar embedding π0 of G0 yields an embedding of G with v1, . . . , vk adjacent to the same face by deleting all edges incident to v0.


Proof.

“⇒”: Let π0 be a planar embedding of G0. As G is connected, D(δv0) is a simple cut in

G0, so ←−e1 , . . . ,←−ek is a simple cycle in G∗0. In particular, v0 is adjacent to exactly k

distinct faces. By duality of deletion and contraction, deleting e1, . . . , ek−1 results

in merging all those k faces to a single face, which v1, . . . , vk are adjacent to, and

which we choose to be f∞. Finally deleting ek also deletes v0 but does not change

the number of faces, so the resulting rotation system is a planar embedding of G

with v1, . . . , vk adjacent to f∞.

“⇐”: Let π be a planar embedding of G with v1, . . . , vk adjacent to f∞. For i ∈ {1, . . . , k}, let di ∈ f∞ with rev(di) ∈ vi. Without loss of generality let d1, . . . , dk be the order

in which the darts occur in the orbit f∞ of π∗. Construct an embedding π0 of G0

from π in the following way: For every i ∈ {1, . . . , k}, insert −→ei into the permutation

π directly after rev(di). Then concatenate the resulting permutation with the cyclic

permutation (←−ek , . . . ,←−e1). π0 obviously is an embedding of G0. Moreover, π∗(d) =

π∗0(d) for all d ∈ ←→E \ {d1, . . . , dk}. Hence, all orbits of π∗ except for f∞ remain unchanged in π∗0. By construction, π∗0(di) = −→ei, π∗0(−→ei) = ←−e_{i−1} (with e0 := ek), and π∗0(←−e_{i−1}) = π∗(di−1), so the orbit containing di, −→ei, ←−e_{i−1} differs from the orbit

containing any other dj with j 6= i. Hence, f∞ is split up into k different faces in

the embedding of π0. As the number of faces has increased by k − 1, the number

of vertices has increased by 1 and the number of edges has increased by k, Euler’s

formula still holds for the embedding π0 and hence the embedding is planar.

The construction described above can obviously be executed in linear time. For k = 2,

v1 = s, v2 = t, using a linear time planarity checking algorithm on the modified graph

then instantly yields an s-t-planarity checking algorithm.
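The reduction can be sketched with the third-party networkx library standing in for the planarity checker of Theorem 2.49 (networkx and the helper name are my own choices for illustration; this does not reproduce the linear-time guarantee):

```python
import networkx as nx

def is_s_t_planar(G, s, t):
    """Theorem 2.50 with k = 2: add an auxiliary vertex joined to s and t
    and test planarity of the augmented graph."""
    H = G.copy()
    aux = object()               # a fresh vertex name not present in G
    H.add_edge(aux, s)
    H.add_edge(aux, t)
    planar, _ = nx.check_planarity(H)
    return planar

# The graph of Figure 2.6(b): K_{3,3} minus the edge between s = 0 and t = 3.
G = nx.complete_bipartite_graph(3, 3)
G.remove_edge(0, 3)
print(nx.check_planarity(G)[0])   # True: the graph itself is planar
print(is_s_t_planar(G, 0, 3))     # False: no embedding puts s and t on a common face
```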

Corollary 2.51. There is an algorithm that, given a simple graph G = (V,E) and two

vertices s, t ∈ V , determines whether G is s-t-planar or not, and, if it is s-t-planar, returns

an s-t-planar embedding of G in O(|V |).

2.4 Flows

This section provides a brief introduction to network flows and their theory. We will

give a formal definition of flows, formulate the maximum flow problem, and state several

fundamental results in network flow theory.

Definition 2.52 (Flow). Let G = (V,E) be a graph, s, t ∈ V . An s-t-flow (or just flow)

is a vector x ∈ RE with δvTx = 0 for all v ∈ V \ {s, t} and δsTx ≥ 0. A capacity function

is a non-negative vector c ∈ R←→E+. A flow respects the capacities c if x(d) ≤ c(d) for all


darts d ∈ ←→E . We say a dart d ∈ ←→E is residual with respect to x if x(d) < c(d). A path

or cycle is residual if it consists only of residual darts.

We can interpret a flow as a quantity of goods transported through the darts of the

network from the source to the sink (or just along cycles). Consider an edge e ∈ E with

v = tail(−→e ) and w = head(−→e ). If x(e) > 0 then x(e) units of flow are transported from

v through −→e to w. If x(e) < 0 then −x(e) units of flow are transported from w through

←−e to v. The flow conservation condition δvTx = 0 ensures that the amount of flow that

enters a vertex equals the amount that leaves it, i.e., no flow is generated or consumed on

its way from the source to the sink. A capacity function restricts the amount of flow that

can be transported through any dart. Note that a capacity function with c(←−e ) = 0 for all

e ∈ E forces the flow to be non-negative, i.e., it uses only arcs but no anti-arcs. Thus, the

model covers undirected (with c(−→e ) = c(←−e )) as well as directed (with c(←−e ) = 0) versions

of capacitated flow problems. Note that the set of flows x with δsTx = 0 is exactly the

cycle space of G; for such a flow, goods are not transported from the source to the sink but only

circulate in the network – hence such flows are called “circulations”.
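As an illustration – not part of the thesis itself – the dart-based flow model can be sketched in a few lines of Python; all names here are our own. Every edge {v, w} contributes the two darts (v, w) and (w, v), a flow stores antisymmetric values on darts, and flow conservation at a vertex is a plain sum over its incoming darts.

```python
# Minimal sketch of the dart model: a flow x assigns x[(w, v)] == -x[(v, w)],
# so flow conservation at v is "net inflow over all darts with head v is zero".

def net_inflow(x, v):
    """Sum of x over all darts with head v."""
    return sum(val for (tail, head), val in x.items() if head == v)

def is_flow(x, s, t):
    """Check flow conservation at every vertex except s and t."""
    vertices = {u for dart in x for u in dart}
    return all(net_inflow(x, v) == 0 for v in vertices - {s, t})

# Two units routed s -> a -> t:
x = {("s", "a"): 2, ("a", "s"): -2, ("a", "t"): 2, ("t", "a"): -2}
assert is_flow(x, "s", "t")
assert not is_flow({("s", "a"): 2, ("a", "s"): -2}, "s", "t")
```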

2.4.1 Flow decomposition and the maximum flow problem

An important insight of network flow theory is that flow cannot only be expressed by

values on the arcs but also by values on the set of source-sink-paths and cycles. We first

show that non-negative linear combinations of characteristic vectors of s-t-paths and cycles

are flows. This leads to two formulations of the maximum flow problem, which is one of

the most well-studied problems in combinatorial optimization and will also be a central

object of this thesis.

Lemma 2.53. Let G = (V,E) be a graph, s, t ∈ V and x, y be two s-t-flows with δsTx ≥ −δtTy, and let λ ∈ R+. Then λx, x + y and x − y are s-t-flows. If P is a simple s-t-path or

simple cycle in G, then δP is an s-t-flow.

Proof. As x, y ∈ {z ∈ RE : δvT z = 0 ∀v ∈ V \ {s, t}}, which is a subspace of the arc space,

the linear combinations λx, x+y and x−y are contained in the same subspace. Moreover,

δsT(λx) = λδsTx ≥ 0, δsT(x + y) = δsTx + δsTy ≥ 0 and δsT(x − y) = δsTx − δsTy ≥ 0,

proving the first statement of the lemma. If P is a simple cycle, we have δvT δP = 0 for all

v ∈ V by the orthogonality of cycle and cut space. If P = {d1, . . . , dk} is a simple s-t-path,

let v ∈ V \ {s, t}. If there is a dart di in P with head(di) = v, then tail(di+1) = v and no

other darts of P are incident to v. So δvT δP = δP (di+1)− δP (di) = 0. Furthermore, d1 is

the only dart in P with tail(d1) = s, and there is no dart in P that has s as its head. So

δsT δP = 1 ≥ 0.

The above lemma in particular implies that the sum of several flows on s-t-paths and

flows on cycles is again an s-t-flow. We will now show that the converse is also true.


Definition 2.54 (Flow decomposition). Let G be a graph and P denote the set of simple

s-t-paths and C denote the set of simple cycles in G. Let x be an s-t-flow. A generalized

decomposition of x is a vector y ∈ RP∪C+ with

x(e) = ∑_{P∈P∪C} y(P)δP(e) = ∑_{P∈P∪C: −→e∈P} y(P) − ∑_{P∈P∪C: ←−e∈P} y(P).

If furthermore P ⊆ {d ∈ ←→E : x(d) > 0} for all P ∈ support(y), then y is simply called a

decomposition of x. If y is a (generalized) decomposition of x and support(y) ⊆ P, then

y is called a (generalized) path decomposition of x.

Theorem 2.55. For every s-t-flow x there is a decomposition y ∈ RP∪C+ of x such that

| support(y)| ≤ |E|.

Proof. Let x be an s-t-flow. We state a procedure that constructs a flow decomposition

with all desired properties. If x = 0 we are done. If x ≠ 0, there is a dart d0 ∈ ←→E

with x(d0) > 0. If head(d0) ≠ t, there must be a dart d1 with tail(d1) = head(d0) and

x(d1) > 0 (otherwise δhead(d0)Tx < 0 violates flow conservation). We continue with the

same argument and iteratively extend the sequence of darts d0, . . . , dk with x(di) > 0

and tail(di+1) = head(di) until we encounter the sink t or a vertex that is the tail of

a dart we already considered. In the latter case we have found a simple cycle, in the

first case we use the flow conservation argument to construct a second sequence of darts

d−j , . . . , d−1, d0 with x(di) > 0 and head(di−1) = tail(di), stopping when we encounter

the source s or a vertex that is the head of a dart we already considered. Again, in the

latter case, we have found a simple cycle, in the first case we now have a simple s-t-path

d−j , . . . , d−1, d0, d1, . . . , dk. In any case, we have an element P ∈ P ∪ C with x(d) > 0 for

all d ∈ P. Setting y(P) := min{x(d) : d ∈ P} and x′ := x − y(P)δP, we get by Lemma 2.53 an

s-t-flow x′ with {d ∈ ←→E : x′(d) > 0} ⊂ {d ∈ ←→E : x(d) > 0}. So iterating this procedure

at most |support(x)| times yields the desired flow decomposition.
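The procedure in the proof above translates directly into code. The following Python sketch is our own (not from the thesis) and covers the directed case where x is a nonnegative vector on arcs; it peels off one weighted simple s-t-path or cycle per iteration, exactly as in the proof.

```python
def decompose(x, s, t):
    """Peel a nonnegative arc flow x = {(u, v): value} into weighted
    simple s-t-paths and cycles, as in the proof of Theorem 2.55."""
    x = {d: v for d, v in x.items() if v > 0}
    paths, cycles = [], []
    while x:
        # start at s while flow still leaves s, afterwards inside a cycle
        start = s if any(u == s for (u, _) in x) else next(iter(x))[0]
        walk, pos, v = [], {}, start
        # follow positive darts until we reach t (from s) or revisit a vertex
        while not (v == t and start == s) and v not in pos:
            pos[v] = len(walk)
            d = next(e for e in x if e[0] == v)   # exists by flow conservation
            walk.append(d)
            v = d[1]
        if v == t and start == s:
            piece, store = walk, paths            # a simple s-t-path
        else:
            piece, store = walk[pos[v]:], cycles  # the closed part of the walk
        g = min(x[d] for d in piece)
        for d in piece:
            x[d] -= g
            if x[d] == 0:
                del x[d]                          # at least one dart vanishes
        store.append((g, piece))
    return paths, cycles

paths, cycles = decompose(
    {("s", "a"): 3, ("a", "t"): 3, ("a", "b"): 1, ("b", "a"): 1}, "s", "t")
assert sum(g for g, _ in paths) == 3 and len(cycles) == 1
```

Each iteration deletes at least one dart from the support, so the loop runs at most |support(x)| times, matching the bound of the theorem.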

Two formulations of the maximum flow problem

The value of an s-t-flow is the net amount of flow leaving the source δsTx, which is the

same as the net amount of flow entering the sink −δtTx. We now introduce the maximum

flow problem, which asks for a flow of maximum value.

Problem 2.56 (Maximum flow problem).

Given: a graph G = (V,E), a capacity function c :←→E → R+, two vertices s, t ∈ V

Task: Find an s-t-flow x in G respecting the capacities c that maximizes δsTx.


Before we discuss approaches to solve the maximum flow problem, we use the flow de-

composition theorem (Theorem 2.55) to (re-)formulate it and its dual, called the minimum

cut problem, as linear programs on the set of simple s-t-paths of the graph G.

Problem 2.57 (Maximum flow problem/minimum cut problem).

Given: a graph G = (V,E), c :←→E → R+, s, t ∈ V

Task: Find optimal solutions to

(MF)  max ∑_{P∈P} y(P)

      s.t.  ∑_{P∈P: d∈P} y(P) ≤ c(d)   ∀d ∈ ←→E

            y(P) ≥ 0   ∀P ∈ P

and

(MC)  min ∑_{d∈←→E} c(d)x(d)

      s.t.  ∑_{d∈P} x(d) ≥ 1   ∀P ∈ P

            x(d) ≥ 0   ∀d ∈ ←→E

where P is the set of all simple s-t-paths in G.

Note that the above formulation of the maximum flow problem is in fact a path formu-

lation as it does not contain variables on cycles. These can be ignored, as they do not

contribute to the objective function value of the solution. Thus, the path formulation and

the formulation using variables on the arcs are equivalent.

Lemma 2.58.

(1) If y ∈ RP is a feasible solution of (MF), then x := ∑_{P∈P} y(P)δP ∈ RE is a feasible

solution to the maximum flow problem with arc variables and δsTx = ∑_{P∈P} y(P).

(2) If x ∈ RE is a feasible solution of the maximum flow problem and y ∈ RP∪C is a

generalized decomposition of x, then the restriction ȳ ∈ RP with ȳ(P) := y(P) for

all P ∈ P is a feasible solution of (MF) and δsTx = ∑_{P∈P} ȳ(P).

Proof.

(1) By Lemma 2.53, x is an s-t-flow. It also respects the capacities, as the feasibility

of y implies x(d) = ∑_{P∈P: d∈P} y(P) − ∑_{P∈P: rev(d)∈P} y(P) ≤ c(d). The value of x is

δsTx = ∑_{P∈P} y(P)δsTδP = ∑_{P∈P} y(P).

(2) ȳ is feasible, as ∑_{P∈P: d∈P} ȳ(P) ≤ ∑_{P∈P∪C: d∈P} y(P) = x(d) ≤ c(d) for all d ∈ ←→E.

Furthermore, δsTx = ∑_{P∈P∪C} y(P)δsTδP = ∑_{P∈P} ȳ(P), as δsTδP = 0 for all P ∈ C.
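Part (1) of the lemma is constructive: the arc flow is obtained by summing weighted characteristic vectors of the paths. A small Python sketch (our own names, not from the thesis):

```python
from collections import defaultdict

def paths_to_arc_flow(y):
    """Lemma 2.58 (1) as code: given (weight, path) pairs, return the arc
    flow x = sum_P y(P) * delta_P; its value is the total path weight."""
    x = defaultdict(float)
    for weight, path in y:
        for dart in path:
            x[dart] += weight
    return dict(x)

y = [(2.0, [("s", "a"), ("a", "t")]),
     (1.0, [("s", "a"), ("a", "b"), ("b", "t")])]
x = paths_to_arc_flow(y)
assert x[("s", "a")] == 3.0 and x[("a", "t")] == 2.0
# value of x at s equals the sum of the path weights:
assert sum(v for (u, _), v in x.items() if u == "s") == 3.0
```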


2.4.2 Max-flow/min-cut and the algorithm of Ford and Fulkerson

The max-flow/min-cut theorem is probably the most important result in network flow

theory. It states that the value of a maximum flow is equal to the value of a minimum

cut (which we shall define shortly). We give a constructive proof of the theorem due to

Ford and Fulkerson [12], which also gives rise to a general path augmenting algorithm for

solving the maximum flow problem.

The dual program (MC) of the maximum flow problem (MF ) presented above motivates

the idea of considering certain cuts, namely those that separate s from t, in order to get

an upper bound on the value of a maximum s-t-flow.

Definition 2.59 (s-t-cut). An s-t-cut is a cut set Γ+(S) for some S ⊆ V with s ∈ S and

t /∈ S. Its capacity is∑

d∈Γ+(S) c(d).

Lemma 2.60. Let Γ+(S) be an s-t-cut and x be an s-t-flow respecting capacities c. Then

δsTx ≤

∑d∈Γ+(S) c(d).

Proof. δsTx = ∑_{v∈S} δvTx = ∑_{d∈Γ+(S)} x(d) ≤ ∑_{d∈Γ+(S)} c(d).

Hence, the value of a maximum s-t-flow is less than or equal to the capacity of a minimum

s-t-cut. In 1956, Ford and Fulkerson published their famous result that in fact equality

holds [10]. The proof we give here is based on the observation that the value of a flow can

be increased by augmenting flow along a path of residual darts, an observation that also carries an

important algorithmic idea [12].

Theorem 2.61. Let x be an s-t-flow respecting capacities c. x has maximum value δsTx

if and only if there is no residual s-t-path with respect to x.

Proof.

“⇒”: Suppose x is optimal. By contradiction assume there is a simple residual path P .

Let γ := min{c(d) − x(d) : d ∈ P} > 0 and define x̄ := x + γδP. x̄ is an s-t-flow

by Lemma 2.53. Furthermore, x̄(d) = x(d) ≤ c(d) for all d with d, rev(d) /∈ P, and

x̄(d) = x(d) + γ ≤ c(d) and x̄(rev(d)) = x(rev(d)) − γ < x(rev(d)) ≤ c(d) for all

d ∈ P. Finally, δsTx̄ = δsTx + γδsTδP = δsTx + γ, contradicting the optimality of x.

“⇐”: Suppose there is no residual s-t-path. Let R := {d ∈←→E : x(d) < c(d)} and

S := {v ∈ V : s →R v}. Clearly, s ∈ S but t /∈ S and hence S induces an s-t-cut

C. For every d ∈ C there is a residual s-tail(d)-path but no residual s-head(d)-path.

Suppose x(d) < c(d) for some d ∈ C, then we can extend the residual s-tail(d)-path

by d to a residual s-head(d)-path, a contradiction. So x(d) = c(d) for all d ∈ C

and hence δsTx = δSTx = ∑_{d∈C} x(d) = ∑_{d∈C} c(d), which by Lemma 2.60 is the

maximum value an s-t-flow can achieve, i.e., x is optimal.

maximum value an s-t-flow can achieve, i.e., x is optimal.


In the “⇐”-part of the above proof, we construct a cut of the same value as a given maximum

flow, which hence is a minimum cut. Furthermore, by the compactness of the flow polytope

{x ∈ RE : δvTx = 0 ∀v ∈ V \ {s, t},−c(←−e ) ≤ x(e) ≤ c(−→e ) ∀e ∈ E}, there always exists

a maximum flow. This yields the famous max-flow/min-cut theorem:

Theorem 2.62 (Max-flow/min-cut). The value of a maximum s-t-flow is equal to the

capacity of a minimum s-t-cut.

This also implies that an s-t-cut of minimum capacity indeed is an optimal solution to

problem (MC). The publication of the max-flow/min-cut theorem and the concept of residual paths

gave rise to a wealth of efficient algorithms solving the maximum s-t-flow problem. The

most basic version of this class of flow augmenting algorithms (though not being efficient

in its general formulation) was proposed by Ford and Fulkerson themselves in [12] as a

generalization of their uppermost path algorithm for maximum flow in s-t-planar graphs

[10].

Algorithm 2.63 (Algorithm of Ford and Fulkerson).

Input: a graph G = (V,E), s, t ∈ V , c ∈ R←→E+

Output: an s-t-flow x in G respecting the capacities c that maximizes δsTx

1: Initialize x = 0.

2: while there is a residual s-t-path P do

3: Augment x by min{c(d)− x(d) : d ∈ P} units of flow along P .

4: end while

5: return x.
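For concreteness, here is a compact Python sketch of Algorithm 2.63; this is our own implementation, not the thesis'. Capacities live on darts, the reverse dart of every dart is assumed present (possibly with capacity 0, which forces nonnegative flow on that edge), and residual s-t-paths are found by depth-first search.

```python
def ford_fulkerson(cap, s, t):
    """Algorithm 2.63 sketch: cap maps every dart (u, v) to its capacity,
    and for each dart the reverse dart (v, u) must also be a key of cap.
    Returns the value of a maximum s-t-flow (integral if cap is integral)."""
    x = {d: 0 for d in cap}                        # antisymmetric dart flow

    def residual_path():
        """Depth-first search for an s-t-path of residual darts."""
        stack, pred = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:                             # walk predecessors back to s
                path = []
                while pred[u] is not None:
                    path.append(pred[u])
                    u = pred[u][0]
                return path[::-1]
            for d in cap:
                if d[0] == u and d[1] not in pred and x[d] < cap[d]:
                    pred[d[1]] = d
                    stack.append(d[1])
        return None

    while (P := residual_path()) is not None:
        g = min(cap[d] - x[d] for d in P)          # bottleneck residual capacity
        for (u, v) in P:
            x[(u, v)] += g
            x[(v, u)] -= g                         # keep x antisymmetric
    return sum(x[d] for d in cap if d[0] == s)     # value delta_s^T x

cap = {("s", "a"): 2, ("a", "s"): 0, ("s", "b"): 1, ("b", "s"): 0,
       ("a", "b"): 1, ("b", "a"): 0, ("a", "t"): 1, ("t", "a"): 0,
       ("b", "t"): 2, ("t", "b"): 0}
assert ford_fulkerson(cap, "s", "t") == 3
```

Decreasing x on the reverse dart is exactly what makes a dart with x(d) < c(d) residual even when c(d) = 0, so the search can "undo" flow sent earlier.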

Lemma 2.64. If the capacities c are integral, the algorithm of Ford and Fulkerson termi-

nates after at most T iterations, where T is the value of a maximum flow, and returns an

integral maximum flow.

Proof. After initialization, x is the zero-flow and hence respects the capacities. As seen in

the proof of Theorem 2.61, this invariant is maintained during the augmenting iterations.

Furthermore, if c is integral, the amount of flow that is augmented along P is integral,

and hence x stays integral and its value increases by at least 1 in every iteration. So after

at most T iterations, x has value T and hence there is no residual path by Theorem 2.61,

causing the algorithm to terminate. The algorithm cannot terminate before x reaches


value T , because in this case x is not optimal and there is a residual path by Theorem

2.61.

Note that this description does not specify which augmenting paths are chosen and in

which order. In fact, there exist examples with irrational capacities where the Ford and

Fulkerson algorithm does not terminate at all and the value of the flow does not even

converge to the maximum value possible. Even if we have integral capacities, and the

algorithm thus terminates, its running time can be exponential in the input size.

However, if we specify the way in which the augmenting paths are chosen, we can

easily achieve polynomial running time. For example, we could choose a residual path consisting

of the minimum number of darts. Then the algorithm terminates after at most |E||V| iterations [25]. For the case that the input graph is s-t-planar, Ford and Fulkerson showed

already in their first publication on the maximum flow problem [10], that by choosing

the “uppermost” residual path, the number of iterations can be bounded by the number

of edges. This uppermost path algorithm will play a central role in this thesis and in

later sections we shall see that it is indeed a special case of a general greedy algorithm

that solves a wide variety of problems and also allows for certain weight functions on the

paths. We shall also present a simple implementation of this algorithm with running time

O(|V | log(|V |)).


3 Lattices and the two-phase greedy

algorithm

3.1 Introduction to lattices

Lattices are a special class of partially ordered set families. They are of particular interest

from an optimization point of view, as their structure can be exploited by greedy algo-

rithms to efficiently optimize linear objective functions over so-called lattice polyhedra.

We give a short introduction to lattices and some examples. We furthermore introduce

covering and packing problems and show that the inequality systems defining these prob-

lems are totally dual integral if the underlying structure is a submodular and consecutive

lattice, a result first presented by Hoffman and Schwartz [19].

3.1.1 Lattices

Before we can define lattices, we first need to introduce partial orders. They are a gener-

alization of total orders like, e.g., the familiar ≤-relation on R.

Definition 3.1 (Partial order). Let P be a set. A relation ⪯ on P is a partial order if it

fulfills the following three properties.

(1) ∀x ∈ P : x ⪯ x (reflexivity)

(2) ∀x, y ∈ P : x ⪯ y and y ⪯ x implies x = y. (anti-symmetry)

(3) ∀x, y, z ∈ P : x ⪯ y and y ⪯ z implies x ⪯ z. (transitivity)

We say two elements x, y are comparable if x ⪯ y or y ⪯ x. Otherwise, we say the elements

are incomparable. A chain is a subset S ⊆ P such that all elements of S are pairwise

comparable.

Notation. Instead of “x ⪯ y” we will also write “y ⪰ x”. Furthermore, we write “x ≺ y”

and “x ≻ y” for “x ⪯ y and x ≠ y” and “x ⪰ y and x ≠ y”, respectively.

An example for a partial order, which will also play an important role in this chapter,

is the set inclusion relation.

Example 3.2. Given a set E, the set inclusion relation “⊆” is a partial order on 2E .


Note that in partially ordered sets not all pairs of elements must be comparable (as

this is what distinguishes partial orders from total orders). However, we sometimes wish

to have a least common upper and largest common lower bound on any pair of elements.

This leads to the definition of lattices. We restrict our definition of lattices to those that

are defined on set systems, as all lattices considered in the thesis have this form and it

furthermore allows us to define the notion of submodularity.

Definition 3.3 (Lattice). Let E be a finite set, L ⊆ 2E and ⪯ be a partial order on L.

(L, ⪯) is a lattice if there are two binary operators ∧,∨ : L × L → L such that for all

R, S, T ∈ L the following conditions are fulfilled.

• S ⪰ S ∧ T and T ⪰ S ∧ T

• S ⪯ S ∨ T and T ⪯ S ∨ T

• R ⪯ S and R ⪯ T implies R ⪯ S ∧ T.

• R ⪰ S and R ⪰ T implies R ⪰ S ∨ T.

S ∧ T is called the meet and S ∨ T the join of S and T.

Note that ∧ and ∨ are uniquely determined by ⪯ due to anti-symmetry. They are the

unique maximal lower and minimal upper bound of two elements. It is also straightforward

to check that every lattice has a unique maximum and a unique minimum element.

Lemma 3.4. Let (L, ⪯) be a lattice. Then there is a unique element maxL with maxL ⪰ S for all S ∈ L and a unique element minL with minL ⪯ S for all S ∈ L.

Proof. For S ∈ L define γ(S) := |{T ∈ L : T ⪯ S}|. As L is a finite set, there is an

element U with γ(U) = max{γ(S) : S ∈ L}. Assume by contradiction there is an element

S that does not satisfy S ⪯ U. Then S ∨ U ≻ U and by transitivity γ(S ∨ U) > γ(U), a contradiction.

Thus U ⪰ S for all S ∈ L. Furthermore, U is unique with this property by anti-symmetry,

and defining maxL := U fulfills all requirements. The existence and uniqueness of the

minimum element follow analogously.

Sub- and supermodularity of functions and lattices are important properties that often

occur in combinatorial optimization problems and help to solve these problems efficiently.

They can be seen as a form of discretized convexity or concavity, respectively.

Definition 3.5. Let (L, ⪯) be a lattice. A function f : L → R is called

• submodular, if f(S ∧ T ) + f(S ∨ T ) ≤ f(S) + f(T ) for all S, T ∈ L.

• supermodular, if f(S ∧ T ) + f(S ∨ T ) ≥ f(S) + f(T ) for all S, T ∈ L.

• modular, if it is submodular and supermodular.


f is monotone increasing if S ⪯ T implies f(S) ≤ f(T) for all S, T ∈ L. A lattice is

submodular (supermodular, modular) if for every x ∈ RE with x ≥ 0, the function x(S) := ∑_{e∈S} x(e)

is submodular (supermodular, modular). A lattice (or a partial order on a set

system in general) is consecutive if S ∩ U ⊆ T for all S, T, U ∈ L with S ⪯ T ⪯ U.

One of the most important examples for lattices is the boolean lattice of a finite set.

Example 3.6 (boolean lattice). For any finite set E, the boolean lattice (2E ,⊆) is a

lattice with meet operator ∩ and join operator ∪. It is consecutive, as S ⊆ T ⊆ U implies

S ∩ U = S ⊆ T. It also is modular, as

∑_{e∈S} x(e) + ∑_{e∈T} x(e) = ∑_{e∈S\T} x(e) + ∑_{e∈S∩T} x(e) + ∑_{e∈T\S} x(e) + ∑_{e∈S∩T} x(e)

                            = ∑_{e∈S∪T} x(e) + ∑_{e∈S∩T} x(e)

for all S, T ⊆ E.
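The modular equality of the boolean lattice can be checked numerically; the following short Python sketch (ours, not from the thesis) verifies it exhaustively on a three-element ground set with an arbitrary nonnegative vector x.

```python
from itertools import combinations, product

# Numeric check of the modular equality of Example 3.6 on the boolean
# lattice 2^E with E = {0, 1, 2}: meet is intersection, join is union.

E = (0, 1, 2)
x = {0: 1.5, 1: 2.0, 2: 0.5}                  # an arbitrary nonnegative vector

def val(S):                                    # x(S) = sum of x(e) for e in S
    return sum(x[e] for e in S)

subsets = [frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)]
assert all(val(S & T) + val(S | T) == val(S) + val(T)
           for S, T in product(subsets, repeat=2))
```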

The following lemma gives a useful characterization of sub- and supermodularity of

lattices.

Lemma 3.7. A lattice (L, ⪯) is submodular if and only if

(S ∧ T) ∩ (S ∨ T) ⊆ S ∩ T  and  (S ∧ T) ∪ (S ∨ T) ⊆ S ∪ T

for all S, T ∈ L. It is supermodular if and only if

S ∩ T ⊆ (S ∧ T) ∩ (S ∨ T)  and  S ∪ T ⊆ (S ∧ T) ∪ (S ∨ T)

for all S, T ∈ L.

Proof. We only prove the statement on submodular lattices; the statement for supermodularity

follows analogously by flipping ⪯ and ⊆ to ⪰ and ⊇.

“⇒”: Let S, T ∈ L. For e ∈ E define xe ∈ RE by xe(e) := 1 and xe(f) := 0 for all f ∈ E \ {e}.

By submodularity, xe(S ∧ T) + xe(S ∨ T) ≤ xe(S) + xe(T). If e ∈ (S ∧ T) ∪ (S ∨ T),

we have xe(S) + xe(T) ≥ xe(S ∧ T) + xe(S ∨ T) ≥ 1 and hence e ∈ S or e ∈ T, i.e.,

e ∈ S ∪ T. If e ∈ (S ∧ T) ∩ (S ∨ T), we have xe(S) + xe(T) ≥ xe(S ∧ T) + xe(S ∨ T) = 2

and hence e ∈ S and e ∈ T, i.e., e ∈ S ∩ T.

“⇐”: Let x ∈ RE with x ≥ 0. Let S, T ∈ L. Define I := (S ∧ T ) ∩ (S ∨ T ) and

U := (S ∧ T ) ∪ (S ∨ T ). Then

x(S ∧ T ) + x(S ∨ T ) = x(I) + x(U) ≤ x(S ∩ T ) + x(S ∪ T ) = x(S) + x(T )

where the two equalities are due to the modularity of the boolean lattice, and the

central inequality follows from I ⊆ S∩T , U ⊆ S∪T and the non-negativity of x.


Corollary 3.8. A consecutive lattice (L, ⪯) is submodular if and only if (S ∧ T) ∪ (S ∨ T) ⊆ S ∪ T for all S, T ∈ L.

Proof. S ∧ T ⪯ S, T ⪯ S ∨ T implies (S ∧ T) ∩ (S ∨ T) ⊆ S ∩ T by consecutivity, so the claim follows from Lemma 3.7.

We apply the above corollary on another important example for lattices, the cut lattice

of a graph.

Example 3.9 (cut lattice). Let G = (V,E) be a connected graph and s, t ∈ V . Let

L = {Γ+(S) : S ⊆ V, s ∈ S, t /∈ S} be the set of all s-t-cuts in G. As G is connected,

{δv : v ∈ V \ {t}} is a basis of Scut(G), implying that the vertex set S inducing a cut C is

unique. Thus, we can define the relation ⪯ for C1, C2 ∈ L by

C1 ⪯ C2 :⇔ S1 ⊆ S2

for the unique vertex sets S1, S2 ⊆ V with C1 = Γ+(S1) and C2 = Γ+(S2), and get a

lattice with meet C1 ∧ C2 = Γ+(S1 ∩ S2) and join C1 ∨ C2 = Γ+(S1 ∪ S2). (L, ⪯) is called

the cut lattice. Its minimum element is called leftmost cut, its maximum element is called

rightmost cut.

If S1 ⊆ S2 ⊆ S3, then d ∈ Γ+(S1) ∩ Γ+(S3) implies tail(d) ∈ S1 ⊆ S2 and head(d) /∈ S3 ⊇ S2, so head(d) /∈ S2. Hence, Γ+(S1) ∩ Γ+(S3) ⊆ Γ+(S2), i.e., the cut lattice is consecutive.

If d ∈ Γ+(S1 ∩ S2), then tail(d) ∈ S1 ∩ S2 and head(d) /∈ S1 ∩ S2; if d ∈ Γ+(S1 ∪ S2), then

tail(d) ∈ S1 ∪ S2 and head(d) /∈ S1 ∪ S2. In both cases, d ∈ Γ+(S1) or d ∈ Γ+(S2). This

implies Γ+(S1 ∩ S2) ∪ Γ+(S1 ∪ S2) ⊆ Γ+(S1) ∪ Γ+(S2), and, by consecutivity and Corollary 3.8, the cut lattice is submodular.
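The cut lattice is concrete enough to play with in code. The following Python sketch (names ours, not from the thesis) represents an s-t-cut by its inducing vertex set, forms meet and join as Γ+ of intersection and union, and checks the submodular inequality on cut sizes for one pair of cuts.

```python
# Sketch of the cut lattice of Example 3.9 on a small digraph.

def gamma_plus(darts, S):
    """Gamma^+(S): the darts leaving the vertex set S."""
    return {d for d in darts if d[0] in S and d[1] not in S}

darts = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
S1, S2 = {"s", "a"}, {"s", "b"}
meet = gamma_plus(darts, S1 & S2)             # the cut induced by S1 "meet" S2
join = gamma_plus(darts, S1 | S2)             # the cut induced by S1 "join" S2
# submodularity of cut sizes: |meet| + |join| <= |cut(S1)| + |cut(S2)|
assert len(meet) + len(join) <= len(gamma_plus(darts, S1)) + len(gamma_plus(darts, S2))
```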

For later algorithmic use, it will be important that the restriction of a submodular

lattice is again a lattice.

Lemma 3.10. Let (L, ⪯) be a submodular lattice on a finite ground set E. Let F ⊆ E.

Define L[F] := {S ∈ L : S ⊆ F}. Then (L[F], ⪯) also is a submodular lattice.

Proof. We only need to show that S∧T, S∨T ∈ L[F ] for all S, T ∈ L[F ]. Let S, T ∈ L[F ].

Then S, T ⊆ F and hence, by submodularity, S ∧ T, S ∨ T ⊆ S ∪ T ⊆ F .

3.1.2 Covering and packing problems

Covering and packing problems comprise a very general class of combinatorial optimization

problems. Among them there are many well-known problems, as, e.g., minimum vertex

cover and maximum matching. In particular, we shall see that the maximum s-t-flow

problem can be formulated as a packing problem. This will give rise to a correctness proof

of the uppermost path algorithm based on lattice structure, as well as a generalization of

the problem to a weighted version.

The general formulation of the covering problem and its dual, the packing problem, in

terms of linear programming is the following.


Problem 3.11 (Covering problem/packing problem).

Given: a finite set E, a set system L ⊆ 2E , c ∈ RE , r ∈ RL

Task: Find optimal solutions to

(C)  min ∑_{e∈E} c(e)x(e)

     s.t.  ∑_{e∈S} x(e) ≥ r(S)   ∀S ∈ L

           x(e) ≥ 0   ∀e ∈ E

and

(P)  max ∑_{S∈L} r(S)y(S)

     s.t.  ∑_{S∈L: e∈S} y(S) ≤ c(e)   ∀e ∈ E

           y(S) ≥ 0   ∀S ∈ L.

If, additionally, x and y are required to be integral, the covering (packing) problem is

called integral covering (packing) problem.

From the primal (covering) point of view, c is a cost vector on the elements of the

ground set E, while r gives the required amount by which a set in the family L must be

covered. From the dual (packing) point of view, r can be seen as a reward vector on the

set family and c can be interpreted as a capacity function on the ground set. We state

two examples of integral covering problems, the minimum vertex cover problem and the

shortest s-t-path problem, which is based on the cut lattice introduced in Example 3.9.

Example 3.12 (Minimum vertex cover). In the definition of (C), let E be the set of

vertices of a graph, L be the set of its edges (each edge interpreted as set of its two

endpoints), and r ≡ 1, c ≡ 1. If we require x to be integral, an optimal solution to (C) is

a minimum vertex cover.

This example already yields an NP-hardness result for the integral covering problem,

as the vertex cover problem is NP-hard.1

Corollary 3.13. The integral covering problem is NP-hard.

According to the above corollary, we need to restrict ourselves to special cases of the

problem in order to find a polynomial time algorithm. A still very general class of covering

problems is comprised by lattice polyhedra, i.e., those instances of (P) and (C) where L is a consecutive and submodular lattice and r is a supermodular function on L. In fact,

1 There also is a straightforward direct reduction of the satisfiability problem to a covering problem,

choosing the set of literals as ground set and including all clauses and all pairs of literals corresponding

to the same variable into the set system.


we will show later that the covering and packing problem in this case can be solved in

polynomial time whenever r is monotone increasing. Many natural problems fulfill these

requirements.

Example 3.14 (Shortest s-t-path). For some graph G = (V,E) with s, t ∈ V let L be

the cut lattice of G. Consider the integral covering problem on the lattice L with ground

set ←→E , r ≡ 1 and c : ←→E → R+. Let x be a feasible solution. As support(x) intersects

with every s-t-cut, it must contain a simple s-t-path P. Define x̄ by x̄(d) := 1 if d ∈ P and

x̄(d) := 0 if d /∈ P; then x̄ is a feasible solution (as P covers all cuts) of at most the same

objective value as x (as c is non-negative). Hence, the covering problem on the cut lattice

corresponds to the problem of finding an s-t-path P with minimal cost ∑_{d∈P} c(d).
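The covering problem on the cut lattice is thus an ordinary shortest s-t-path problem, and since c is nonnegative it can be solved, e.g., by Dijkstra's algorithm. A generic Python sketch (not the thesis' own implementation):

```python
import heapq

def shortest_path_cost(darts, c, s, t):
    """Minimum total cost sum of c(d) over the darts of an s-t-path,
    computed by a plain Dijkstra scan over the dart list."""
    dist, heap = {}, [(0, s)]
    while heap:
        dv, v = heapq.heappop(heap)
        if v in dist:
            continue                      # already settled with a smaller label
        dist[v] = dv
        for d in darts:
            if d[0] == v:
                heapq.heappush(heap, (dv + c[d], d[1]))
    return dist.get(t)

darts = [("s", "a"), ("a", "t"), ("s", "t")]
c = {("s", "a"): 1, ("a", "t"): 1, ("s", "t"): 3}
assert shortest_path_cost(darts, c, "s", "t") == 2
```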

In the next subsection we will present a result on lattice polyhedra by Hoffman and

Schwartz that implies that the inequality system underlying the shortest s-t-path problem

(and many other problems) is totally dual integral, and thus we actually do not need to

require integrality of the solution explicitly.

3.1.3 Total dual integrality of lattice polyhedra

We have seen that we can describe the NP-hard vertex cover problem as an integral

covering problem. It is also easy to see that the polyhedral description we have given for

this problem is not integral in general (take for example just a complete graph of three

vertices). However, there is a large class of packing and covering problems with integral

polyhedra. Hoffman and Schwartz have shown that if the underlying set system L is a

consecutive and submodular lattice and the function r is supermodular and integral, the

inequality system describing the polyhedron is totally dual integral [19]. Thus, (P ) and

also (C) have integral solutions in this case whenever c is integral. The main idea of the

proof is a classical uncrossing technique that yields an optimal solution whose support

is a chain in the lattice. The version of the proof we cite here is taken from a lecture

on integer linear programming at TU Berlin [28], but refined by a secondary objective

function argument that ensures the existence of an uncrossed optimal solution.

Theorem 3.15. Let (L, ⪯, ∧, ∨) be a consecutive and submodular lattice on a finite ground

set E and r : L → R be supermodular. Then the system

∑_{e∈S} x(e) ≥ r(S)   ∀S ∈ L

x(e) ≥ 0   ∀e ∈ E

is totally dual integral.

Proof. We show that for any c ∈ ZE+ the dual program

(P)  max ∑_{S∈L} r(S)y(S)

     s.t.  ∑_{S∈L: e∈S} y(S) ≤ c(e)   ∀e ∈ E

           y(S) ≥ 0   ∀S ∈ L


has an integral optimal solution. We first show that there always exists an optimal solution

y∗ such that support(y∗) is a chain.

For an arbitrary constant Z > 2, define z ∈ RL by z(S) := Z^γ(S) with γ(S) := |{T ∈ L : T ⪯ S}| for all S ∈ L. Note that z is “strictly” supermodular, as

z(S) + z(T) = Z^γ(S) + Z^γ(T) < Z^(max{γ(S),γ(T)}+1) ≤ Z^γ(S∨T) ≤ z(S ∧ T) + z(S ∨ T)

solution of

max { ∑_{S∈L} z(S)y(S) : y is an optimal solution of (P) }.

In particular, y∗ is an optimal solution of (P ). By contradiction assume there are two

incomparable elements T1, T2 ∈ support(y∗). Define ε := min{y∗(T1), y∗(T2)} and

y(S) :=  y∗(S) − ε   if S ∈ {T1, T2},
         y∗(S) + ε   if S ∈ {T1 ∧ T2, T1 ∨ T2},
         y∗(S)       otherwise.

Clearly, y is non-negative, and as submodularity of L implies

∑_{S∈L: e∈S} y(S) = ∑_{S∈L: e∈S} y∗(S) + ε · (1_{T1∧T2}(e) + 1_{T1∨T2}(e) − 1_{T1}(e) − 1_{T2}(e)) ≤ ∑_{S∈L: e∈S} y∗(S) ≤ c(e),

where the term multiplied by ε is at most 0, y is a feasible solution to (P). Similarly, ∑_{S∈L} r(S)y(S) ≥ ∑_{S∈L} r(S)y∗(S) by supermodularity of r. Therefore y is an optimal solution of (P). Finally, again by the same

argument, the strict supermodularity of z implies ∑_{S∈L} z(S)y(S) > ∑_{S∈L} z(S)y∗(S),

modularity of r. Therefore y is an optimal solution of (P ). Finally, again by the same

argument, the strict supermodularity of z implies∑

S∈L z(S)y(S) >∑

S∈L z(S)y∗(S),

which is a contradiction to the maximality of y∗ w.r.t. z. Thus, L′ := support(y∗) must

be a chain in L.

Now consider the matrix A ∈ {0, 1}L′×E with AS,e = 1 if e ∈ S and AS,e = 0 if e /∈ S.

As L′ is a chain, we can order the rows with respect to ⪯. Then A is an interval matrix

by consecutivity of L, and as such it is totally unimodular (cf. Example 2.10). Hence,

the linear program max{rT y : AT y ≤ c, y ≥ 0} has an integral optimal solution y′ ∈ ZL′+

with ∑_{S∈L′} r(S)y′(S) ≥ ∑_{S∈L′} r(S)y∗(S). Consequently, extending y′ by y′(S) := 0 for

all S ∈ L \ L′ yields an integral optimal solution of (P ).

3.2 The two-phase greedy algorithm

We now want to present a two-phase greedy algorithm that exploits the structure of

consecutive and submodular lattices to solve covering and packing problems efficiently


whenever the function r is supermodular and monotone increasing and the lattice is given

by a certain oracle.

The idea goes back to Frank [13], who presented an algorithm for an even more general

framework than discussed here and applied it to solve certain connectivity problems on

directed graphs. Amongst other things, Frank’s algorithm can solve the minimum spanning

arborescence problem, using a maximum flow computation as oracle. In [9], Faigle and Peis

later established a similar algorithm for supermodular lattices, which generalizes Edmonds’

greedy algorithm for polymatroids (cf. [7]). Besides this extension, the concepts of [9]

can be used to show correctness of a simpler but slightly less general version of Frank’s

algorithm, fitting into the framework of submodular lattice polyhedra discussed here.²

We will first discuss this algorithm based on a simple implementation and later present

an improved implementation that achieves subquadratic running time of all non-oracle

operations.

3.2.1 A simple implementation

We will now present a simple implementation of the two-phase greedy algorithm that

provides an intuitive understanding of its modus operandi. The basic concept of the

algorithm relies on complementary slackness. In the first phase, the greedy algorithm constructs a feasible dual solution; in the second, it constructs a feasible primal solution. The construction is performed in such a manner that the two solutions fulfill the complementary slackness conditions and are thus optimal.

Oracles

We have seen that covering and packing problems on consecutive and submodular lattices

always bear integral solutions if r is supermodular and r and c are integral. Thus, integral

solutions can be obtained by solving the underlying linear programs. However, the time

required for this approach is exponential in |E| if |L| is exponential in |E|.

In order to solve the problem in time polynomial in |E|, we require L and r not to be given explicitly but in the form of two oracles instead. We assume there is one oracle that

returns r(S) for any given S ∈ L, and another one that, given a subset F ⊆ E, returns

the maximum element of the restricted lattice L[E \ F ] (cf. Lemma 3.10). In most cases,

this is a modest requirement, as these two oracles can often be replaced by polynomial

time algorithms. As an example for such oracles, we consider once again the cut lattice.

Example 3.16. Let L be the cut lattice of a graph G = (V, E) and let F ⊆ ←→E. Then the maximum element of L[←→E \ F] is the cut induced by the set V \ {v ∈ V : v →F t} (unless s →F t – in this case the restricted lattice is empty, as F intersects with all s-t-cuts). This vertex set can, e.g., be computed in time linear in |F| by a backwards depth first search starting at t.

²In contrast to [9], the original presentation of the algorithm in [13] allows for "intersecting" submodular lattices and generates multiple parallel chains in its computation.
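For a concrete picture, the oracle of Example 3.16 can be sketched in a few lines of Python. This is a sketch under simplifying assumptions: darts are modeled as plain (tail, head) pairs rather than the dart data structure used in this thesis, and all function and parameter names are illustrative.

```python
from collections import defaultdict

def max_restricted_cut(vertices, darts, F, s, t):
    """Maximum element of the restricted cut lattice L[E \\ F] (Example 3.16):
    the s-t-cut induced by V \\ {v : v reaches t using only darts in F}.
    Returns None if s ->_F t, i.e. if F intersects every s-t-cut."""
    into = defaultdict(list)              # reverse adjacency over F only
    for (u, v) in F:
        into[v].append(u)
    reaches_t, stack = {t}, [t]           # backwards DFS starting at t
    while stack:
        v = stack.pop()
        for u in into[v]:
            if u not in reaches_t:
                reaches_t.add(u)
                stack.append(u)
    if s in reaches_t:
        return None                       # restricted lattice is empty
    S = set(vertices) - reaches_t         # inducing vertex set (contains s)
    return {(u, v) for (u, v) in darts if u in S and v not in S}
```

As claimed above, the backwards search only touches darts of F, so it runs in time linear in |F|.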

The implementation

Unfortunately, even with the oracles given, there is no known algorithm that can solve

packing and covering problems on lattice polyhedra in time polynomial in |E|. However,

if we additionally require that r is monotone increasing, a combinatorial greedy algorithm

exists. For better understanding, we first give a simple implementation of the algorithm,

similar to the one given in [9], where also a more general version for supermodular lattices

is established. Later we will show how a better running time can be achieved by a more

sophisticated implementation.

Algorithm 3.17 (Two-phase greedy algorithm (basic implementation)).

Input: a consecutive, submodular lattice L ⊆ 2^E \ {∅} (given by an oracle that returns max L[S] for S ⊆ E), a supermodular, monotone increasing function r : L → R+ (given by an oracle), c ∈ R^E

Output: an optimal solution x of (C), an optimal solution y of (P)

1: Initialize k = 0, x = 0, y = 0, c̄ = c.
2: // Phase 1
3: while L[E \ {e1, . . . , ek}] ≠ ∅ do
4:    k ← k + 1
5:    Let Mk = max L[E \ {e1, . . . , ek−1}].
6:    Choose ek with c̄(ek) = min{c̄(e) : e ∈ Mk}.
7:    y(Mk) ← c̄(ek)
8:    for all e ∈ Mk do c̄(e) ← c̄(e) − c̄(ek)
9: end while
10: // Phase 2
11: for i = k downto 1 do x(ei) ← r(Mi) − ∑_{j>i: ej∈Mi} x(ej)
12: return (x, y)
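A direct transcription of Algorithm 3.17 into Python may help to fix ideas. It is only a sketch under the stated oracle interface (one callable returning max L[S] or None, one callable for r); all names are chosen for illustration and are not the thesis's notation.

```python
def two_phase_greedy(E, max_oracle, r_oracle, c):
    """Sketch of Algorithm 3.17 (basic implementation).

    max_oracle(avail) -- maximum element of L restricted to `avail`,
                         as a frozenset, or None if the restriction is empty.
    r_oracle(M)       -- the value r(M).
    c                 -- dict mapping each element of E to its capacity/cost."""
    cbar = dict(c)                        # residual capacities (c-bar)
    chain, bottlenecks, y = [], [], {}
    removed = set()
    # Phase 1: greedily build a dual solution along a chain of lattice sets.
    while True:
        M = max_oracle(set(E) - removed)
        if M is None:
            break
        e = min(M, key=lambda el: cbar[el])       # bottleneck element e_k
        y[M] = cbar[e]                            # raise dual variable y(M_k)
        for el in M:
            cbar[el] -= cbar[e]
        chain.append(M)
        bottlenecks.append(e)
        removed.add(e)
    # Phase 2: fix primal values on the bottlenecks, last chain set first,
    # so that every chosen set's covering inequality holds with equality.
    x = {el: 0 for el in E}
    for i in reversed(range(len(chain))):
        x[bottlenecks[i]] = r_oracle(chain[i]) - sum(
            x[e] for e in bottlenecks[i + 1:] if e in chain[i])
    return x, y
```

Note that the dual solution is stored only on its support, as discussed in the running-time analysis below.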

Remark 3.18. Note that there is yet another requirement on the input of the algorithm: L must not contain the empty set as an element. This requirement is of purely technical nature and can easily be satisfied by adding a new artificial element ê of sufficiently high cost/capacity to every set in the lattice, i.e., L′ := {S ∪ {ê} : S ∈ L}, r′(S) := r(S \ {ê}) for S ∈ L′, c′(e) := c(e) for e ∈ E and c′(ê) := |E| · max{c(e) : e ∈ E} + 1. We can assume r(∅) = 0, as otherwise (C) is infeasible. But then x(ê) = 0 in any optimal solution of the modified problem, implying that it is equivalent to the original.

Intuitive understanding

Before we start with the analysis of the algorithm, we first want to give some intuition on

how it works. The first phase of the algorithm iteratively extracts the maximum element

from the lattice, deleting all lattice sets that contain a certain bottleneck element. It

increases the dual variable corresponding to this set as much as possible without violating

any capacity constraint. An element whose inequality becomes tight first, called the bottleneck element, is determined by the minimality of its residual capacity (or reduced cost, see below), which is maintained for every element throughout Phase 1. The bottleneck element is then removed from

the lattice and the procedure starts anew, until the lattice is empty. The second phase

then simply constructs a primal solution by setting the values of the bottleneck elements

in such a way that the inequalities corresponding to the sets chosen in Phase 1 are fulfilled

with equality – yielding complementary slackness of the returned pair of solutions.

The values c̄(e) can be interpreted as residual capacity or reduced cost of an element, depending on whether we take the packing or the covering point of view. For example, assume for a moment that we want to use the algorithm to solve the maximum flow problem in its path formulation (Problem 2.57; in s-t-planar graphs, the set of paths can actually be equipped with a consecutive and submodular lattice, as we shall see later). Then, for some dart d, from the packing point of view, c̄(d) is the residual capacity of the dart, i.e., the amount we can send through d in addition to the flow that is already on it. From the covering point of view, we want to cover every path as "cheaply" as possible. Then c̄(d) tells us what we actually have to pay (in terms of capacity) for including the dart in our cut. Choosing d to cover some path P means increasing the capacity of our cut by c̄(d). In turn, we can possibly remove other darts from the cut, which were previously chosen to cover paths that are now collaterally covered by d as well. This reduces the cost we actually pay for the dart – thus, c̄ can also be interpreted as reduced costs.

3.2.2 Correctness of the algorithm

We validate our intuition by proving the correctness of the simple implementation of the

algorithm, following the line of argumentation given in [9]. We start with a lemma that

states important properties of the elements computed in Phase 1, which will be used to

establish the feasibility of the primal solution computed in Phase 2. The most important

insight is that the lattice elements M1, . . . ,Mk comprise a chain, as this yields the potential

for a substantial improvement of the performance of the algorithm later on.


Lemma 3.19. After the termination of Phase 1, the elements M1, . . . , Mk, e1, . . . , ek fulfill the following properties.

(1) M1 ⪰ . . . ⪰ Mk

(2) ei ∈ Mi and ∀j < i : ej ∉ Mi

(3) ∀j > i : ej ∈ Mi ⇒ ej ∈ Mi+1

(4) ∀i < k : ∀S ∈ L : Mi ⪰ S ≻ Mi+1 ⇒ Mi ∩ {e1, . . . , ek} ⊆ S ∩ {e1, . . . , ek}

(5) ∀S ∈ L : Mk ⪰ S ⇒ S ∩ {e1, . . . , ek} = {ek} = Mk ∩ {e1, . . . , ek}

Proof.

(1) Mi+1 ∈ L[E \ {e1, . . . , ei}] ⊆ L[E \ {e1, . . . , ei−1}], implying that Mi+1 ≺ max L[E \ {e1, . . . , ei−1}] = Mi.

(2) Clear by construction.

(3) Mi ⪰ Mi+1 ⪰ Mj implies ej ∈ Mi ∩ Mj ⊆ Mi+1 by consecutivity.

(4) By (3) and consecutivity, ej ∉ S for j < i. So S ∈ L[E \ {e1, . . . , ei−1}]. But as S ≻ Mi+1 = max L[E \ {e1, . . . , ei}], this sublattice must not contain S, i.e., ei ∈ S. Furthermore, if ej ∈ Mi for some j > i, then ej ∈ Mi+1 by (1). Thus ej ∈ S by consecutivity.

(5) For j < k, consecutivity applied on Mj ⪰ Mk ⪰ S with ej ∈ Mj \ Mk yields ej ∉ S. But L[E \ {e1, . . . , ek}] = ∅, so ek ∈ S.

Theorem 3.20. The two-phase greedy algorithm for submodular lattices correctly returns

optimal solutions x of (C) and y of (P ). If r is integral, then x is integral. If c is integral,

then y is integral.

Proof. We first show that the dual solution y returned by the algorithm is a feasible solution of (P). To see this, we show that c̄(e) = c(e) − ∑_{S∈L: e∈S} y(S) for all e ∈ E at any point of the execution of the algorithm, except between the processing of line 7 and the processing of line 8. This is clearly true initially with y = 0 and c̄ = c. Now consider iteration i of Phase 1. The value of ∑_{S∈L: e∈S} y(S) only changes if e ∈ Mi, and if so, it is increased by exactly the same value as c̄(e) is decreased. So the invariant still holds true after the iteration is processed. Also note that, by non-negativity of c̄ = c after initialization and the choice of ei as element of minimum residual capacity in the ith iteration, c̄(e) ≥ 0 throughout the algorithm. So y ≥ 0 and

∑_{S∈L: e∈S} y(S) = c(e) − c̄(e) ≤ c(e)

after termination of the algorithm, i.e., y is feasible.

We now show that x is non-negative throughout the course of the algorithm. After initialization this is true. Consider the iteration with index i of Phase 2. Then ∑_{j>i: ej∈Mi} x(ej) ≤ ∑_{j: ej∈Mi+1} x(ej) = r(Mi+1) by (3) of Lemma 3.19 and the non-negativity of x before the assignment. Thus, after the assignment,

x(ei) = r(Mi) − ∑_{j>i: ej∈Mi} x(ej) ≥ r(Mi) − r(Mi+1) ≥ 0

by monotonicity of r.

Furthermore, we have

∑_{S∈L: ei∈S} y(S) = c(ei)   ∀i ∈ {1, . . . , k},

as this is true at the end of iteration i of Phase 1 and still holds true at the termination of Phase 1, as ei ∉ Mj for j > i. We also have

∑_{e∈Mi} x(e) = x(ei) + ∑_{j>i: ej∈Mi} x(ej) = r(Mi)   ∀i ∈ {1, . . . , k},

as this is true at the end of the iteration with index i of Phase 2 and the values x(ej) are not changed for j ≥ i after that iteration. This yields optimality of x and y once we have shown that x is also feasible.

To show the feasibility of x, we define the submodular function h(S) := ∑_{e∈S} x(e) − r(S) and show that it is non-negative, implying feasibility of x. By contradiction, assume that there is an S ∈ L with h(S) < 0. Without loss of generality, let S be maximal with this property. Let i := max{j ∈ {1, . . . , k} : Mj ⪰ S} (the maximum exists as M1 ⪰ S). Then

• h(Mi) = 0, as ∑_{e∈Mi} x(e) = r(Mi).

• h(S ∨ Mi) ≥ 0 by choice of S.

• h(S ∧ Mi) ≥ 0 in any of the possible cases:

  – If i < k and S ∧ Mi = Mi+1, then ∑_{e∈Mi+1} x(e) = r(Mi+1).

  – If i < k and S ∧ Mi ≻ Mi+1, then ∑_{e∈S∧Mi} x(e) ≥ ∑_{e∈Mi} x(e) = r(Mi) ≥ r(S ∧ Mi) by (4) of Lemma 3.19 and non-negativity of x.

  – If i = k, then ∑_{e∈S∧Mi} x(e) = x(ek) = r(Mk) ≥ r(S ∧ Mi) by (5) of Lemma 3.19.

Thus, by submodularity, h(S) = h(S) + h(Mi) ≥ h(S ∧ Mi) + h(S ∨ Mi) ≥ h(S ∧ Mi) ≥ 0, a contradiction. So x and y are feasible and fulfill the complementary slackness conditions, implying optimality (cf. Theorem 2.4). The integrality results follow directly from the construction of x and y.


If we want to analyze the running time of the algorithm, the first difficulty we encounter is the encoding of y. As |L| may very well be exponential in |E|, an explicit encoding of y would render the algorithm highly inefficient. Storing only the sets in the support of y and the values assigned to them is more reasonable. Using this idea, the basic implementation can easily be seen to run in time O(|E|² + TL + Tr), where TL and Tr are the overall running times of all oracle calls. However, by exploiting the consecutive structure of the solution, we can do even better.

3.2.3 An implementation with improved running time

We now present an improved implementation of the two-phase greedy algorithm, which

achieves subquadratic running time for all non-oracle operations. The main idea of the

improvement is based on the fact that the dual solution returned by the algorithm forms

a chain in a consecutive lattice, allowing a very compact (linearly sized) encoding of the solution and faster processing of all non-oracle operations in the algorithm.

The basic version of the algorithm expects the lattice to be given by an oracle that

returns the maximum element of the restriction of L to a given set S ⊆ E (cf. Example

3.16). However, to achieve the possibility of sub-quadratic running time, the oracle is

from now on not expected to give its return value explicitly, but rather as the symmetric

difference to the previously returned element. Consequently, the improved algorithm will

not return the optimal dual solution explicitly, but as a list of set differences that can

be used to reconstruct the elements in the support of the optimal dual solution along

with the values for those elements. The idea behind this is that the overall length of the

difference lists is linearly bounded by |E|, as we shall see later, allowing us to perform

most of the needed operations in overall linear time. In order to determine the bottleneck

element efficiently, the elements of the current set are stored in a heap and an offset value

is maintained to avoid the necessity of updating the residual capacity of all elements in

the heap in every iteration. See Algorithm 3.21 for a pseudo-code listing of the improved

implementation.

Equivalence of the implementations

We will now show that the improved implementation of the two-phase greedy algorithm

(Algorithm 3.21) comes to exactly the same decisions as the basic implementation (Algorithm 3.17). We start by establishing that Phase 1 actually constructs a chain in the

same way as the basic version does.

Lemma 3.22. Let L+1, . . . , L+k, L−1, . . . , L−k, e1, . . . , ek be as computed by Algorithm 3.21 and let Mi ⊆ E be the set of elements that is contained in the heap in iteration i of Phase


Algorithm 3.21 (Two-phase greedy algorithm (improved implementation)).

Input: a consecutive, submodular lattice L ⊆ 2^E \ {∅} (given by an oracle that returns max L[S] for S ⊆ E), a supermodular, monotone increasing function r : L → R+ (given by an oracle), c ∈ R^E

Output: an optimal solution x of (C), an optimal solution y of (P) (alternatively encoded, see text)

1: // Phase 1
2: Initialize k = 0, h = 0, M = ∅ and let H be an empty heap with capacity |E|.
3: while L[E \ {e1, . . . , ek}] ≠ ∅ do
4:    k ← k + 1
5:    Obtain L+k = M′ \ M and L−k = M \ M′ for M′ = max L[E \ {e1, . . . , ek−1}].
6:    M ← (M ∪ L+k) \ L−k
7:    for all e ∈ L−k do Remove e from H.
8:    for all e ∈ L+k do Insert e into H with key c(e) + h.
9:    ek ← getMinimumElement(H)
10:   yk ← getMinimumKey(H) − h
11:   h ← h + yk
12: end while
13:
14: // Phase 2
15: Initialize x = 0, δ = 0.
16: for i = k downto 1 do
17:   x(ei) ← r(M) − δ
18:   δ ← δ + x(ei)
19:   for all e ∈ L+i do δ ← δ − x(e)
20:   M ← (M ∪ L−i) \ L+i
21: end for
22:
23: return (x, (L+, L−, y))
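In Python, the heap with removals can be emulated with heapq plus lazy deletion. The sketch below follows Algorithm 3.21's offset trick (key = capacity plus h at insertion time) and assumes a stateful difference oracle that yields (L+, L−) relative to its previous answer, or None when the restricted lattice is empty; all names are illustrative.

```python
import heapq

def two_phase_greedy_fast(E, diff_oracle, r_oracle, c):
    """Sketch of Algorithm 3.21. `diff_oracle(removed)` returns the pair
    (L_plus, L_minus) describing the new maximum element relative to the
    previously returned one, or None if the restricted lattice is empty.
    Heap removals (line 7) are emulated by lazy deletion; the offset h makes
    key - h equal to an element's residual capacity."""
    heap, h = [], 0
    M, removed, dead = set(), set(), set()
    Lp, Lm, bottlenecks, y = [], [], [], []
    while True:
        diff = diff_oracle(removed)
        if diff is None:
            break
        plus, minus = diff
        M |= set(plus); M -= set(minus)
        dead |= set(minus)                         # lazy removal from the heap
        for e in plus:
            heapq.heappush(heap, (c[e] + h, e))    # key = capacity + offset
        while heap[0][1] in dead or heap[0][1] in removed:
            heapq.heappop(heap)                    # discard stale entries
        key, ek = heap[0]
        y.append(key - h)                          # dual value of this set
        h = key                                    # h <- h + y_k
        Lp.append(set(plus)); Lm.append(set(minus))
        bottlenecks.append(ek)
        removed.add(ek)
    # Phase 2: walk the chain backwards, undoing the difference lists.
    x, delta = {e: 0 for e in E}, 0
    for i in reversed(range(len(bottlenecks))):
        x[bottlenecks[i]] = r_oracle(M) - delta
        delta += x[bottlenecks[i]]
        for e in Lp[i]:
            delta -= x[e]
        M |= Lm[i]; M -= Lp[i]
    return x, (Lp, Lm, y)
```

Lazy deletion is only one possible realization of the removal operation in line 7; a heap with explicit handles would work equally well.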


1 after execution of line 8. Then

Mi = (⋃_{j=1}^{i} L+j) \ (⋃_{j=1}^{i} L−j) = max L[E \ {e1, . . . , ei−1}]

for i ∈ {1, . . . , k}.

Proof. By induction on i we can easily check that Mi = (Mi−1 ∪ L+i) \ L−i (with M0 = ∅). Furthermore, M = Mi−1 at the beginning of iteration i and so max L[E \ {e1, . . . , ei−1}] = (Mi−1 ∪ L+i) \ L−i = Mi.

Now, M1 ⪰ . . . ⪰ Mk by the same arguments as used for the basic implementation. Thus, by consecutivity, e ∈ L−i implies e ∉ L+j for j > i, and we can write

Mi = ((((L+1 \ L−1) ∪ . . .) \ . . .) ∪ L+i) \ L−i = (⋃_{j=1}^{i} L+j) \ (⋃_{j=1}^{i} L−j).

Now we only need to show that the improved implementation chooses the same bottleneck elements as the basic version. The main insight is that, for every element in the heap, the difference of its key and the current offset value is equal to its residual capacity. The intuitive reason for this is the following: the offset value h stores at any point in time the value of the current dual solution. An element e ∈ E has to carry exactly the amount by which h is increased while e is in the heap (e.g., think again of flow computations). As the key stored with the element is the sum of its capacity and the value of h when it entered the heap, the residual capacity can be obtained by subtracting the current value of h.

Theorem 3.23. Let (x, (L+, L−, y)), e1, . . . , ek be as computed by Algorithm 3.21 and M1, . . . , Mk be as in Lemma 3.22. Let (x′, y′), e′1, . . . , e′k′, M′1, . . . , M′k′ be the results computed by Algorithm 3.17 on the same input. Then k = k′,

Mi = M′i,  y′(Mi) = yi   ∀i ∈ {1, . . . , k},

and x = x′.

Proof. Suppose we run both the basic and the improved implementation simultaneously and consider iteration i of Phase 1, immediately before the elements ei and e′i are chosen (line 8 of Algorithm 3.21 and line 6 of Algorithm 3.17). We show by induction on i that then Mi = M′i and that for every element e ∈ Mi the key γ(e) stored in the heap fulfills γ(e) − h = c̄(e), and thus ei = e′i. This is clearly true for i = 1 with M1 = max L = M′1 and h = 0, γ(e) = c(e) = c̄(e) for all e ∈ M1. Now let i > 1. In the previous iteration, ei−1 = e′i−1 has been chosen and so Mi = M′i. Furthermore, at the end of the previous iteration h has been increased by γ(ei−1) − h = c̄(ei−1) by induction hypothesis, while c̄(e) has been decreased by the same value for all e ∈ Mi−1. Thus, all elements e ∈ Mi−1 ∩ Mi fulfill γ(e) − h = c̄(e). Moreover, every element e ∈ Mi \ Mi−1 is newly inserted into the heap with γ(e) − h = c(e) = c̄(e), as it has not occurred in the algorithm before.

So ei = e′i and thus Mi = M ′i in every iteration and the results of Phase 1 of the

improved implementation and of the basic implementation match.

We now show that in the iteration with index i of Phase 2 of the improved implementa-

tion x(ei) = r(Mi)−∑

j>i:ej∈Mix(ej) is set. To see this, we prove the following invariant:

At the beginning of each iteration (with index i) of the loop in Phase 2, we have δ =∑j>i:ej∈Mi

x(ej) and M = Mi. This is true at the beginning, where i = k, M = Mk (from

the end of Phase 1) and δ = 0. To see that it holds for i < k, consider the previous iteration

(index i + 1). Here δ is set to∑

j>i+1:ej∈Mi+1x(ej) + x(ei+1) −

∑j≥i+1:ej∈Mi+1\Mi

x(ej),

which is equal to∑

j>i:ej∈Mix(ej) by (3) of Lemma 3.19. Furthermore, M is set to

(Mi+1 \ L+i+1) ∪ L−i+1 = (Mi+1 \ (Mi+1 \Mi)) ∪ (Mi \Mi+1) = Mi.

Running time analysis

We can now deduce the running time of the algorithm with relative ease. The main insight

is that, due to the consecutive structure of the lattice, the overall size of the set differences

is bounded linearly.

Lemma 3.24. The sets L+1, . . . , L+k are pairwise disjoint. The sets L−1, . . . , L−k are pairwise disjoint.

Proof. Assume by contradiction there is an element e ∈ E with e ∈ L+i ∩ L+j for i < j. Then e ∈ Mi ∩ Mj but e ∉ Mj−1, a contradiction to the consecutivity of L. Analogously, there can be no element e ∈ E with e ∈ L−i ∩ L−j.

This in particular implies that any of the lattice sets in support(y) can be constructed

in linear time from the output of the improved implementation. We will use Lemma 3.24

to show that all instructions within the loops can be carried out in amortized logarithmic

and some even in constant time thanks to our use of set differences. Note that this would

not have been possible if we had stored each element in support(y) explicitly (possibly

requiring quadratic space and time).
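Concretely, rebuilding the chain from the difference lists is a simple scan; the sketch below uses illustrative names.

```python
def reconstruct_chain(L_plus, L_minus):
    """Rebuild the chain M1, ..., Mk in support(y) from the difference lists.
    By Lemma 3.24 every element enters and leaves at most once, so the
    incremental work is linear in |E|; any single Mi is thus obtained in
    linear time, while materializing all of them costs their total size."""
    M, chain = set(), []
    for plus, minus in zip(L_plus, L_minus):
        M |= set(plus)
        M -= set(minus)
        chain.append(frozenset(M))
    return chain
```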

Theorem 3.25. The two-phase greedy algorithm for submodular lattices (Algorithm 3.21)

has a running time of O(|E| log(|E|) + TL + Tr), with TL being the overall running time of

all calls of the oracle for L and Tr being the overall running time for all calls of the oracle

for r.

Proof. Initialization can be done in O(|E|). After at most |E| iterations of the outer loop

of Phase 1 the sub-lattice under consideration is empty and the loop terminates. The

instructions in lines 4, 9, 10 and 11 can be performed in constant time. Line 5 is an oracle

call. For the remaining lines 6 to 8 of Phase 1 we apply amortized analysis: Line 6 can be performed in amortized constant time, as every element of E enters and leaves M at

most once by Lemma 3.24. By the same argument, the overall number of heap insertions

and removals is bounded by |E|. As any insertion or removal takes time O(log(|E|)), this

is also the amortized time for these two lines. Thus Phase 1 can be performed in time

O(|E| log(|E|) + TL).

As k ≤ |E|, the loop in Phase 2 iterates at most |E| times. Line 17 can be done in

constant time except for the oracle call. Line 18 can be done in constant time. Lines 19

and 20 can be done in amortized constant time by Lemma 3.24.

As there is an oracle call in every iteration of Phase 1 and 2, respectively, one might

assume these calls will largely dominate the running time of the other instructions in the

algorithm. However, a closer look reveals that the oracle for L only has to identify the

change of the maximum set after removing one of the elements of the previous maximum

set from the ground set. If we do not assume that the incarnation of the oracles is

“oblivious”, i.e., if we are able to access data computed in previous calls, there is good

reason to hope for an amortized logarithmic or even constant running time per iteration

for most problems we cast into the covering/packing framework.

An example: shortest paths

We now apply the two-phase greedy algorithm to the shortest s-t-path problem introduced

earlier (Example 3.14). In this case, the algorithm resembles the behaviour of Dijkstra’s

well-known algorithm [6].

Let G = (V,E) be a graph with s, t ∈ V and let L be the s-t-cut lattice of G. For

simplicity, we invert the order of the lattice (this causes no trouble as r ≡ 1 is constant,

and as such both monotone increasing and decreasing).

In Phase 1 the algorithm constructs a sequence of s-t-cuts M1, . . . , Mk, which we can identify with their inducing vertex sets S1 ⊂ . . . ⊂ Sk. The algorithm starts with the maximum element of the lattice, the leftmost cut with S1 = {s}. In iteration i of Phase 1, it chooses a dart di with tail(di) ∈ Si and head(di) ∉ Si. The subsequent cut then is induced by Si+1 = Si ∪ {head(di)}. Thus, in every iteration, the algorithm adds a vertex vi := head(di) to the s-side of the cut. Consequently, L+i+1 = {d ∈ ←→E : tail(d) = vi, head(d) ∉ Si} and L−i+1 = {d ∈ ←→E : tail(d) ∈ Si, head(d) = vi}. These sets can easily be computed in amortized constant time.

At the end of every iteration of Phase 1 the following three invariants hold: the darts d1, . . . , di form an arborescence rooted at s containing all vertices in Si, the unique s-v-path in this arborescence is the shortest s-v-path in G, and after iteration i, the length of the shortest s-vi-path is stored in h. All of this is clearly true after the first iteration. In any following iteration, the key inserted with each new dart d into the heap is the length of a shortest s-tail(d)-path plus the cost of d, which is the same as the length of a shortest s-head(d)-path using the dart d. Thus, the choice of di as minimum element of the heap ensures that the s-head(di)-path in the arborescence is the shortest among all paths from s to some vertex on the other side of the cut containing only the vertices v1, . . . , vi. However, any other s-head(di)-path using arbitrary vertices cannot be shorter either, as it has to leave Si via a longer path first, and the non-negative costs can only increase its length afterwards. Hence, it is the shortest s-head(di)-path. At the end of the iteration, h is set to the length of this path, and the invariant is maintained.

We now show that the solution x constructed in Phase 2 is the incidence vector of the shortest s-t-path in the arborescence. As r ≡ 1, x is a 0-1-vector, and, as its support intersects all s-t-cuts, it contains the unique s-t-path P ⊆ support(x) ⊆ {d1, . . . , dk}. Let i ∈ {1, . . . , k} with di ∉ P. As P crosses all s-t-cuts, it crosses Mi and there is an l with dl ∈ Mi ∩ P. In particular, x(dl) = 1, as dl ∈ P, and i < l, as d1, . . . , di−1 ∉ Mi. Thus, x(di) = 1 − ∑_{j>i: dj∈Mi} x(dj) ≤ 0 and di ∉ support(x). So support(x) is exactly the unique shortest s-t-path in the arborescence.

As we can perform all oracle operations in overall linear time, the running time of the algorithm is O(|←→E| log(|←→E|)), which is the same as O(|E| log(|V|)). The most efficient known implementation of Dijkstra's algorithm has a running time of O(|E| + |V| log(|V|)), which is achieved by using a so-called Fibonacci heap – a data structure that can perform the operation of decreasing an element's key in amortized constant time (cf. Section 6.1 of [25]). Moreover, note that the correctness of the algorithm heavily depends on the non-negativity of c.
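Stripped of the lattice machinery, the specialization just described is the familiar heap-based Dijkstra algorithm. A compact sketch for comparison, with illustrative names and lazy deletion in place of a decrease-key operation:

```python
import heapq

def dijkstra(succ, cost, s):
    """Heap-based Dijkstra, mirroring Phase 1 on the inverted cut lattice:
    popping the minimum key corresponds to choosing the bottleneck dart, and
    the key plays the role of c(d) + h. Assumes non-negative costs."""
    dist, parent = {s: 0}, {s: None}
    heap = [(0, s)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue                     # stale heap entry (lazy deletion)
        for w in succ.get(v, ()):
            nd = d + cost[(v, w)]
            if nd < dist.get(w, float("inf")):
                dist[w], parent[w] = nd, v
                heapq.heappush(heap, (nd, w))
    return dist, parent
```

The parent pointers encode the shortest-path arborescence rooted at s, just like the darts d1, . . . , dk above.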

Remark 3.26. We can use the procedure described above to compute the shortest path

from a vertex s to every vertex in one go by adding an artificial vertex t and an edge

from s to t with cost larger than the sum of all other dart costs. Then the algorithm

computes the length of all shortest s-v-paths before it considers the edge from s to t. We

can also write down the length of the shortest s-v-path for every vertex at the moment

when it is added to the s-side of the cut. We then obtain a shortest path potential Φ,

fulfilling Φ(tail(d)) + c(d) ≥ Φ(head(d)) for all d ∈←→E . We will see later that a shortest

path potential in the dual of an s-t-plane graph can be used to solve the maximum flow

problem on the primal graph.


4 The path lattice and maximum flow in planar graphs

4.1 The left/right relation and the path lattice of a plane graph

Ford and Fulkerson showed how to exploit the special structure of s-t-planar graphs for

maximum flow computation when they first proposed a special version of their path augmenting algorithm, which is now known as the uppermost path algorithm [10]. Based on

a dominance relation for circulations introduced by Khuller, Naor and Klein [22], Weihe

[35] presented the “is left of” relation for s-t-flows on a plane graph. This was again

specialized by Klein [23] to obtain a partial order on the set of s-t-paths that finally led

to a straightforward generalization of the uppermost path algorithm to planar graphs in

general [3].

In this section, we will present an intuitive characterization of the left/right relation

for the special case of s-t-plane graphs based on the uppermost path idea of Ford and

Fulkerson and show that it actually yields a consecutive and submodular lattice. We then

extend this result by showing that the left/right relation induces a submodular (but not

consecutive) lattice in plane graphs in general. Finally, we will show that several intuitive

properties of the path lattice in s-t-plane graphs no longer hold in general plane graphs.

Remark 4.1. The left/right relation we define in this section clearly depends on the

embedding of the planar graph. For this reason we cannot talk about the left/right relation

in an (s-t-)planar graph but will most of the time refer to an (s-t-)plane graph, meaning a

particular embedding of that graph. Note that some special results on the path lattice in s-t-plane graphs no longer hold if the relation is defined on an embedding that is not

s-t-planar – even if there exists an s-t-planar embedding of the same graph.

Assumption. Throughout this section, let G = (V,E) be a graph, s, t ∈ V and π be a

planar embedding of G such that t is adjacent to f∞. Furthermore, let P be the set of all

simple s-t-paths in G.

4.1.1 The left/right relation

The left/right relation goes back to Weihe [35] and Klein [23]. It yields useful applications

for shortest path and maximum flow computations in planar graphs (cf. [23] and [3], respectively) and is based on the face potentials introduced in Definition 2.36 and the fact

that the “difference” of two paths is a circulation.

Lemma 4.2. Let P,Q ∈ P. Then δP − δQ ∈ Scycle(G).

Proof. Interpret δP , δQ as two s-t-flows of value 1. Then δP − δQ is an s-t-flow of value 0

by Lemma 2.53 and thus a circulation.

Definition 4.3 (Left/right relation). Let P, Q ∈ P. If Φ(δP − δQ) ≥ 0, we say that P is left of Q and write P ⪰ Q. If Φ(δP − δQ) ≤ 0, we say that P is right of Q and write P ⪯ Q.

It is easy to verify that this relation indeed is a partial order on the set of simple

s-t-paths.

Lemma 4.4. The relation ⪰ is a partial order on P.

Proof. We show reflexivity, anti-symmetry and transitivity. Let P, Q, R ∈ P. Note that P = Q if and only if δP = δQ, as a path does not contain anti-parallel darts.

Reflexivity: Φ(δP − δP) = Φ(0) = 0.

Anti-symmetry: If P ⪰ Q and Q ⪰ P, then Φ(δP − δQ) = 0, implying δP − δQ = 0. Thus, P = Q.

Transitivity: Assume P ⪰ Q ⪰ R. Then Φ(δP − δR) = Φ(δP − δQ) + Φ(δQ − δR) ≥ 0 and thus P ⪰ R.

An example of paths for which the left/right relation holds or does not hold, respectively,

is given in Figure 4.1. Two further examples, each including a complete enumeration of

all simple s-t-paths of a graph and a Hasse diagram of the partial order on that graph,

are provided later in Figure 4.2 in Subsection 4.1.2 and Figure 4.5 in Subsection 4.1.4.

Remark 4.5. When analyzing a circulation and its face potential, we can ignore edges

that are not in the support of the circulation. The potential is equal on both sides of

such an edge, so removing it from the graph – which is contracting it in the dual graph by

Theorem 2.43 – yields a subgraph, on which we basically can apply the same face potential.

In particular, P ⪰ Q in G if and only if P ⪰ Q in every subgraph of G containing P and

Q.

The left/right relation also is a partial order on circulations. In [22], Khuller, Naor

and Klein showed that it induces a lattice on the set of circulations by defining the meet

and join of two circulations to be the circulation induced by the componentwise minimum

and maximum of their face potentials, respectively. Meet and join even respect the same upper and lower capacity bounds as the original circulations. However, this idea cannot be directly applied to paths, as a path itself does not have a face potential. To

our knowledge, no research on lattices on paths has been published so far.


Figure 4.1: Two examples for the left/right relation on simple s-t-paths with face potential φ := Φ(δP − δQ). (a) P is left of Q, as φ(f) ≥ 0 for all f ∈ V ∗. Note that one dart cancels out in the circulation as it is used by both P and Q. (b) P and Q are not comparable. Note that one of the darts occurs twice in the circulation, as it is used by P and its reverse is used by Q.

4.1.2 The uppermost path and the path lattice of an s-t-plane graph

Intuitively speaking, the uppermost path of an s-t-plane graph is the s-t-path forming its

“upper” boundary in a drawing where s is on the very left and t is on the very right of

the drawing. We will give a definition of this uppermost path in combinatorial terms and

use it to characterize the left/right relation in s-t-plane graphs. This yields the desired

lattice properties of the partial order.

Assumption. Throughout this subsection we assume that the embedding of G is s-t-

planar.

Theorem and Definition 4.6 (Uppermost path, lowermost path). There is a unique

path U ∈ P such that left(d) = f∞ for all d ∈ U . It is called the uppermost path of G. There also is a unique path L ∈ P such that right(d) = f∞ for all d ∈ L. This path is called the lowermost path of G.

Proof. By s-t-planarity, there must be a dart ds ∈ s ∩ f∞ and a dart dt ∈ t ∩ f∞. The sequence ds, π∗(ds), . . . , (π∗)−1(dt) is an s-t-path with right(d) = f∞ for all darts it contains.

Choose L as a simple sub-path of this path, which exists by Lemma 2.14. Similarly, the

sequence dt, π∗(dt), . . . , (π∗)−1(ds) is a t-s-path with right(d) = f∞, and thus the reverse

path is an s-t-path with left(d) = f∞. Choose U as a simple sub-path of this path.

Suppose by contradiction there are two paths U1, U2 ∈ P with the uppermost path

property. Then consider the circulation δU1 − δU2 . Its support contains a simple cycle C,

which contains at least one dart d1 ∈ U1 and one dart d2 ∈ rev(U2), as neither of the

paths contains a cycle on its own. Thus left(d1) = f∞ and right(d2) = left(rev(d2)) = f∞,

a contradiction. By the same argument the lowermost path is unique.


The proof given for Theorem 4.6 yields a linear-time algorithm to construct the uppermost path when combined with the constructive proof of Lemma 2.14. Starting with dt, we traverse the boundary of f∞ in reverse, skipping any cycles on the path, until we

encounter a dart leaving s. This procedure is also known as right first search. We will

discuss it later in more detail. Furthermore, note that the uppermost path property is

naturally inherited on subgraphs containing the uppermost path.
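The cycle-skipping step of the right first search just mentioned can be sketched as follows. This is an illustrative, assumption-laden sketch, not the thesis' own pseudocode: it presumes the boundary walk of f∞ from s towards t has already been read off from the embedding, and only shows how a stack extracts the simple sub-path guaranteed by Lemma 2.14.

```python
# A hedged sketch of the cycle-skipping step from the constructive proof of
# Lemma 2.14 (names are illustrative, not the thesis' pseudocode). We assume
# the boundary walk of the infinite face from s towards t has already been
# read off from the embedding; the stack then extracts the simple sub-path.

def simple_subpath(walk, s, t):
    """Return the simple s-t-path hidden in a vertex-continuous walk."""
    stack, pos = [], {s: 0}   # pos[v]: stack index where darts after v start
    for (u, v) in walk:
        if v in pos:          # this dart closes a cycle back to v: drop it
            for (_, y) in stack[pos[v]:]:
                del pos[y]
            del stack[pos[v]:]
        else:
            stack.append((u, v))
            pos[v] = len(stack)
        if v == t:
            break
    return stack

# A boundary walk containing the detour a -> b -> a, which gets skipped.
walk = [("s", "a"), ("a", "b"), ("b", "a"), ("a", "t")]
assert simple_subpath(walk, "s", "t") == [("s", "a"), ("a", "t")]
```

Each vertex is pushed at most once after its final visit, so the extraction runs in time linear in the length of the walk.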

Corollary 4.7. If U is the uppermost path of G, then U is the uppermost path of any

connected subgraph of G that contains all edges of U . If L is the lowermost path of G,

then L is the lowermost path of any connected subgraph of G that contains all edges of L.

The uppermost path of a graph is of particular interest for maximum flow computations.

This was already pointed out by Ford and Fulkerson when they proved that what they called the “top-most chain” crosses every simple s-t-cut exactly once [10]. Their proof

is based on geometric observations. We give an alternative proof using cycle/cut duality.

Lemma 4.8. Let C be a simple s-t-cut. There is exactly one dart dl ∈ C with left(dl) = f∞

and exactly one dart dr ∈ C with right(dr) = f∞.

Proof. As the uppermost and lowermost path both cross C, two darts dr and dl with the

desired properties exist. By cycle/cut duality, C is a simple cycle in G∗. In particular,

there is only one occurrence of f∞ as head and one occurrence as tail of a dart in this

simple cycle. This implies the uniqueness of dr and dl.

An immediate consequence of the above observation is that the uppermost and lowermost paths do not contain anti-parallel darts. We call this important structural insight

the orientation lemma.

Lemma 4.9 (Orientation lemma). Let U be the uppermost path of G and L be the lowermost path of G. If d ∈ U , then rev(d) /∈ L.

Proof. Assume by contradiction there is a dart d in U with rev(d) ∈ L. Let T ⊆ E

be a spanning tree containing U . Then G[T \ E({d})] partitions the vertex set into two

connected components, a set S containing s and the set V \ S containing t. Thus, Γ+(S)

is a simple s-t-cut in G that contains d. However, as L starts at s ∈ S, it must cross the

cut once before it uses rev(d) to go back from V \S to S, and cross it a second time before

it ends at t ∈ V \ S, a contradiction to Lemma 4.8.

Many results concerning uppermost paths make use of another simple observation, which

we shall state in the following lemma, called the bridge lemma for obvious reasons.

Lemma 4.10 (Bridge lemma). Let d ∈←→E . If left(d) = right(d), then d is either contained

in all simple s-t-paths or in none.


Proof. Since d is a loop in the dual graph (and thus a simple cycle in particular), it forms

a simple cut in G, i.e., {d} = Γ+(S) for some S ⊆ V . Assume d ∈ P for some P ∈ P.

The path crosses the cut from S to V \ S, but it cannot go back as P is simple and the

only dart leaving V \ S is rev(d). So P starts in S and ends in V \ S, implying s ∈ S and

t ∈ V \ S. Thus, every s-t-path has to use d.

This suffices to characterize the left/right relation in an s-t-plane graph in terms of the

uppermost path property.

Theorem 4.11. Let P,Q ∈ P. Then the following statements are equivalent.

(1) P is the uppermost path in G[E(P ∪Q)].

(2) Q is the lowermost path in G[E(P ∪Q)].

(3) P is left of Q.

Proof. We can assume that φ := Φ(δP − δQ) is a potential in G[E(P ∪Q)] by Remark 4.5.

(1)⇔ (2) : Suppose P is the uppermost path of G[E(P ∪ Q)]. We show that then the

lowermost path L of G[E(P ∪ Q)] only uses edges of E(Q) and is thus equal to Q.

Let d ∈ L. If d belongs to an edge of E(P ), the orientation lemma ensures that

d ∈ P . Consequently, d ∈ P ∩L, and thus leftG[E(P∪Q)](d) = f∞ = rightG[E(P∪Q)](d),

implying d ∈ Q by the bridge lemma. Hence E(L) ⊆ E(Q), and as the two paths are

simple, they are equal. The converse follows by symmetry.1

(2)⇒ (3) : Suppose Q is the lowermost path of G[E(P ∪Q)] and thus P is its uppermost

path. Let f be a face of G[E(P ∪ Q)] and let d ∈ f , i.e., right(d) = f . If rev(d) ∈ P or

d ∈ Q, then f = right(d) = f∞, implying φ(f) = 0. Otherwise, d ∈ P or rev(d) ∈ Q,

implying left(d) = f∞ and φ(f) = φ(f∞) + δP (d)− δQ(d) ≥ 0. Thus, φ ≥ 0.

(3)⇒ (1) : Suppose φ ≥ 0. Let U be the uppermost path of G[E(P ∪Q)] and d ∈ U . By

the orientation lemma, rev(d) /∈ P and rev(d) /∈ Q. So d ∈ P or d ∈ Q. But d ∈ Q \ P is not possible, as δP (d) − δQ(d) = φ(right(d)) − φ(f∞) ≥ 0. So d ∈ P for all d ∈ U ,

i.e., U = P .

Thus, the left/right order in s-t-plane graphs is in fact an uppermost/lowermost path

order. Before we can show that this indeed yields a lattice, we need a final auxiliary result,

which states that we can add a path to a subgraph without changing its uppermost path,

as long as there already is a path left of the path we add.

1 This equivalence implies that no path P can use a reverse dart of the uppermost path U , as P is the

lowermost path in G[E(U ∪P )]. From now on, we will implicitly use this stronger result when referring

to the orientation lemma.


Lemma 4.12. Let E′ ⊆ E be an edge set such that G[E′] is connected, and let P ∈ P be an s-t-path with E(P ) ⊆ E′. Let Q ∈ P. If Q is left of P , then the uppermost path of G[E′ ∪ E(Q)] and the uppermost path of G[E′] are equal.

Proof. Let U be the uppermost path of G[E′] and U ′ be the uppermost path of G[E′ ∪ E(Q)]. Assume by contradiction that U ≠ U ′. As U ′ cannot be contained in G[E′] (otherwise it would be the uppermost path of this graph), it uses an edge in E(Q) \ E′, and by the orientation lemma, it even uses the same dart d of the edge that is used by Q. For this dart, leftG[E′∪E(Q)](d) = f∞, and hence leftG[E(P∪Q)](d) = f∞, as E(P ∪ Q) ⊆ E′ ∪ E(Q). But as Q is the lowermost path in G[E(P ∪Q)], also rightG[E(P∪Q)](d) = f∞. Thus, d ∈ P by the bridge lemma, a contradiction, as d was chosen as a dart not in G[E′].

Finally, we can show the existence of a consecutive and submodular path lattice in an

s-t-plane graph. Since most of the work has been done in the preceding lemmas, we can

prove the result with relative ease.

Theorem 4.13. (P,�) is a consecutive and submodular lattice with P ∧ Q being the

lowermost path in G[E(P ∪Q)] and P ∨Q being the uppermost path in G[E(P ∪Q)].

Proof. We first show that meet and join can indeed be defined as claimed. Then we deduce

consecutivity and submodularity. Let P,Q,R ∈ P.

Meet and join: Let U be the uppermost path in G[E(P ∪ Q)]. Then P,Q � U . Let

U ′ ∈ P with P,Q � U ′. Then U ′ is the uppermost path of G[E(P ∪ U ′) ∪ E(Q)]

by Lemma 4.12. As U is contained in this graph, U � U ′. Thus, U is the least

upper bound on P and Q with respect to �. By symmetry, the lowermost path of

G[E(P ∪ Q)] is the greatest lower bound on P and Q. So lowermost and uppermost

path of G[E(P ∪Q)] define meet and join of the lattice.

Consecutivity: Suppose P � Q � R. Then P is the lowermost path and R is the

uppermost path of G′ := G[E(P ∪ R) ∪ E(Q)] by Lemma 4.12. Thus, rightG′(d) =

f∞ = leftG′(d) for all d ∈ P ∩R. By the bridge lemma, this implies d ∈ Q.

Submodularity: As we have proven consecutivity, it suffices to show P∧Q,P∨Q ⊆ P∪Q.

This immediately follows from the definition of P ∧ Q and P ∨ Q as lowermost and

uppermost path of G[E(P ∪Q)] and the orientation lemma.

An example of a graph and its path lattice is depicted in Figure 4.2. Our result implies

total dual integrality of the maximum flow problem on s-t-planar graphs, even when we

introduce supermodular weights on the paths (cf. Section 4.3). We will show later that

applying the two-phase greedy algorithm on the lattice yields an implementation of the

uppermost path algorithm that solves the maximum flow problem in O(|V | log(|V |)) time, also for the case of supermodular and monotone increasing path weights (cf. Algorithm

4.23).
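The idea behind the uppermost path algorithm can be sketched in a few lines. The code below is a hedged, naive illustration (quadratic-time, not the O(|V | log(|V |)) implementation of Algorithm 4.23); the graph, capacities, and vertex names are invented, and we assume the adjacency lists enumerate out-darts "topmost first", so a DFS preferring earlier entries finds the uppermost path of the remaining unsaturated graph.

```python
# Hedged sketch of the uppermost path algorithm for maximum flow
# (Ford/Fulkerson's "top-most chain" idea), NOT the O(|V| log(|V|))
# implementation referenced as Algorithm 4.23. Assumptions: the graph,
# capacities and vertex names are invented; `adj` lists the out-darts of
# every vertex "topmost first", so a DFS preferring earlier entries finds
# the uppermost path of the remaining (unsaturated) graph.

def find_path(adj, cap, s, t):
    """Leftmost DFS over darts with positive remaining capacity."""
    stack, path, visited = [s], [], {s}
    while stack:
        u = stack[-1]
        if u == t:
            return path
        for v in adj.get(u, []):
            if cap[(u, v)] > 0 and v not in visited:
                visited.add(v)
                stack.append(v)
                path.append((u, v))
                break
        else:                      # dead end: backtrack
            stack.pop()
            if path:
                path.pop()
    return None

def uppermost_flow(adj, cap, s, t):
    """Repeatedly saturate the bottleneck of the uppermost path."""
    total = 0
    while (path := find_path(adj, cap, s, t)) is not None:
        bottleneck = min(cap[d] for d in path)
        for d in path:
            cap[d] -= bottleneck
        total += bottleneck
    return total

adj = {"s": ["a", "b"], "a": ["t", "b"], "b": ["t"]}
cap = {("s", "a"): 2, ("s", "b"): 1, ("a", "t"): 1, ("a", "b"): 1, ("b", "t"): 2}
max_flow = uppermost_flow(adj, dict(cap), "s", "t")
assert max_flow == 3   # equals the capacity of the cut around s
```

Because each saturated path is the uppermost one of what remains, no residual (backward) arcs are ever needed on s-t-plane graphs; this is what the fast implementation later exploits.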


Figure 4.2: An s-t-plane graph and the lattice of its simple s-t-paths.2 The paths P2 and P3 are not comparable, but P2 ∧ P3 = P4 and P2 ∨ P3 = P1. Note that this example also shows that the uppermost path lattice is not modular in general.

4.1.3 The path lattice of a general plane graph

We will now show that the left/right relation also defines a lattice in general plane graphs.

The proof will however require significantly more effort in this case. Our results on s-t-

plane graphs suggest that meet and join might be the minimum (rightmost) and maximum

(leftmost) path3 in G[E(P ∪ Q)]. However, this does not always yield a maximum lower

and minimum upper bound in non-s-t-planar embeddings, as Figure 4.3 shows. The reason

for P not being left of Q in Figure 4.3 is that P passes the face adjacent to t on the right.

We could correct this by subtracting the “negative part” of the potential φ from P , i.e.,

the counterclockwise boundary of the face with potential −1. By doing so, we actually

obtain the join in our example. Unfortunately, the method just described does not yield

a simple s-t-path in general. Still we can show that we always obtain a set of darts D∨ in

this way and we will see later that this set actually contains a path that is the join of P

and Q. We formalize this idea in the following lemma.

Lemma 4.14. Let P,Q ∈ P and φ := Φ(δP − δQ).

• Let S+ := {f ∈ V ∗ : φ(f) > 0} and δ∧ := δP − ∑f∈S+ φ(f)δf . Then δ∧ ∈ {−1, 0, 1}E and D∧ := D(δ∧) ⊆ P ∪ Q.

• Let S− := {f ∈ V ∗ : φ(f) < 0} and δ∨ := δP − ∑f∈S− φ(f)δf . Then δ∨ ∈ {−1, 0, 1}E and D∨ := D(δ∨) ⊆ P ∪ Q.

2 The partial order is visualized by a so-called “Hasse diagram”. For every element P ∈ P there is a vertex vP . Two vertices vP , vQ are connected by an edge if P ≺ Q, with the larger element being drawn more “on top” and edges that are implied by transitivity being omitted.

3 Under the premise that these exist.


Figure 4.3: The depicted graph shows that the least common upper bound of two paths P and Q is not necessarily the leftmost path of G[E(P ∪ Q)] if the embedding of the graph is not s-t-planar.

Proof. We only show the first statement, the second one follows analogously.

Let d ∈ ←→E and r := right(d), l := left(d). We show that δ∧(d) ∈ {−1, 0, 1} and that δ∧(d) = 1 implies d ∈ P ∪ Q. Note that

∑f∈S+ φ(f)δf (d) = ∑f∈V ∗ 1S+(f)φ(f)δf (d) = 1S+(r)φ(r) − 1S+(l)φ(l).

Consider the four possible cases:

(1) l, r ∈ S+: We have δ∧(d) = δP (d) − (φ(r) − φ(l)) = δP (d) − (δP − δQ)(d) = δQ(d) ∈ {−1, 0, 1}.

(2) l ∈ S+, r /∈ S+: In this case δ∧(d) = δP (d)+φ(l). On the one hand, δP (d)+φ(l) > −1

as δP (d) ≥ −1 and φ(l) > 0. On the other hand, δP (d)+φ(l) ≤ δP (d)−(φ(r)−φ(l)) =

δP (d)− (δP − δQ)(d) = δQ(d) ≤ 1, as φ(r) ≤ 0. So δ∧(d) ∈ {0, 1}, and, in particular,

δ∧(d) = 0 if d /∈ Q.

(3) r ∈ S+, l /∈ S+: This is equivalent to case (2) holding for rev(d). In particular

δ∧(d) = −δ∧(rev(d)) ∈ {−1, 0}.

(4) l, r /∈ S+: We have ∑f∈S+ φ(f)δf (d) = 0, and hence δ∧(d) = δP (d) ∈ {−1, 0, 1}.

Note that all cases with δ∧(d) = 1 require d ∈ P or d ∈ Q, so D∧ ⊆ P ∪Q.
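For a single dart d, the construction of Lemma 4.14 reduces to δ∧(d) = δP (d) − (φ+(right(d)) − φ+(left(d))) with φ+ := max(φ, 0). The sketch below illustrates this on a hypothetical two-face example; all names (dart labels, face labels) are invented for the illustration, not taken from the thesis.

```python
# A hedged sketch of the construction in Lemma 4.14 (the tiny example and
# all names are illustrative). For a single dart d the definition reduces
# to delta_wedge(d) = deltaP(d) - (phi_plus(right(d)) - phi_plus(left(d)))
# with phi_plus := max(phi, 0), i.e., phi restricted to S+.

def meet_vector(deltaP, phi, left, right):
    phi_plus = lambda f: max(phi[f], 0)
    return {d: deltaP.get(d, 0) - (phi_plus(right[d]) - phi_plus(left[d]))
            for d in left}

# Two parallel s-a edges (darts d1 above d2) and two parallel a-t edges
# (d3 above d4); P uses d1, d3 and Q uses d2, d4. The faces are "inf",
# "f1", "f2", and phi = Phi(deltaP - deltaQ).
left   = {"d1": "inf", "d2": "f1",  "d3": "inf", "d4": "f2"}
right  = {"d1": "f1",  "d2": "inf", "d3": "f2",  "d4": "inf"}
deltaP = {"d1": 1, "d3": 1}
phi    = {"inf": 0, "f1": 1, "f2": 1}

# P is left of Q here, so the meet is Q itself: exactly the darts d2, d4.
assert meet_vector(deltaP, phi, left, right) == {"d1": 0, "d2": 1, "d3": 0, "d4": 1}
```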

It is straightforward to check that if P � Q, then P = D∧ and Q = D∨. Unfortunately,

D∧ and D∨ are not s-t-paths in general. However, we will show by a simple flow decomposition argument that D∧ and D∨ each consist of a unique simple s-t-path and some cycles


and that these paths are meet and join of P and Q, respectively. The key idea is to show

that D∧ contains no counterclockwise cycles and that D∨ contains no clockwise cycles.

However, the proof is somewhat lengthy and not always of the most intuitive kind.

Simplifying assumptions and notations

For simplicity, we will now restrict ourselves to D∧. The argumentation for D∨ is completely analogous. Again, we will need several auxiliary results.

The upcoming lemmata will exclusively deal with the structure of D∧. Their proofs

will only use darts of P and Q and the face potential φ. For this reason it is convenient

to assume that G only contains the edges of P and Q, as we can identify φ with a face

potential in this graph by Remark 4.5. We will drop this assumption once we consider

paths other than P and Q again.

Assumption. Until further notice, we assume that G = G[E(P ∪Q)].

In order to keep the proofs understandable, we introduce some helpful notations. It is

important to notice that some darts of P ∪ Q are in D∧, while others are not.

Notation. For d ∈←→E we define φl(d) := φ(left(d)) and φr(d) := φ(right(d)). We call a

dart d ∈ D∧ solid and a dart d ∈ (P ∪Q) \D∧ hidden. A dart in P \Q is called P -dart,

while a dart in Q \ P is called Q-dart.

Note that both solid and hidden darts can influence the face potential. Darts in P ∩Qwhich are neither P - nor Q-darts have no influence on the potential, so they are rather

uninteresting – we will even eleminate them in a later assumption. Note that a dart can

be a P -dart while its reverse can be a Q-dart, but only one of the two darts can be solid.

Crossing a P -dart from right to left means that the potential decreases by 1, and crossing a

Q-dart from right to left means that the potential increases by 1 (so in the case of crossing

an edge with anti-parallel P - and Q-darts the potential changes by 2, accordingly). We

motivate our definition with an insight that can be obtained from the proof of Lemma

4.14.

Lemma 4.15. If d is a solid P -dart, then φl(d) < 0. If d is a solid Q-dart, then φl(d) > 0.

Proof. If d is a solid P -dart, i.e., d ∈ D∧ \Q, then case (4) must apply to d in the proof

of Lemma 4.14. Thus right(d) /∈ S+ and φl(d) = φr(d) − 1 < 0 in this case. If d is a solid

Q-dart, i.e., d ∈ D∧ \P , then case (1) or (2) must apply to d in the proof of Lemma 4.14.

Thus left(d) ∈ S+ and φl(d) > 0 in this case.

A more intuitive reason for the above result is that a Q-dart can only become solid if it is part of a clockwise (positive potential) cycle in δP − δQ, which is then subtracted from

δP to obtain δ∧. We will show how Lemma 4.15 implies certain rules solid darts have to


obey, which however do not apply to hidden darts. E.g., two dart-disjoint paths of solid

darts cannot cross (cf. Lemma 4.20).

The following lemma translates the straightforward observation that δ∧ is a unit s-t-flow

of value 1 into results on the incidence of vertices and solid darts.

Lemma 4.16. There is exactly one solid dart d with tail(d) = s and no solid dart with head(d) = s. There is exactly one solid dart d with head(d) = t and no solid dart with

tail(d) = t. For any other vertex v ∈ V \ {s, t} exactly one of the following three cases

applies:

(1) There is no solid dart d with head(d) = v or tail(d) = v.

(2) There is exactly one solid dart d with head(d) = v and exactly one solid dart with

tail(d) = v.

(3) There are exactly two solid darts with head(d) = v and exactly two solid darts with

tail(d) = v.

Proof. Since δ∧ ∈ {−1, 0, 1}E , for every vertex v ∈ V the value δvT δ∧ = |{d ∈ D∧ : tail(d) = v}| − |{d ∈ D∧ : head(d) = v}| is the difference of the number of outgoing solid darts and the number of incoming solid darts at v. It is easy to check that

δvT δ∧ = δvT δP − ∑f∈S+ φ(f) δvT δf = δvT δP = 1 if v = s, −1 if v = t, and 0 otherwise,

where δvT δf = 0 since each δf is a circulation.

As s has no incoming darts in P ∪ Q, there must be exactly one outgoing solid dart at s. As t has no outgoing darts in P ∪ Q, there must be exactly one incoming solid dart at t. As every other vertex v ∈ V \ {s, t} has at most one incoming

dart in P , one incoming dart in Q, one outgoing dart in P and one outgoing dart in Q,

and the number of incoming and the number of outgoing solid darts must be equal, either

case (1) or (2) or (3) applies.

“Change of tracks” and the non-existence of solid counterclockwise cycles

We now want to investigate more closely those vertices at which solid P -darts and Q-darts meet. We call the result the “change of tracks” lemma, as this is the exact situation in

which we will apply it – a change of tracks on a path or cycle, when a P -dart is followed

by a Q-dart or vice versa. It basically states the following observation: In case there is

at least one solid P -dart and one solid Q-dart at a vertex, we can deduce that the left

potential of the P -dart is −1 and the left potential of the Q-dart is 1. This change of

potential implies that there are two additional (possibly hidden) darts of P ∪ Q at the

particular vertex, one entering and one leaving the path to the left (cf. Figure 4.4 (a) for

an illustration of this constellation).


Figure 4.4: (a) A “change of tracks”: both paths must occur between dQ and rev(dP ) in the counterclockwise order of the incident darts at the particular vertex (cf. Lemma 4.17). (b) The order at a vertex with four solid darts must be exactly as illustrated, thus paths and cycles of a decomposition of D∧ cannot cross (cf. Lemma 4.20). (c) A simplified4 illustration of the situation in the proof of Lemma 4.18. When a change of tracks from P to Q occurs on the counterclockwise cycle C, P enters the interior. It can only leave again towards the exterior if the potential is at least 2, as the Q-darts must continue the solid cycle.

Lemma 4.17 (Change of tracks lemma). Let dP be a solid P -dart and dQ be a solid

Q-dart. If head(dP ) = tail(dQ), then {dQ, π(dQ), . . . , rev(dP )} contains a P -dart and the

reverse of a Q-dart. If head(dQ) = tail(dP ), then {dP , π(dP ), . . . , rev(dQ)} contains a

Q-dart and the reverse of a P -dart. In both cases, φl(dP ) = −1 and φl(dQ) = 1.

Proof. We only show the statement for the case v := head(dP ) = tail(dQ). Since φl(dP ) <

0 and φl(dQ) > 0, the potential decreases by at least 2 when traversing the faces left(dQ),

left(π(dQ)), . . . , right(rev(dP )). However, {dQ, π(dQ), . . . , rev(dP )} ⊆ v contains at most four darts, as there can be at most one incoming and one outgoing dart from each of P and Q at v. As the incoming dart from P is dP and the outgoing dart from Q is

dQ, there is only an outgoing dart d′P ∈ P and an incoming dart d′Q ∈ Q that can decrease

the potential each by 1 on the counterclockwise traversal of the faces from left(dQ) to

left(dP ). So d′P and rev(d′Q) must both be contained in {dQ, π(dQ), . . . , rev(dP )} (note

that d′P = rev(d′Q) is perfectly possible). As they cannot change the potential by more

than 2, we have φl(dP ) = −1 and φl(dQ) = 1 as claimed.

We can use Lemma 4.17 to deduce a contradiction from the existence of any solid


counterclockwise cycle. The intuitive argument is that a change of tracks from P to Q

must happen on the cycle since none of the paths can comprise the cycle on its own, and

thus a part of P must leave from that counterclockwise cycle towards its interior. As t

is an exterior vertex of the cycle (it is at the infinite face), P crosses the cycle again to

continue on the exterior. At the vertex at which P crosses the cycle, the potential left of the cycle darts must be greater than or equal to 2, as the subsequent Q-darts forming the cycle

are solid and the P -dart entering the cycle from the left reduces the potential by 1 (cf.

Figure 4.4 (c)). A contradiction can now be obtained by counting the entering and leaving

P -darts on the left of the cycle segment. Be aware that the line of argumentation just

described is severely simplified and there are several special cases that need to be handled.

Thus, we need to introduce a further simplifying assumption before we can proceed.

Suppose we contract an edge with a dart d ∈ P ∩ Q. It is easy to check that in the

resulting graph both P \ {d} and Q \ {d} still are simple s-t-paths (unless s and t were

merged, which implies the trivial case P = Q) and that the circulation δP\{d} − δQ\{d} in

the new graph is represented by the same face potential φ (the faces are still the same after

the contraction, but the edge is deleted in the dual). Thus, the construction performed

in Lemma 4.14 yields D∧ \ {d} instead of D∧ for the contracted graph. Finally, if D∧

contains a counterclockwise simple cycle C, then D∧ \ {d} contains the counterclockwise

simple cycle C \ {d} in the graph after the contraction (note that C always contains at

least two edges that cannot be contracted, as it contains darts of P \ Q and Q \ P ).5

Consequently, we can assume w.l.o.g. that P ∩Q = ∅. In particular, every dart on a solid

cycle is a P -dart or a Q-dart.

Assumption. For the proofs of Lemma 4.18 and Lemma 4.19, we assume that P ∩Q = ∅.

Lemma 4.18. Let C be a solid counterclockwise cycle. There is a dart d1 ∈ C with

φl(d1) = 1 and a dart d2 ∈ C with φl(d2) ≥ 2 such that tail(d1) and head(d2) are on P

and C[tail(d1), head(d2)] ◦ rev(P [tail(d1), head(d2)]) is a simple counterclockwise cycle.

Proof. C must contain at least one solid P -dart dP and one solid Q-dart dQ, as the paths

are simple and none of them can make up the cycle on its own. W.l.o.g., we can assume

head(dP ) = tail(dQ) and apply the change of tracks lemma, which implies that φl(dQ) = 1

and gives us a dart d′ ∈ P that leaves tail(dQ) to the left, i.e., towards the interior of C.

As P ends in t, which is neither in the interior nor on the cycle, P [tail(d′), t] contains a dart on the exterior of C. Let d′′ be the last dart of P [tail(d′), t] that is in the interior of C before its first exterior dart occurs. As head(d′′) is on the cycle, it must have an incoming cycle dart and an outgoing cycle dart. As d′′ ∈ P is not a cycle dart, the incoming cycle dart is a solid Q-dart dC . If the outgoing cycle dart is a P -dart, P continues on the cycle after d′′ until the cycle changes tracks again from a P -dart to a Q-dart. But then P enters the interior of the cycle again by the change of tracks lemma. This is a contradiction to the choice of d′′. So the outgoing cycle dart is also a solid Q-dart d′C . As there is no other interior P -dart at head(d′′) by choice of d′′, the faces left of dC and d′C are only separated by d′′ with right(d′′) = left(dC) and left(d′′) = left(d′C). Thus φl(dC) = φl(d′C) + 1 ≥ 2 (also cf. Figure 4.4 (c)).

We now choose d2 to be the first dart on C[tail(dQ), head(dC)] with φl(d2) ≥ 2 and head(d2) on P [tail(dQ), t]. We then choose d1 to be the last dart on C[tail(dQ), head(d2)] with tail(d1) on P [tail(dQ), head(d2)]. Clearly, d1 and d2 are Q-darts. If the predecessor d′1 of d1 on C is a P -dart, φl(d1) = 1 by the change of tracks lemma. Otherwise, 2 > φl(d′1) > 0 by choice of d2, and there is a P -dart entering tail(d1) from the interior by choice of d1 and construction of P [tail(dQ), t], implying φl(d′1) ≥ φl(d1). Thus, 1 = φl(d′1) ≥ φl(d1) > 0 in this case.

4 In the illustration, dQ and dC already fulfill the requirements imposed on d1 and d2 by Lemma 4.18. In the general case, however, d1 and d2 can differ from dQ and dC , as, e.g., P [tail(dQ), head(dC)] could enter and leave intermediate vertices on C[tail(dQ), head(dC)] or even contain darts anti-parallel to those on the cycle.

5 The simplicity of P , Q is preserved because d ∈ P ∩ Q implies that a cycle in P \ {d} or Q \ {d} that occurs after merging two vertices must have already been a cycle in P or Q before contracting the edge. Simplicity of C is maintained as a dart in P ∩ Q connecting two vertices on the cycle can only be a dart of the cycle (d is the only possible way the cycle can continue at tail(d)).

We now apply a counting argument on the P -darts that enter or leave C[tail(d1), head(d2)]

from the left to show that φl(d2) can in fact not be larger than φl(d1). This yields a contradiction, proving the non-existence of counterclockwise cycles in D∧.

Lemma 4.19. There are no solid counterclockwise simple cycles.

Proof. Let d1 and d2 be as asserted by Lemma 4.18. We will traverse C from v := tail(d1)

to w := head(d2) and show that φl(d1) ≥ φl(d2).6 Note that C[v, w] starts and ends with

a Q-dart. Furthermore, whenever a change of tracks on C[v, w] from Q to P and back

to Q occurs, the potential to the left must be equal to 1 at the last Q-dart on the cycle

before the P -darts and at the first Q-dart on the cycle after them. Moreover, these two

changes correspond to exactly one P -dart entering C[v, w] from the left and exactly one

P -dart leaving it to the left, with no P -darts entering or leaving C[v, w] in between. So

the total increase of the potential of the face left of us on our way from d1 to d2 on the

cycle is the number of P -darts that leave C[v, w] to the left minus the number of P -darts

that enter C[v, w] from the left. However, the number of P -darts entering C[v, w] from the left must be greater than or equal to the number of P -darts leaving it to the left because t is on

the exterior of C[v, w] ◦ rev(P [v, w]) and P cannot cross P [v, w], such that any part of P

6 It is actually more convenient to imagine we traverse the interior faces adjacent to the vertices on C[v, w]

from left(d1) to right(d2) in the dual. We then cross every dart that leaves C[v, w] to the left from left

to right, decreasing the potential by 1, and every dart that enters C[v, w] from the left from right to

left, increasing the potential by 1.


that enters the interior from C[v, w] must also leave it later on to C[v, w].7 This implies

1 = φl(d1) ≥ φl(d2) ≥ 2, a contradiction.

Impossibility of crossings and the final result

Before we can obtain our final result, we need an additional observation. It will be important for us to show that in a flow decomposition of δ∧, no s-t-path crosses a cycle. This

can be achieved by analyzing those vertices that are incident to four solid darts – a special

case of Lemma 4.17, in which we can specify the rotation system at v more exactly (cf.

Figure 4.4 (b)).

Lemma 4.20. If a vertex v ∈ V \ {s, t} is incident to four solid darts, then v corresponds to the orbit (dPout, rev(dPin), dQout, rev(dQin)) of the embedding π for two solid P -darts dPin and dPout and two solid Q-darts dQin and dQout.

Proof. Since v has one incoming and one outgoing P -dart and Q-dart each by Lemma 4.16, the orbit of π corresponding to v must be a cyclic permutation of dPout, rev(dPin), dQout, and rev(dQin). As dPout is a P -dart, we have φl(dPout) < 0 on the one hand. On the other hand, as dQin and dQout are Q-darts, we have φr(rev(dQin)) = φl(dQin) > 0 and φr(dQout) = φl(dQout) − 1 ≥ 0. So left(dPout) = right(π(dPout)) implies that π(dPout) = rev(dPin), as any other choice of π(dPout) yields a contradiction. Analogously we can show that π(dQout) = rev(dQin). As all four darts are in the same orbit, this orbit must be (dPout, rev(dPin), dQout, rev(dQin)) as claimed by the lemma.

Theorem 4.21. (P,�) is a submodular lattice with P ∧Q being the unique simple s-t-path

contained in D∧ and P ∨Q being the unique simple s-t-path contained in D∨.

Proof. As δ∧ ∈ {−1, 0, 1}E is the sum of δP and some circulations, δ∧ is a unit s-t-flow of value 1. Thus, there is a flow decomposition δR + ∑i=1,…,k δCi = δ∧ for a simple s-t-path R ⊆ D∧ and some – by Lemma 4.19 clockwise – simple cycles C1, . . . , Ck ⊆ D∧ with E(R), E(C1), . . . , E(Ck) pairwise disjoint.

We first show that for any dart d ∈ R neither left(d) nor right(d) is in the interior of any of the cycles Ci. By contradiction assume d ∈ R is incident to a face in the interior of a cycle Ci. W.l.o.g., d is the last dart on R with this property. As t is on the exterior of Ci, there is a successor d′ of d on R. As E(R) and E(Ci) are disjoint, d is an interior dart and d′ is an exterior dart of Ci. head(d) is incident to four solid darts – d (incoming), d′ (outgoing), and an incoming and an outgoing dart from the cycle Ci. Applying Lemma 4.20 on head(d) yields π(rev(d)) = d′ or π−1(rev(d)) = d′. Thus, d and d′ are incident to a common face, a contradiction.

7 The vertex t is on the exterior as it is adjacent to the infinite face and cannot be on the cycle as it has no outgoing darts. In fact, one can also show that s is on the exterior of any cycle as well, but this is not necessary for the proof.


We now show R � P and R � Q. This is directly implied by the two inequalities stated below, which follow from the linearity of Φ, the equality φ = ∑_{f∈S+} φ(f)Φ(δf) + ∑_{f∈S−} φ(f)Φ(δf), and the fact that Φ(δC) ≥ 0 and Φ(δf) ≥ 0 for any clockwise simple cycle C and any face f ∈ V∗.

Φ(δP − δR) = Φ(δP − (δ∧ − ∑_{i=1}^{k} δCi)) = Φ(δP − δP + ∑_{f∈S+} φ(f)δf) + ∑_{i=1}^{k} Φ(δCi)
           = ∑_{f∈S+} φ(f)Φ(δf) + ∑_{i=1}^{k} Φ(δCi) ≥ 0

Φ(δQ − δR) = Φ(δQ − (δ∧ − ∑_{i=1}^{k} δCi)) = Φ(δQ − δP + ∑_{f∈S+} φ(f)δf) + ∑_{i=1}^{k} Φ(δCi)
           = −φ + ∑_{f∈S+} φ(f)Φ(δf) + ∑_{i=1}^{k} Φ(δCi) = −∑_{f∈S−} φ(f)Φ(δf) + ∑_{i=1}^{k} Φ(δCi) ≥ 0

Now let S ∈ P be a path with S � P and S � Q.8 First, we consider only faces incident to R. So let f ∈ {left(d) : d ∈ R} ∪ {right(d) : d ∈ R}. We know that f is not in the interior of any of the cycles Ci and thus Φ(δCi)(f) = 0. This yields

Φ(δR − δS)(f) = Φ(δP − ∑_{g∈S+} φ(g)δg − ∑_{i=1}^{k} δCi − δS)(f) = Φ(δP − ∑_{g∈S+} φ(g)δg − δS)(f)
             = Φ(δP − δS)(f) if f ∉ S+, and = Φ(δQ − δS)(f) if f ∈ S+,

which is ≥ 0 in both cases.

Now let f ∈ V∗ be any face. As S does not contain a cycle in the primal graph, it does not contain a cut in the dual graph. Thus there is a path in the dual that does not intersect S or R and leads from f to some face f′ incident to a dart of R. Thus Φ(δR − δS)(f) = Φ(δR − δS)(f′) ≥ 0. Consequently, S � R.

We have shown that any simple s-t-path R contained in D∧ is a common lower bound of P and Q w.r.t. � that is larger than any other common lower bound. Anti-symmetry of � implies that D∧ contains only a unique simple s-t-path. Likewise, we can show that D∨ contains a unique s-t-path that is the least common upper bound of P and Q. Thus, (P,�) is a lattice with meet and join as described above. Yet, we have to show submodularity.

8 At this point we have to drop our assumption G = G[E(P ∪ Q)].

D∧ ∪ D∨ ⊆ P ∪ Q follows directly from Lemma 4.14. Let d ∈ D∧ ∩ D∨

and assume by contradiction d ∈ P \ Q or d ∈ Q \ P. In the first case, Lemma 4.15 implies φl(d) < 0 as d ∈ D∧ \ Q. An analogous version of Lemma 4.15 for the join further implies φr(d) > 0 as d ∈ D∨ \ Q. However, 1 = (δP − δQ)(d) = φr(d) − φl(d) ≥ 2 – a contradiction. The case d ∈ Q \ P leads to the same contradiction. Thus d ∈ P ∩ Q.

Our result implies the existence of a unique leftmost simple s-t-path in every plane graph,

i.e., a path that is left of any other simple s-t-path in the graph. It is relatively easy to

construct an s-t-path such that no other s-t-path is left of it (cf. Lemma 4.37). However,

without the above result there still could be other paths that have the same property and

are incomparable to the constructed path. The existence of a unique leftmost path in the

above sense has not yet been established to our knowledge, and the notion of “leftmost

path” has so far been applied without formal definition or proof of existence – or under

the premise of the existence of a leftmost walk and the absence of clockwise cycles (as,

e.g., in [3]).

4.1.4 Negative results on non-s-t-planar graphs

We now want to discuss which properties of the left/right order in s-t-plane graphs also

hold in general (possibly non-s-t-planar) plane graphs. It will turn out that many intuitive

results of Subsection 4.1.2 lose their validity once we drop the restriction of s-t-planarity,

the most striking result being that there is no consecutive and submodular lattice of paths

in a planar graph in general – no matter how we define the order. Most negative results

can be shown on the same example graph, which is depicted in Figure 4.5 along with a

complete enumeration of its simple s-t-paths and a Hasse diagram of its partial order.

We first give a short list of simple properties that the left/right relation in general plane

graphs lacks in contrast to s-t-plane graphs and point out a counterexample in the graph

of Figure 4.5.

• The left/right order is not consecutive in general.

Example: P1 � P2 � P3, but −→e1 ∈ (P1 ∩ P3) \ P2.

• The orientation lemma does not hold in general plane graphs.

Example: ←−e5 ∈ P1, but −→e5 /∈ P8.

• P ∨Q is not the leftmost path in G[E(P ∪Q)] and P ∧Q is not the rightmost path

in G[E(P ∪Q)].

Example: As seen in Figure 4.3.

• Not every subpath of a leftmost path is a leftmost path – a property that comes for

free with the definition of the uppermost path in s-t-plane graphs.


(Figure: the example graph G = (V, E) with edges e1, . . . , e8, its simple s-t-paths P1, . . . , P8, and the Hasse diagram of (P,�).)

Figure 4.5: The partial left/right order on the set of simple s-t-paths in a graph. Several negative results can be obtained from this example.

Example: The arc −→e1 forms an s-head(−→e1)-subpath of the leftmost path P1, but it is

right of the path −→e2 ,−→e6 ,←−e3 .

• Let P, Q ∈ P. If P is the leftmost path in G[E(P ∪ Q)], this does not imply that Q is the rightmost path in G[E(P ∪ Q)]. The converse does not hold either.

Example: Although P1 is the leftmost path in G[E(P1 ∪ P4)], its rightmost path is

P7.

Apparently, several basic properties of the left/right order on s-t-plane graphs do not extend to general plane graphs, most prominently consecutivity. Even worse, no other partial order that defines a submodular lattice can fulfill consecutivity in general either.

Theorem 4.22. The set of s-t-paths in the planar graph depicted in Figure 4.5 cannot be

equipped with a partial order �, such that (P, �) is a consecutive and submodular lattice.9

9 A possible way of extending this result to any non-s-t-planar graph using Kuratowski's theorem [26] is pointed out in Chapter 5.

Proof. Assume by contradiction that �, ∧, ∨ are defined such that (P, �) is a consecutive and submodular lattice with meet ∧ and join ∨. Consider the four paths P2, P3, P6, P7. By submodularity, Pi ∧ Pj ⊆ Pi ∪ Pj for all i, j ∈ {2, 3, 6, 7}. It can easily be checked that Pi and Pj are the only two s-t-paths in G[E(Pi ∪ Pj)] for i ≠ j. It follows that either Pi ∧ Pj = Pi, which implies Pi ≺ Pj, or Pi ∧ Pj = Pj, which implies Pi ≻ Pj. Thus, the paths form a chain w.r.t. �. Since P2 and P3 are the only two of the paths sharing −→e7 as a common dart, P3 and P6 are the only two of the paths sharing −→e1, and P6 and P7 are the only two of the paths sharing −→e8, consecutivity demands that either P2 ≻ P3 ≻ P6 ≻ P7 or P2 ≺ P3 ≺ P6 ≺ P7. In both cases −→e2 ∈ (P2 ∩ P7) \ P3 yields a contradiction to consecutivity.

Our example graph has another interesting property. For every path in the graph there

is a simple s-t-cut that is crossed multiple times by the path. This has consequences for

algorithmic approaches to solve the maximum flow problem. An algorithm that, like the

two-phase greedy algorithm, chooses the augmenting path in the first iteration “blindly”,

i.e., without considering the capacities before, might be forced to take back flow from some

dart in a later iteration, as it could turn out that the path it has chosen in the beginning

crosses a minimum cut backwards. As the two-phase greedy algorithm never takes flow

back once it is assigned, it will not work for the maximum flow problem in planar graphs in

general. However, as we already know that this algorithm works on s-t-planar graphs, this

is a clear indicator that the maximum flow problem actually is harder on non-s-t-planar

graphs.

4.2 Algorithms for maximum flow in planar graphs

Applying the two-phase greedy algorithm on the path lattice yields Ford and Fulkerson’s

uppermost path algorithm for maximum flow computations. We will show the equivalence

of the two algorithms and present an O(|V | log(|V |)) implementation by showing how a

sequence of uppermost paths can be computed in linear time. We also show how to use

the algorithm for obtaining a succinctly represented path decomposition of a given flow.

The fastest algorithm for maximum flow in s-t-planar graphs needs only time O(|V |). In

an astonishingly short article, Hassin [17] describes how to obtain a maximum flow in an

s-t-plane graph from the face potential of a shortest path computation in the dual graph

in linear time. Using a linear time shortest path algorithm for planar graphs presented by

Henzinger, Klein, Rao and Subramanian [18], maximum flow computations in s-t-planar

graphs can be performed in linear time. We will slightly extend Hassin’s result by showing

that the path lattice in the primal graph corresponds to a sublattice of the cut lattice in

the dual graph.



For the case of general planar graphs, Borradaile and Klein presented a generalization of

the uppermost path algorithm based on the left/right relation [3]. We will shortly discuss

it at the end of the section.

4.2.1 The uppermost path algorithm for s-t-planar graphs

In [10], Ford and Fulkerson showed that in s-t-planar graphs iteratively choosing the uppermost residual path with respect to the current flow ensures that their path augmenting algorithm (Algorithm 2.63) takes at most |E| iterations. It is easy to see that this uppermost path algorithm is in fact a special case of Phase 1 of the two-phase greedy algorithm

(Algorithm 3.17 and 3.21) applied on the path formulation of the maximum flow problem

equipped with the path lattice. In Phase 2, the two-phase greedy algorithm also constructs

a minimum s-t-cut from the bottleneck elements computed in Phase 1.

Algorithm 4.23 (Uppermost path algorithm).

Input: an s-t-planar graph G = (V, E), s, t ∈ V, capacities c ∈ R+^←→E

Output: an s-t-flow x in G respecting the capacities c that maximizes the flow value δs^T x

1: Compute an s-t-planar embedding π of G.
2: Initialize x = 0.
3: while there is a residual s-t-path do
4:   Let P = max{Q ∈ P : Q is residual}.
5:   Augment x by min{c(d) − x(d) : d ∈ P} units of flow along P.
6: end while
7: return x.

Note that the maximum chosen in line 4 is unique by Lemma 3.10. An example of an application of the algorithm has already been given in Chapter 1 (cf. Figure 1.2).
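For illustration, the augmenting loop of Algorithm 4.23 can be sketched as follows. Everything here is an assumption made for the sketch, not the thesis's implementation: darts are represented as (tail, head, edge_id) triples, and a plain BFS rule stands in for the uppermost-path selection of line 4, which would additionally require the planar embedding. The sketch augments only along forward darts, i.e., it never takes back flow.

```python
from collections import defaultdict, deque

def augment_skeleton(cap, s, t, choose_path):
    # cap maps a dart (tail, head, edge_id) to its capacity (assumed format).
    x = defaultdict(int)                                # flow per dart
    while True:
        res = {d: c - x[d] for d, c in cap.items()}     # residual capacities
        P = choose_path(res, s, t)                      # line 4 of Algorithm 4.23
        if P is None:
            break
        delta = min(res[d] for d in P)                  # bottleneck, line 5
        for d in P:
            x[d] += delta
    out_s = sum(v for d, v in x.items() if d[0] == s)
    in_s = sum(v for d, v in x.items() if d[1] == s)
    return dict(x), out_s - in_s

def bfs_rule(res, s, t):
    # Placeholder selection rule: any residual s-t-path found by BFS.  The
    # uppermost path algorithm would instead return the maximum residual
    # path w.r.t. the left/right order, using the planar embedding.
    pred, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        for d, r in res.items():
            if d[0] == u and r > 0 and d[1] not in pred:
                pred[d[1]] = d
                queue.append(d[1])
    if t not in pred:
        return None
    path, v = [], t
    while pred[v] is not None:
        path.append(pred[v])
        v = pred[v][0]
    return path[::-1]
```

On the toy instance of two parallel s-t edges with capacities 3 and 2, the skeleton saturates both edges and reports flow value 5.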

We now verify that the two-phase greedy algorithm indeed behaves like the uppermost

path algorithm, i.e., in every iteration it picks the uppermost path residual with respect to

the current flow (given that it augments flow along this path). Intuitively, the algorithm maintains the residual capacity c̄(d) = c(d) − ∑_{P∈P:d∈P} y(P) for every dart to identify the bottleneck dart in every iteration and deletes it from the lattice. The orientation

lemma ensures that the sub-lattice (obtained by prohibiting some non-residual darts)

considered by the algorithm actually corresponds to the path lattice of a certain subgraph


(not containing the underlying edges of the darts). This subgraph contains all residual

simple s-t-paths. The following lemma formalizes our observations.

Lemma 4.24. In iteration i of Phase 1, the two-phase greedy algorithm chooses the path Pi that is the uppermost path of G[E \ E({d1, . . . , di−1})]. If y(Pi) > 0, then Pi is the maximum path (w.r.t. �) among all simple s-t-paths residual w.r.t. xi−1 := ∑_{j=1}^{i−1} y(Pj)δPj.

Proof. We use induction on i. In the first iteration, P1 is chosen as uppermost path of G. For i ≥ 1, by the orientation lemma, there is no simple s-t-path in G[E \ E({d1, . . . , di−1})] using rev(di), so deleting all paths containing di from the lattice is the same as deleting its edge from the graph. Thus, Pi+1 is chosen as uppermost path of G[E \ E({d1, . . . , di})]. If y(Pi) > 0, then also c(d) − ∑_{j<i:d∈Pj} y(Pj) = c̄(d) ≥ c̄(di) > 0 for all d ∈ Pi, i.e., Pi is residual w.r.t. xi−1. After the augmentation, the bottleneck dart di is non-residual w.r.t. xi. By the orientation lemma, there is no simple s-t-path using rev(di), thus di will not become residual again in a later iteration and P[←→E \ {d1, . . . , di}] contains all paths residual with respect to xi.

Note that in some iterations the algorithm may choose a non-residual path, but then

no flow is augmented. An important by-product of the above proof is the insight that

flow once assigned to a dart is never taken back as, due to the orientation lemma, no two

augmenting paths contain anti-parallel darts. Consequently, the uppermost path algorithm

terminates after at most |E| iterations, even outperforming our bound for the two-phase

greedy algorithm in general, which is |←→E | = 2|E| in this case, by a constant factor of 2.

Also note that we can compute the flow value on each arc by a simple modification of the

algorithm.

Remark 4.25. The arc flow x ∈ RE can be obtained without increasing the running

time of the algorithm. Let γ(d) be the value of the key stored with element d in the

heap. By assigning either x(e) = h + c(−→e ) − γ(−→e ) when −→e is ejected from the heap

or x(e) = −(h + c(←−e ) − γ(←−e )) when ←−e is ejected from the heap (only one of these

two cases is possible for every edge by the orientation lemma), we ensure that −→e or←−e , respectively, is assigned the increment of flow during the period in time that it was

contained in the uppermost path. In terms of the simple implementation, this is the same

as assigning x(e) = c(−→e )− c(−→e ) when −→e leaves the uppermost path. We can also commit

the assignment a posteriori and construct the arc flow in time O(|E|) from the output

(L+, L−, y) of the improved two-phase greedy algorithm.

Implementation and running time analysis

We now give an implementation of the oracle needed by the improved two-phase greedy

algorithm, which efficiently obtains the current uppermost path from the previous one. We

already know how to do this from scratch in time linear in the number of edges. However,


(Figure: the path Pi with the darts dk1, di, dk2, the face f, and the sets Pi ∩ Pi+1, L+i+1 and L−i+1.)

Figure 4.6: Computing subsequent uppermost paths (cf. Lemma 4.26). After deleting

di, the new uppermost path can be obtained by removing the subpath from

dk1 to dk2 and closing the gap by traversing the counterclockwise boundary

of f from tail(dk1) to head(dk2). The two darts dk1 and dk2 can be identified

in amortized constant time by simultaneous bidirectional traversal of the

boundary (cf. Listing 4.27).

the special structure of the sequence of paths the algorithm considers and the planarity of

the graph enable us to perform the task in amortized constant time.

The key idea is based on the following observation. If we delete the edge corresponding

to the bottleneck dart on the current uppermost path, the face f right of that dart is

merged with the infinite face. Hence, the new uppermost path after the deletion can only

contain darts of the current uppermost path and darts whose left face is f (now becoming

part of the infinite face). The following lemma gives the details. Also see Figure 4.6 for

an example.

Lemma 4.26. For some i ∈ {1, . . . , k − 1} let Pi = {d1, . . . , dk} be the uppermost path of G[E \ E({d1, . . . , di−1})] and di be the bottleneck dart as computed by the two-phase greedy algorithm. Define f := right(di), k1 := min{j : tail(dj) is adjacent to f} and k2 := max{j : head(dj) is adjacent to f}. Then

Pi+1 = {d1, . . . , dk1−1, d′1, . . . , d′l, dk2+1, . . . , dk}

for the uppermost path Pi+1 of G[E \ E({d1, . . . , di})], where d′1, . . . , d′l is the simple tail(dk1)-head(dk2)-path on the reverse (counterclockwise) boundary of f. In particular, L+i+1 = {d′1, . . . , d′l} and L−i+1 = {dk1, . . . , dk2}.

Proof. k1 and k2 exist, as both tail(di) and head(di) are adjacent to f . Also the simple

tail(dk1)-head(dk2)-path on the reverse boundary of f is in G[E \ E({d1, . . . , di})], as no

edge of f has been removed before – otherwise f would have been merged with f∞ already,

but then removing di means disconnecting s and t by the bridge lemma, implying i = k.

Furthermore, the path is simple as all three parts comprising it are simple and their only

common vertices are tail(dk1) and head(dk2) by choice of k1 and k2. Finally, the left face


of every dart on that path is the infinite face in G[E \ E({d1, . . . , di})], as removing di

from G[E \ E({d1, . . . , di−1})] implies merging f∞ and f .

This will allow us to construct the sequence of succeeding uppermost paths in overall linear (or amortized constant) time. We can construct the simple path from some vertex x to some vertex y on the reverse boundary of some face f by traversing this reverse boundary, starting at some common dart in x ∩ {rev(d) : d ∈ f}, marking every vertex we visit, and, whenever we encounter a vertex again, deleting all darts from the path we passed since the last occurrence of that vertex, finally stopping when we reach y. This can for example be used to compute the uppermost path of G in the first iteration with x = s, y = t and f = f∞. We maintain for every dart in the current uppermost path its successor and its predecessor and also a label at every vertex that tells whether it is on the path or not, and update this data whenever a dart is added to or removed from the path.
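The marking procedure just described – traverse the reverse boundary, mark visited vertices, and discard the darts of any cycle closed at a revisited vertex – can be sketched as follows; the (tail, head) dart representation and the function name are illustrative assumptions.

```python
def extract_simple_path(walk, start, target):
    # walk: the dart sequence of the traversed (reverse) boundary, starting
    # at `start`, each dart given as a (tail, head) pair.  Darts of any cycle
    # closed at an already-visited vertex are discarded; we stop at `target`.
    path, on_path = [], {start}
    for u, v in walk:
        if v in on_path:
            # encountered v again: drop all darts passed since its last visit
            while path and path[-1][1] != v:
                on_path.discard(path.pop()[1])
        else:
            path.append((u, v))
            on_path.add(v)
        if v == target:
            return path
    return None  # target not reached on this boundary walk
```

For example, the walk 0 → 1 → 2 → 1 → 3 yields the simple path 0 → 1 → 3, with the cycle 1 → 2 → 1 skipped.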

Yet, in any later iteration, we first have to identify dk1 and dk2 . Note that we cannot

afford to traverse Pi completely or from di to either of its endpoints in every iteration.

Also the desired darts cannot be found by traversing the boundary of f , as we cannot

determine whether a vertex in Pi comes before or after di in the path.10 Maintaining

numbered labels at the vertices in Pi for this purpose is not an option in general, as we

cannot afford to re-enumerate in each iteration and leaving sufficiently large gaps between

the numbers for inserting new vertices may cause the labels to be too large.

This problem can be overcome by simultaneous forward and backward traversal of Pi beginning at di, which allows us to stop in time without traversing too many darts (cf.

Listing 4.27 for a pseudo-code listing of the uppermost path computation procedure, which

replaces the oracle). We assume i is the number of previous calls of the procedure in the

course of the flow computation. The procedure first marks and counts all common vertices

of both the path and the face. The first two of these vertices are the tail and head of di.

In the while-loop, we simultaneously traverse Pi backwards and forwards starting at di,

advancing by one dart in each direction with every step. For both ends of the path, we

save the last visited dart whose head or tail, respectively, is adjacent to f , and we decrease

the counter by 1 with every encounter of such a vertex. Thus, when the counter reaches

0, we terminate the traversal and dl = dk1 and dr = dk2 . Now L+i+1 can be computed by

traversing the reverse boundary of f from tail(dk1) to head(dk2), skipping any cycle we

encounter, and L−i+1 can be obtained by traversing Pi from dk1 to dk2 .

We show that the overall running time of the oracle calls indeed is linearly bounded

in |←→E |. The update of all successor, predecessor and vertex labels takes at most time

O(|←→E |), as each dart inserted into or removed from the uppermost path causes only the

10 To understand the difficulty, look at Figure 4.6. When traversing the reverse boundary of f from tail(di) to head(di), we encounter three subpaths (a black, a green, and again a black), all of which could possibly join the disconnected parts of Pi \ {di} as long as we do not know which side of the cut a vertex belongs to.


Listing 4.27 (Uppermost path computation ((i+1)th call)).

1: if i = 0 then
2:   Compute L+1 = P1 by traversing the reverse boundary of f∞ from s to t, skipping cycles. Let L−1 = ∅ and update predecessors, successors and vertex labels.
3: else
4:   Mark and count all vertices that are adjacent to f = right(di) and lie on Pi by traversing the boundary of f. Set m to the number of these vertices decreased by 2.
5:   Initialize d̄l = d̄r = dl = dr = di.
6:   while m > 0 do
7:     d̄l ← predecessor(d̄l)
8:     if tail(d̄l) is adjacent to f then dl ← d̄l and m ← m − 1
9:     d̄r ← successor(d̄r)
10:    if head(d̄r) is adjacent to f then dr ← d̄r and m ← m − 1
11:  end while
12:  Compute L+i+1 by traversing the reverse boundary of f from tail(dl) to head(dr), skipping cycles. Compute L−i+1 by traversing Pi from dl to dr. Update predecessors, successors and vertex labels.
13: end if
14: return (L+i+1, L−i+1)

update of one successor, one predecessor and at most two vertex labels. Furthermore,

we consider every face in at most one iteration (afterwards, it is merged with the infinite

face). Thus, all traversals of face boundaries can be done in time O(|←→E |). We also know from the analysis of the two-phase greedy algorithm that the overall size of all sets L−j is linearly bounded in |←→E |. Finally, also determining dk1 and dk2 in each iteration takes overall linear time: when the while-loop terminates, either the subpath from dl to di or from di to dr is contained in L−i+1, so the number of iterations of the loop cannot exceed |L−i+1|.

A more general approach for computing subsequent uppermost or leftmost paths, even

in general plane graphs, can be found in Subsection 4.2.3. The technique is based on

interdigitating spanning trees. However, more sophisticated data structures are necessary

for the implementation in contrast to our method.

Plugging the achieved time bound of O(|←→E |) into the result on the general running time

of the greedy algorithm and using the facts that |←→E | = 2|E| and that the number of edges


is linearly bounded by the number of vertices by Theorem 2.47, we conclude the following

running time for our implementation of the uppermost path algorithm.

Theorem 4.28. The uppermost path algorithm for maximum flow on s-t-planar graphs

can be implemented to run in time O(|V | log(|V |)).

Application: succinct path decomposition

Our implementation of the uppermost path algorithm enables us to compute a decomposition of a given s-t-planar flow in time O(|V | log(|V |)) and to store it in linear space using

the succinct representation of the uppermost path sequence. This can, e.g., be applied in

the flow over time setting.

Suppose we are given an arc flow x ∈ RE in an s-t-planar graph. Set the capacities to c(d) := max{0, x(d)}. Clearly, x is a maximum flow with respect to c. Thus, running the uppermost path algorithm we obtain a path flow y ∈ RP+ of the same value as x. Actually, y is the "path-based fraction" of a decomposition of x, i.e., x(d) = ∑_{P∈P:d∈P} y(P) + ∑_{C∈C:d∈C} y(C) for some y ∈ RC+.

The uppermost path algorithm returns a compact encoding (L+, L−, y) of y that only takes linear space. If we are not interested in flow on cycles (e.g., as it does not contribute to the flow value), or if there is no flow on cycles in x, we have a succinct representation of a path decomposition of x.
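For illustration, a generic (non-succinct) flow decomposition by peeling can be sketched as follows; this is not the (L+, L−, y) encoding produced by the uppermost path algorithm, and the (tail, head, edge_id) dart representation is an assumption. It merely demonstrates the split of x into a path-based fraction and some cycles on small inputs, assuming x is a feasible nonnegative s-t-flow.

```python
from collections import defaultdict

def decompose_flow(x, s, t):
    x = {d: v for d, v in x.items() if v > 0}   # positive dart flows only
    out = defaultdict(list)
    for d in x:
        out[d[0]].append(d)

    def first_positive(u):
        for d in out[u]:
            if x[d] > 0:
                return d
        return None

    paths, cycles = [], []
    while first_positive(s) is not None:
        stack = []            # darts of the current walk
        at = {s: 0}           # at[v] = position in stack where v was reached
        u = s
        while u != t:
            d = first_positive(u)  # exists by flow conservation while u != t
            v = d[1]
            if v in at:            # the walk closed a cycle: peel it off
                cyc = stack[at[v]:] + [d]
                amt = min(x[e] for e in cyc)
                for e in cyc:
                    x[e] -= amt
                cycles.append((cyc, amt))
                for e in stack[at[v]:]:
                    del at[e[1]]
                del stack[at[v]:]
            else:
                stack.append(d)
                at[v] = len(stack)
            u = v
        amt = min(x[e] for e in stack)   # peel the s-t-path found
        for e in stack:
            x[e] -= amt
        paths.append((list(stack), amt))
    return paths, cycles
```

On a flow of one unit along s → a → b → t plus one unit on the cycle a → b → a, the sketch returns one path and one cycle, each with amount 1.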

An application of the described technique is the path decomposition for temporally

repeated flows (cf. Section 4.3 for some details on temporally repeated flows). Here,

cycles only occur if they have transit time 0, and even then they do not contribute to the

value of the flow over time. Thus, (L+, L−, y) is a O(|V |)-space representation of a path

decomposition of a maximum temporally repeated flow.

4.2.2 The duality of shortest path and minimum cut in s-t-plane graphs

The duality of shortest path and minimum cut computations in s-t-plane graphs was

already pointed out in [10] and then extended by Hassin [17] to maximum flow computations. The main idea is that splitting the infinite face into two faces and then computing

a shortest path between those faces yields a cycle in the original dual graph – thus a

minimum cut in the primal.

Suppose we have a graphG = (V,E) with an s-t-planar embedding and capacity function

c :←→E → R+. By Theorem 2.50, we can add an edge e0 with tail(−→e0) = t and head(−→e0) = s

into the embedding while maintaining planarity, thus obtaining a graph G0 = (V0, E0).

We let left(−→e0) be the infinite face of the new graph and define f0 := right(−→e0). We assign

the capacities c(←−e0) = 0 and c(−→e0) =∑

d∈←→Ec(d) + 1 to the darts of the new edge.

Lemma 4.29. Let C ⊆←→E . Then C is a simple s-t-cut in G if and only if C is a simple

f0-f∞-path in G∗0.


Proof. C is a simple s-t-cut in G if and only if C0 := C ∪ {←−e0} is a simple s-t-cut in G0.

By cycle/cut duality, this is the case if and only if C0 is a simple cycle in G∗0. This, in

turn, is true if and only if C = C0 \ {←−e0} is a simple f0-f∞-path in G∗0.

Now interpret the capacity of a dart as a cost of the dart in the dual. As a shortest

simple f0-f∞-path in G∗0 does not contain any dart of e0 by choice of c(−→e0), such a shortest

path is a minimum s-t-cut in G. If we calculate the length of all shortest f0-f -paths we

can even use this to compute a maximum s-t-flow.

Definition 4.30. A vector φ ∈ RV∗0 is called a shortest path potential if φ(f∞) = 0 and φ(f) is the length of a shortest simple f-f∞-path in G∗0 for every face f ∈ V∗0.

Lemma 4.31. Let φ ∈ RV∗0 be a shortest path potential. Then φ(rightG0(d)) − φ(leftG0(d)) ≤ c(d) for all d ∈ ←→E0. If d ∈ Pf for a shortest simple f-f∞-path Pf, then φ(rightG0(d)) − φ(leftG0(d)) = c(d).

Proof. Let d ∈ ←→E0 with f := tailG∗0(d) = rightG0(d) and g := headG∗0(d) = leftG0(d). Let Pg be a shortest simple g-f∞-path in G∗0. Prepending d to Pg yields an f-f∞-walk, which contains a simple f-f∞-path P. As P ⊆ Pg ∪ {d} and the capacities are non-negative, φ(f) ≤ ∑_{d′∈P} c(d′) ≤ ∑_{d′∈Pg} c(d′) + c(d) = φ(g) + c(d). Now, if there is a shortest simple f-f∞-path Pf containing d, then Pf \ {d} contains a simple g-f∞-path, so φ(g) ≤ φ(f) − c(d) in this case, implying equality.

Thus, a circulation defined with respect to the shortest path potential is feasible with

respect to the capacities c and even yields a maximum flow.

Theorem 4.32. Define x0 ∈ RE by x0(e) := φ(rightG0(−→e)) − φ(leftG0(−→e)). Then x0 is a maximum s-t-flow with respect to the capacities c.

Proof. We extend x0 by defining x0(e0) := φ(f0) − φ(f∞). Then x0 = ∑_{f∈V∗0} φ(f)δf ∈ Scycle(G0) is a circulation. This implies that x0 is an s-t-flow in G, as removing e0 only changes the excess of s and t. By Lemma 4.31, x0 respects the capacities. Let C be any minimum s-t-cut in G. As C is a shortest f0-f∞-path, x0(d) = c(d) for all d ∈ C. Thus, x0 is maximum.
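Hassin's construction can be sketched as follows, under an assumed input format: each edge of G0 (including e0) is given with its left face, right face, and the capacities of its two darts. Since the dual dart of a primal dart runs from its right face to its left face at cost equal to its capacity (cf. Lemma 4.31), the shortest path potential is obtained by Dijkstra's algorithm from f∞ on the reversed dual arcs, and the flow is read off as in Theorem 4.32; all names here are illustrative.

```python
import heapq
from collections import defaultdict

def hassin_flow(edges, f_inf):
    # edges: list of (edge_id, left_face, right_face, cap_fwd, cap_rev) for G0.
    # The dual arc of the forward dart runs right_face -> left_face at cost
    # cap_fwd; the reverse dart has left/right swapped.  phi(f) is the length
    # of a shortest f-f_inf path in the dual, so we run Dijkstra from f_inf
    # over the REVERSED dual arcs.
    rev = defaultdict(list)
    for _, lf, rf, cf, cr in edges:
        rev[lf].append((rf, cf))   # reversed arc of the forward dart
        rev[rf].append((lf, cr))   # reversed arc of the reverse dart
    phi = {f_inf: 0}
    pq = [(0, f_inf)]
    while pq:
        dist, f = heapq.heappop(pq)
        if dist > phi.get(f, float("inf")):
            continue                      # stale heap entry
        for g, c in rev[f]:
            if dist + c < phi.get(g, float("inf")):
                phi[g] = dist + c
                heapq.heappush(pq, (dist + c, g))
    # Theorem 4.32: x0(e) = phi(right(e)) - phi(left(e))
    return {e: phi[rf] - phi[lf] for e, lf, rf, _, _ in edges}
```

As a toy instance, take two parallel s-t edges e1, e2 of capacities 3 and 2, with the return edge e0 of capacity 6 closing the construction; the three faces are f∞, f1 (between e1 and e2) and f0 (between e2 and e0). The sketch saturates e1 and e2, and the value on e0 equals the flow value 5.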

This shows that the maximum flow problem in s-t-planar graphs can be reduced to the single target shortest path problem11 in planar graphs. Henzinger, Klein, Rao and Subramanian showed that this problem can be solved in time O(|V |) [18]. Their algorithm omits the computation of the minimum reduced cost element, tolerating multiple updates of vertex labels in contrast to Dijkstra's algorithm. However, the number of those updates is bounded linearly in total by a sophisticated approach of partitioning the graph into regions, called an r-division. We simply apply their result without proof.

11 Or, equivalently and more commonly: the single source shortest path problem.


Theorem 4.33. The maximum flow problem on s-t-planar graphs can be solved in time

O(|V |).

Hassin’s method and the uppermost path algorithm

We want to investigate how the flow x0 computed by Hassin’s method corresponds to

a flow computed by the uppermost path algorithm. In fact, it will turn out that the

flow assigned by the shortest path potential sends the same flow as the uppermost path

algorithm, but additionally saturates all clockwise cycles in the network.

The following lemma is a slight extension to Hassin’s result. It can be seen as a dual

version of Lemma 4.29 and shows that the left/right relation in G corresponds to the order

of the cut lattice in G0.

Lemma 4.34. P ⊆ ←→E is a simple s-t-path in G if and only if P ∪ {−→e0} is a simple f0-f∞-cut in G∗0. Let P, Q ∈ P. Then P � Q w.r.t. the uppermost path order if and only if P ∪ {−→e0} � Q ∪ {−→e0} with respect to the cut order.

Proof. P is a simple s-t-path in G if and only if P ∪ {−→e0} is a simple cycle in G0. By cycle/cut duality, this is the case if and only if P ∪ {−→e0} is a simple cut in G∗0. The cut is an f0-f∞-cut as tailG∗0(−→e0) = f0 and headG∗0(−→e0) = f∞.

We show that the partial orders are equivalent. Let P, Q ∈ P and SP, SQ the vertex sets with P0 := P ∪ {−→e0} = Γ+(SP) and Q0 := Q ∪ {−→e0} = Γ+(SQ), respectively.

Suppose Q is the lowermost path in G[E(P ∪ Q)]. We show that this implies SQ ⊆ SP. By contradiction assume there is an f ∈ SQ \ SP. Without loss of generality we can assume f = tail_{G∗0}(d) = right_{G0}(d) for some d ∈ Q. As f ∉ SP, the cut P0 separates f from f0. As f ∈ SQ, the cut Q0 separates f from f∞. Thus, every f∞-f-path in G∗0 and also in G∗ has to use an edge of E(P ∪ Q), and so the infinite face is not merged with f in G[E(P ∪ Q)]. But this is a contradiction as f = right(d) and d ∈ Q, which is the lowermost path of G[E(P ∪ Q)].

Suppose SQ ⊆ SP. Let d ∈ Q and f = right_{G0}(d) ∈ SQ. There is an f0-f∞-path in G∗0 not using any edge in E(Q) and only using vertices in SQ. As SQ ⊆ SP, the path also uses no edge of E(P). So f∞, f0 and f are merged in G0[E(P ∪ Q)] = G[E(P ∪ Q)]. As this holds for every face to the right of Q, Q is the lowermost path in G[E(P ∪ Q)] and P ≼ Q.

Note that although the restriction of the partial order to simple cuts is equivalent to the left/right relation, the meet and join operations of the complete cut lattice do not coincide with those of the path lattice, as the meet or join of two simple cuts is not necessarily simple. Lemma 4.29 also yields a connection between the dual of the shortest path problem, called the cut packing problem, and the maximum flow problem. The dual solution of the two-phase greedy algorithm applied to the f0-f∞-cut lattice of G∗0 is a


packing of cuts respecting the capacity bounds of each dart. It corresponds, up to a certain degree, to the result of the uppermost path algorithm on the maximum s-t-flow problem in G. However, the crucial difference is that the cuts chosen by the first algorithm are not necessarily simple. Still, every cut in the packing corresponds to the union of a path in the uppermost path sequence and possibly some clockwise cycles. So the two-phase greedy shortest path algorithm tries to saturate the uppermost residual path in every iteration, while sending flow along some clockwise cycles as well. However, not all cycles are necessarily saturated, as the algorithm stops once an f0-f∞-path has been computed.

In view of this behaviour, Hassin's method goes even further by saturating all clockwise cycles in the graph. This interesting property of the flow assigned by shortest path potentials was pointed out by Borradaile and Klein in [3].

Lemma 4.35. There is no simple clockwise cycle in G that is residual with respect to x0.

Proof. Let C be a clockwise cycle in G. As C is a cut in the dual and the infinite face is on the exterior of C, there must be a dart d ∈ C that is on a shortest f-f∞-path for some interior face f of the cycle. Thus, x0(d) = φ(right(d)) − φ(left(d)) = c(d) by Lemma 4.31 and the cycle is not residual.
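The potential-to-flow rule x0(d) = φ(right(d)) − φ(left(d)) used here is simple enough to sketch in code. This is only an illustration: `phi`, `left`, `right` and the dart names are hypothetical stand-ins for the shortest path potential and the face maps of an embedding, not the thesis's formalism.

```python
def flow_from_potential(phi, left, right, darts):
    """Assign x0(d) = phi(right(d)) - phi(left(d)) to every dart d."""
    return {d: phi[right[d]] - phi[left[d]] for d in darts}

# toy instance: a single edge separating face f1 from the infinite face f0;
# phi is a made-up dual shortest-path potential
phi = {'f0': 0, 'f1': 2}
left = {'d': 'f0', 'rev_d': 'f1'}
right = {'d': 'f1', 'rev_d': 'f0'}
x0 = flow_from_potential(phi, left, right, ['d', 'rev_d'])
# antisymmetric by construction: x0 of the reverse dart is the negative
```

The antisymmetry x0(rev(d)) = −x0(d) falls out of the rule automatically, since reversing a dart swaps its left and right face.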

In fact, Hassin's method sends at least the amount of flow along every dart that the uppermost path algorithm sends, and in addition, saturates all clockwise cycles.

Lemma 4.36. Let y ∈ R^P be the flow computed by the uppermost path algorithm and x = ∑_{P∈P} y(P) δ_P. Then x(d) ≤ x0(d) for all d ∈ ←→E with x(d) > 0.

Proof. Let d ∈ ←→E with x(d) > 0. There is a simple left(d)-f∞-path Pl in the dual graph that only uses darts that are saturated by x, as otherwise d would not have occurred in an uppermost residual path. Let Pr be a shortest right(d)-f∞-path in G∗0 and let f be the first vertex (face) on Pr that is also on Pl. Then C := rev(Pl)[f, left(d)] ◦ rev(d) ◦ Pr[right(d), f] is a simple cycle in G∗0 and thus a simple cut in G0. As neither −→e0 nor ←−e0 is in C, the vertices s and t are on the same side of the cut and C is either a simple s-tail(d)-cut or a simple head(d)-t-cut. W.l.o.g. we assume the latter case. As s and t are on the same side of the cut, ∑_{d′∈C} x(d′) = 0. By the orientation lemma, flow sent by the uppermost path algorithm entering the head(d)-side of the cut can only leave it through the darts in Pr[right(d), f]. Thus

∑_{d′∈Pl[left(d),f]} x(d′) + x(d) = ∑_{d′∈Pr[right(d),f]} x(d′) ≤ ∑_{d′∈Pr[right(d),f]} c(d′)

and, as Pl and Pr both are shortest paths with respect to c,

φ(right(d)) = ∑_{d′∈Pr[right(d),f]} c(d′) + φ(left(d)) − ∑_{d′∈Pl[left(d),f]} c(d′)

implying x0(d) = φ(right(d)) − φ(left(d)) ≥ x(d).

4.2.3 The leftmost path algorithm of Borradaile and Klein

As the left/right relation is defined for plane graphs in general, the uppermost path algorithm for s-t-planar graphs straightforwardly extends to a leftmost path algorithm for planar graphs. Borradaile and Klein [3] showed how to implement the iterations of the algorithm efficiently and proved that the number of iterations is still bounded linearly in the number of edges, presenting an O(|V| log(|V|))-algorithm, which is the fastest known algorithm for maximum flow in planar graphs in general. Instead of calculating and updating the leftmost residual s-t-path in every iteration, it initially calculates the leftmost residual path from every vertex to the sink using a method called right first search due to Ripphausen-Lipa, Wagner and Weihe [29]. By doing so, the algorithm creates a leftmost path tree, whose complement is a spanning tree in the dual. Using a dynamic tree data structure [33], each flow augmentation, including the resulting updates on the trees, can be executed in logarithmic time. We shall describe the right first search procedure and prove that it actually yields the leftmost path, as there seems to be no formal proof in the literature so far (to our knowledge, the existence of a unique maximum path w.r.t. the left/right relation has not yet been formally proven). We will furthermore briefly present Borradaile and Klein's algorithm and state their proof of correctness, which is a beautiful application of cycle/cut duality and the interdigitating spanning trees property.

Right first search

The method of right first search was introduced by Ripphausen-Lipa, Wagner and Weihe [29]. It basically is a specialization of depth first search, with the additional restriction that the darts at every vertex are chosen in counterclockwise order. If we start at t with some dart whose right face is f∞ and after the computation reverse all darts in the right first search tree, we get an in-tree T rooted at t with the property that for every vertex v ∈ V \ {t} the v-t-path T[v] in the tree is the leftmost path in G. We briefly prove the correctness of the method.
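A minimal sketch of right first search under stated assumptions: the embedding is given as a rotation system, `ccw_next[d]` is the next dart with the same tail in counterclockwise order, `rev` and `head` are the usual dart maps, and `first_dart` is assumed to have f∞ on its right. All names and the toy instance below are invented for illustration.

```python
def right_first_search(t, first_dart, ccw_next, rev, head):
    """DFS from t choosing darts in counterclockwise order.

    Builds an out-tree rooted at t; reversing its parent darts yields
    the in-tree T described in the text, with T[v] a v-t-path.
    """
    parent = {}                       # vertex -> dart of the out-tree
    def visit(v, arrival):
        # at t start with first_dart; elsewhere start right after rev(arrival)
        d = first_dart if arrival is None else ccw_next[rev[arrival]]
        start = d
        while True:
            w = head[d]
            if w != t and w not in parent:
                parent[w] = d
                visit(w, d)
            d = ccw_next[d]
            if d == start:
                break
    visit(t, None)
    return {v: rev[d] for v, d in parent.items()}

# toy triangle on vertices t, a, b with made-up counterclockwise rotations
ccw_next = {'ta': 'tb', 'tb': 'ta', 'at': 'ab', 'ab': 'at', 'bt': 'ba', 'ba': 'bt'}
rev = {'ta': 'at', 'at': 'ta', 'tb': 'bt', 'bt': 'tb', 'ab': 'ba', 'ba': 'ab'}
head = {'ta': 'a', 'at': 't', 'tb': 'b', 'bt': 't', 'ab': 'b', 'ba': 'a'}
tree = right_first_search('t', 'ta', ccw_next, rev, head)
```

On this toy rotation system the returned in-tree maps each vertex to the first dart of its tree path to t; which paths are actually leftmost depends on the embedding, so the sketch only mirrors the traversal order, not the full formal setup.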

In fact, it is straightforward to check that right first search computes a v-t-path that is not right of any other residual v-t-path in G. Our result on the lattice structure of the left/right relation (Theorem 4.21) implies that this actually is the unique leftmost residual path, i.e. it is left of all other residual paths.

Lemma 4.37. The right first search method described above computes the unique maximum simple v-t-path w.r.t. the left/right relation for every v ∈ V \ {t}.

Proof. Assume by contradiction there is a residual simple v-t-path P that is strictly left of L := T[v]. Let φ := Φ(δ_P − δ_L). By Remark 4.5, we can once again assume that G = G[E(P ∪ L)]. Choose w ∈ V to be the last common vertex of P and L such that P[w, t] = L[w, t] = {d1, . . . , dk} but dP ≠ dL for the unique darts dP ∈ P and dL ∈ L with head(dP) = w = head(dL) (this is possible as P ≠ L and both paths end in t). By choice of dk through right first search we obtain left(dk) = f∞. As there can be no darts of P or L entering or leaving L[w, t] from the left, we conclude that left(d1) = f∞ as well. Now assume by contradiction π(d1) = rev(dP). Then the right first search algorithm would have visited tail(dP) before tail(dL). Moreover, P[v, tail(dP)] is a simple path from v to tail(dP) that does not intersect dP ◦ P[w, t]. Thus, the algorithm would not have constructed T[v] = L but P in this case, a contradiction. So π(d1) = rev(dL). This implies left(dL) = f∞ and φ(right(dL)) = 0 + (δ_P − δ_L)(dL) = −1, a contradiction.

We have shown that there is no v-t-path that is strictly left of L. As there is a unique leftmost path that is left of every other path due to Theorem 4.21, this path must be equal to L.

Note that restricting the search to darts with positive capacity will yield the leftmost residual path for every vertex (which exists due to the submodularity of the path lattice).

Implementation of the leftmost path algorithm

We now describe Borradaile and Klein's algorithm and cite their proof of correctness. W.l.o.g., we can assume that for every vertex v ∈ V, there is a simple v-t-path in G whose darts all have positive capacity (otherwise that vertex can be deleted). At initialization, the algorithm computes a shortest path potential in the dual for f∞ and obtains, similarly to Hassin's method (just without splitting up f∞), a circulation that saturates all clockwise cycles – hence called a leftmost circulation.

Algorithm 4.38 (Leftmost path algorithm of Borradaile and Klein).

Input: a planar graph G = (V,E), s, t ∈ V, c ∈ R^{←→E}_+
Output: an s-t-flow x in G respecting the capacities c that maximizes δ_s^T x

1: Compute an embedding π of G such that t is adjacent to f∞.
2: Compute a leftmost circulation x saturating all clockwise cycles.
3: Let T be the leftmost residual path t-directed in-tree.
4: Let T∗ be the dual f∞-directed in-tree in E \ E(T).
5: loop
6:   if T[s] is residual then saturate T[s] by augmenting x.
7:   Let d be the rootmost non-residual dart in T[s].
8:   if left(d) is a descendant of right(d) in T∗ then return x.
9:   Let e be the first dart of T∗[right(d)].
10:  Remove e from T∗ and insert d into T∗.
11:  Remove d from T and insert rev(e) into T.
12:  Replace all darts in T[head(e), tail(d)] with their reverses.
13: end loop

We first show that the in-tree property of T and T ∗ is maintained by the algorithm.

Lemma 4.39. Throughout the algorithm, T is an in-tree in G rooted at t and T∗ is an in-tree in G∗ rooted at f∞.

Proof. The invariant holds after initialization (note that E \ E(T) is a spanning tree in G∗ by the interdigitating spanning trees property of planar graphs, so T∗ can be constructed by choosing the darts directed towards f∞). In any later iteration, if fl := left(d) is not a descendant of fr := right(d), then e ∉ T∗[fl]. So for any descendant f of fr the path T∗[f, fr] ◦ d ◦ T∗[fl] is a simple f-f∞-path contained in T∗_new := (T∗ \ {e}) ∪ {d}. Thus, T∗_new still is an in-tree rooted at f∞ and the invariant is maintained.

Now let v be the youngest common ancestor of fl and fr in T∗, i.e., the first vertex on T∗[fr] that also is on T∗[fl]. The dual simple cycle C := T∗[fl, v] ◦ rev(T∗[fr, v]) ◦ d contains rev(e) as fl is not a descendant of fr. Thus C as a cut in G separates head(e) from t. This implies that head(e) is a descendant of tail(d) in T, as d is the only dart of T in the cut. We now show that the new primal dart set T_new as constructed by the algorithm actually is an in-tree as well. As E(T_new) is the complement of E(T∗_new), the edge set is a spanning tree in G. In T, every vertex but t has exactly one outgoing dart. After reversing T[head(e), tail(d)] in T, the vertex head(d) suddenly has no outgoing darts in T and tail(d) has two outgoing darts in T (the number of outgoing darts of all other vertices remains unchanged). Thus, after inserting rev(e) and removing d, the in-tree property is maintained.

Before we can show the correctness, we need only one other straightforward invariant.

Lemma 4.40. Throughout the algorithm, every dart d ∈ T∗ is non-residual with respect to x.

Proof. Assume by contradiction there is a residual dart d ∈ T∗ at initialization. Then P := d ◦ T[head(d)] is a residual tail(d)-t-path. Let v be the first vertex on T[head(d)] that is also on T[tail(d)]. If v = tail(d), then d ◦ T[head(d), v] is a residual simple cycle, which is clockwise as d is the only dart of the cycle in the path T∗[right(d)] to the infinite face12 – a contradiction as all clockwise cycles were saturated in the initialization. So v ≠ tail(d) and P is a simple residual tail(d)-t-path. It is easy to check that δ_{T[tail(d)]} − δ_P induces the simple cycle T[tail(d), v] ◦ rev(T[head(d), v]) ◦ rev(d), which must be counterclockwise as it contains rev(d), with d again being the only dart of the cycle that is in T∗[right(d)]. However, it must also be clockwise as T[tail(d)] is left of P – yet another contradiction. So the invariant is true at initialization, and maintained by the algorithm by choice of d.

12 It also is important that the path does not cross the cycle/cut in the other direction, but this follows directly from E(T) ∩ E(T∗) = ∅, so we will not further mention it when using the argument.

Deducing the maximality of x is now easy by observing that the cycle in G∗ induced by d at termination is a minimum cut in G – a great example of how duality in planar graphs can be used for designing efficient algorithms.

Theorem 4.41. When the algorithm terminates, x is a maximum flow. Furthermore, T∗[left(d), right(d)] ◦ d is a minimum cut.

Proof. C := T∗[left(d), right(d)] ◦ d is a simple cycle in G∗ and thus a simple cut in G. Moreover, C is an s-t-cut, as d is the only dart of C in the s-t-path T[s]. As all darts of T∗[left(d), right(d)] are non-residual and d is non-residual, x is a maximum flow and C is a minimum cut by the max-flow/min-cut theorem.

The leftmost path algorithm constructs a chain

We are not going to discuss the running time of the algorithm. The main idea of the proof given in [3] is to show that an arc cannot be included in an augmentation once flow was augmented along the corresponding anti-arc. Instead, we will show that the sequence of paths constructed by the algorithm actually comprises a chain in the path lattice.

Lemma 4.42. Let T1, . . . , Tk be the leftmost residual path trees occurring in the course of the leftmost path algorithm. Then T1[s] ≼ . . . ≼ Tk[s].

Proof. For i ∈ {1, . . . , k} let T∗_i be the dual tree constructed by the algorithm in iteration i and let d_i, e_i be the corresponding darts exchanged in the trees. Furthermore, define P_i := T_i[s] to be the augmenting path in the respective iteration. As removing d_i from T_i disconnects s and t in the tree, P_{i+1} must contain rev(e_i). We represent the circulation δ_{P_i} − δ_{P_{i+1}} with respect to the basis of the cycle space induced by T_i (cf. Theorem 2.28) and conclude that it corresponds to the unique simple cycle C_{e_i} in E(T_i ∪ {e_i}) that uses e_i, as (δ_{P_i} − δ_{P_{i+1}})(e_i) = 1 and every other edge in the support of the circulation is in E(T_i) (cf. Corollary 2.30). Since T∗_i[right(e_i)] is a right(e_i)-f∞-path in the dual graph that intersects the cycle only in e_i, the exterior must be to the left of the cycle and the circulation must be clockwise. Thus P_i ≼ P_{i+1}.

There is hope that this insight might lead to a simplified proof of the running time of the algorithm based solely on the structure of the path lattice, e.g., by finding an upper bound on the length of chains in this lattice.


Remark 4.43. We can easily deduce a quadratic bound on the number of iterations from a simple consideration of the face potentials. Let φ_i := Φ(δ_{P_1} − δ_{P_i}). Since φ_{i+1} = φ_i + Φ(δ_{P_i} − δ_{P_{i+1}}) ≥ φ_i with strict inequality for at least one face, the sum of the potentials of all faces must increase by at least 1 in every iteration. Thus ∑_{f∈V∗} φ_i(f) ≥ i − 1. Furthermore, φ_i(f) ≤ |E|, as this is the maximum length of a simple f-f∞-path in G∗. As both the number of faces and the number of edges are linearly bounded in the number of vertices, this immediately implies that both the length of a chain in the path lattice and the number of iterations of the leftmost path algorithm are bounded by O(|V|²). A more careful analysis of this approach might yield the linear bound proven in [3].

4.3 Weighted flows

We have shown that the maximum flow problem in s-t-planar graphs actually is a packing problem on a consecutive and submodular lattice and that the uppermost path algorithm corresponds to the application of the two-phase greedy algorithm to this packing problem. However, the results in Chapter 3 allow for a supermodular reward function on the lattice elements, which we have not yet considered. Translated to the maximum flow problem, this means we can introduce a reward value or weight for every simple s-t-path in the graph. This leads to the maximum weighted flow problem and its dual, the minimum weighted cut problem. Weighted flows have also been investigated by Martens and McCormick [27] in the more general setting of abstract flows and using a different notion of supermodularity. In this section, we present the problem formulation and apply our previous results to it. Furthermore, we will give a motivating example (temporally repeated flows) and characterize those instances of the problem for which the value of the solution only depends on the flow on the arcs and not on the particular path decomposition. For this case, we will also give a criterion for increasing monotonicity of the reward function.

Problem 4.44 (Maximum weighted flow/minimum weighted cut problem).

Given: a graph G = (V,E), s, t ∈ V, c : ←→E → R_+, r : P → R
Task: Find optimal solutions to

(MWF)  max  ∑_{P∈P} r(P) y(P)
       s.t. ∑_{P∈P: d∈P} y(P) ≤ c(d)   ∀d ∈ ←→E
            y(P) ≥ 0                   ∀P ∈ P

and

(MWC)  min  ∑_{d∈←→E} c(d) x(d)
       s.t. ∑_{d∈P} x(d) ≥ r(P)   ∀P ∈ P
            x(d) ≥ 0              ∀d ∈ ←→E


Figure 4.7: [Figure: a network with capacities c ≡ 1 and three paths P1, P2, P3, where r(P1) = r(P2) = r(P3) = 1 and r(P) = 0 otherwise.] The weighted maximum flow problem with integral capacities does not always yield an integral solution. In the depicted network, y(P1) = y(P2) = y(P3) = 1/2 and y(P) = 0 for all other P ∈ P is an optimal solution, despite the integral capacities.

where P is the set of all simple s-t-paths in G.

Since we have proven the consecutive and submodular lattice structure of the left/right order in s-t-plane graphs, we can now directly transfer Hoffman and Schwartz' total dual integrality result (Theorem 3.15) to the weighted maximum flow problem in s-t-planar graphs with supermodular reward functions. For the case of monotone increasing functions, our presentation of the uppermost path algorithm as an application of the two-phase greedy algorithm (cf. Section 4.2) even yields polynomial solvability. Note that the integrality result does not hold for arbitrary weights, as the example in Figure 4.7 shows.
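The fractional-vs-integral phenomenon of Figure 4.7 can be checked by brute force. The incidence structure below is an abstract stand-in (three unit-reward paths, each pair sharing one unit-capacity dart), not the exact network of the figure; all names are invented for illustration.

```python
from itertools import product

# each pair of paths shares one dart of capacity 1:
# P1/P2 share d12, P1/P3 share d13, P2/P3 share d23
cap = {'d12': 1, 'd13': 1, 'd23': 1}
incidence = {'P1': ['d12', 'd13'], 'P2': ['d12', 'd23'], 'P3': ['d13', 'd23']}

def value(y):
    """Total reward of packing y, or None if a dart capacity is violated."""
    load = {d: 0.0 for d in cap}
    for P, yP in y.items():
        for d in incidence[P]:
            load[d] += yP
    if any(load[d] > cap[d] + 1e-9 for d in cap):
        return None
    return sum(y.values())            # every path has reward 1

frac = value({'P1': 0.5, 'P2': 0.5, 'P3': 0.5})          # feasible fractional packing
best_int = max(v for v in (value(dict(zip(incidence, ys)))
                           for ys in product([0, 1], repeat=3))
               if v is not None)                          # best integral packing
```

The fractional packing achieves 3/2, while no integral packing beats 1, mirroring the gap described for Figure 4.7.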

Corollary 4.45. If G is s-t-planar and r is supermodular w.r.t. the path lattice for an s-t-planar embedding of G, then (MWC) is totally dual integral and (MWF) has an integral solution whenever c is integral. If r is furthermore non-negative and monotone increasing with respect to the left/right order, the uppermost path algorithm computes optimal solutions to (MWF) and (MWC) in O(|V| log(|V|)).

Example 4.46 (Temporally repeated flows). As a motivating example for weighted flows we want to consider the problem of sending a maximum temporally repeated flow through a network. The notion of temporally repeated flows arises in the theory of flows over time. The idea of flows over time is to define flow as a dynamic rate that may vary over time instead of a single static value and to impose transit times τ : E → R_+ on the arcs, i.e., the time needed by flow particles to travel along an arc. Furthermore, there usually is a time horizon T ∈ R_+ up to which any flow must have left the network. We will not formally introduce flows over time in all their generality but restrict ourselves to temporally repeated flows instead. A concise introduction to flows over time can be found in [32]. A temporally repeated flow is a flow over time obtained from the path decomposition of a


static flow by sending flow along each path as long as possible such that the flow reaches its destination within the time horizon.13 Ford and Fulkerson [11] have shown that there always is a maximum flow over time that is a temporally repeated flow. As the time that a flow particle needs for traversing the path P ∈ P is ∑_{d∈P} τ(d), we can send flow for T − ∑_{d∈P} τ(d) time units along that path. Thus, the maximum temporally repeated flow problem corresponds to the maximum weighted flow problem with reward function r(P) = T − ∑_{d∈P} τ(d). The interpretation for the dual minimum weighted cut problem is the following in this case: given a minimum cut over time, one can obtain an optimal solution of (MWC) by setting each x(d) to the amount of time that the dart d is in the cut.14 If P is equipped with the order of a submodular lattice, e.g., the left/right order in a plane graph, the restriction of the reward function r to directed paths is supermodular, as ∑_{d∈P} τ(d) is submodular for non-negative τ.
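The reward r(P) = T − ∑_{d∈P} τ(d) and the equivalent arc-based expression of the resulting value are easy to verify numerically; the time horizon, transit times, paths and decomposition below are made-up values for illustration.

```python
T = 10                                    # time horizon
tau = {'a': 2, 'b': 3, 'c': 1}            # transit times on the arcs
paths = {'P1': ['a', 'b'], 'P2': ['a', 'c']}
y = {'P1': 0.5, 'P2': 1.5}                # a path decomposition

def reward(P):
    """r(P) = T - sum of transit times along P."""
    return T - sum(tau[e] for e in paths[P])

# weighted value computed path by path
path_value = sum(reward(P) * y[P] for P in paths)

# the same value from the induced arc flow: T * |y| - sum tau(e) x(e)
x = {e: sum(y[P] for P in paths if e in paths[P]) for e in tau}
arc_value = T * sum(y.values()) - sum(tau[e] * x[e] for e in tau)
```

Both computations agree, which is exactly the decomposition-independence property of temporally repeated flows.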

Independence of decomposition and arc weighted flows

The above example points to an interesting observation. Consider again the maximum flow over time problem. If y ∈ R^P is a decomposition of a flow x ∈ R^E_+, then

∑_{P∈P} (T − ∑_{d∈P} τ(d)) y(P) = T ∑_{P∈P} y(P) − ∑_{d∈←→E} τ(d) (∑_{P∈P: d∈P} y(P))
                                = T ∑_{P∈P} y(P) − ∑_{e∈E} τ(e) x(e).

Thus, the weighted value of the flow does not depend on the decomposition we choose – a fact well-known in the theory of flows over time. It is easy to see that this observation can be generalized to any instance of the weighted maximum flow problem that allows expressing the path weights by weights on the arcs.15 Unfortunately, not all instances of the maximum weighted flow problem have this property.

Example 4.47. The optimal solution of the (MWF) instance in Figure 4.7 containing three paths with total reward value 3/2 leads to arc flow values of 1 on each upper and 1/2 on each lower arc. Yet, this flow can also be decomposed into flow on the uppermost and the lowermost path of the graph, both of which have reward value 0.

A dependency of the weighted value of a flow on the decomposition is quite undesirable, as it severely reduces our capability to adapt arc based flow algorithms for solving maximum weighted flow problems. Even worse, it already is NP-hard to determine a maximum weight decomposition of a given flow.

13 By setting the capacities of anti-arcs to 0, we can assume our flow only uses arcs, which have non-negative transit time, so we do not have to take care of "time traveling".
14 Consult [32] for a formal definition of cuts over time. Unfortunately, the converse of the statement does not hold, i.e., not every solution of (MWC) corresponds to a cut over time.
15 Note that it is possible to model constants that are added to all path weights as arc weights (e.g., the T in case of the maximum flow over time problem) by adding this constant to all outgoing darts of s.

Theorem 4.48. The following problem is NP-hard: given a flow x and r : P → R, find a path decomposition y ∈ R^P of x that maximizes ∑_{P∈P} r(P) y(P).

Proof.16 We provide a reduction from the partition problem, in which we are given n natural numbers a1, . . . , an ∈ N and have to decide whether there is a subset I ⊆ N := {1, . . . , n} such that ∑_{i∈I} a_i = ∑_{i∈N\I} a_i. Apparently, this is only possible if ∑_{i∈N} a_i = 2B for some B ∈ N. For the reduction we use a graph with 2n edges E := {e1, . . . , en, f1, . . . , fn} and vertices s := {−→e1, −→f1}, t := {←−e_n, ←−f_n} and v_i = {←−e_i, ←−f_i, −−→e_{i+1}, −−→f_{i+1}} for i ∈ {1, . . . , n − 1}, i.e., each vertex except t has two outgoing arcs to the next vertex. There are 2^n paths, each path P corresponding one-to-one to a subset I(P) := {i ∈ N : −→e_i ∈ P} such that N \ I(P) = {i ∈ N : −→f_i ∈ P}. We define the reward function by r(P) = max{B + 1 − ∑_{i∈I(P)} a_i, 0} · max{B + 1 − ∑_{i∈N\I(P)} a_i, 0} for each P ∈ P. Clearly, r(P) = 1 if and only if ∑_{i∈I(P)} a_i = B, and r(P) = 0 in any other case. Let x be the flow with x(e) = 1 for all e ∈ E. If there is a decomposition y of x such that ∑_{P∈P} r(P) y(P) > 0, then there must be a path P with r(P) > 0, thus ∑_{i∈I(P)} a_i = B. Conversely, if I ⊆ N with ∑_{i∈I} a_i = B, then the paths {−→e_i : i ∈ I} ∪ {−→f_i : i ∈ N \ I} and {−→e_i : i ∈ N \ I} ∪ {−→f_i : i ∈ I} yield a decomposition of x with reward value 2. Thus the partition instance is a "yes"-instance if and only if there is a path decomposition of x with positive reward value.
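The reward function from the reduction can be checked on a small made-up instance: it evaluates to 1 exactly on those subsets that split the numbers evenly, and to 0 everywhere else.

```python
from itertools import chain, combinations

a = [3, 1, 4, 2, 2]                       # made-up partition instance, sum = 12
N = range(len(a))
B = sum(a) // 2                           # B = 6

def r(I):
    """Reward of the path P with I(P) = I, as defined in the reduction."""
    s = sum(a[i] for i in I)
    return max(B + 1 - s, 0) * max(B + 1 - (sum(a) - s), 0)

subsets = list(chain.from_iterable(combinations(N, k) for k in range(len(a) + 1)))
witnesses = [I for I in subsets if r(I) == 1]
# r picks out exactly the subsets that realize the even split
```

Any subset whose sum misses B zeroes out one of the two factors, so a positive-reward path decomposition exists exactly for "yes"-instances.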

Accordingly, we are more interested in those instances of the maximum weighted flow problem for which the value of the objective function only depends on the flow on the arcs. Moreover we would also prefer instances for which we are able to express the path weights in terms of arc weights, as seen for the maximum temporally repeated flow problem above. The following theorem states that these two properties actually go hand in hand.

Theorem 4.49. Let G = (V,E) be a graph, s, t ∈ V and r : P → R. Then the following two statements are equivalent.

(1) For every flow x ∈ R^E there is a value R(x) ∈ R such that ∑_{P∈P} r(P) y(P) = R(x) for every generalized path decomposition y ∈ R^P_+ of x.

(2) There is an r ∈ R^E such that r(P) = ∑_{d∈P} r(d) for all P ∈ P.

Proof. The equivalence is a direct implication of the fundamental theorem of linear algebra, which states that a system of linear equalities Ax = b with A ∈ R^{m×n}, b ∈ R^m has a solution if and only if y^T A = 0 implies y^T b = 0 for all y ∈ R^m (cf. [15], also see Corollary 3.1b in [30] for a proof). We show that (1) is equivalent to

(3)  ∑_{P∈P: −→e∈P} y(P) − ∑_{P∈P: ←−e∈P} y(P) = 0  ∀e ∈ E   ⟹   ∑_{P∈P} r(P) y(P) = 0

for all y ∈ R^P, which is equivalent to (2) by the fundamental theorem.

16 The proof is a straightforward adaption of a reduction of partition to the minimum cost flow over time problem [32], which can also be modeled as a weighted maximum flow problem.


(1) ⇒ (3): Let y ∈ R^P fulfill the premise of (3). We can write y = y⁺ − y⁻ for some y⁺, y⁻ ∈ R^P_+. By the premise of (3), y⁺ and y⁻ are generalized path decompositions of the same flow x(e) := ∑_{P∈P: −→e∈P} y⁺(P) − ∑_{P∈P: ←−e∈P} y⁺(P) = ∑_{P∈P: −→e∈P} y⁻(P) − ∑_{P∈P: ←−e∈P} y⁻(P). Thus ∑_{P∈P} r(P) y(P) = ∑_{P∈P} r(P) y⁺(P) − ∑_{P∈P} r(P) y⁻(P) = R(x) − R(x) = 0.

(3) ⇒ (1): Let y1, y2 ∈ R^P be two generalized path decompositions of the same flow x. Then y1 − y2 fulfills the premise of (3), thus ∑_{P∈P} r(P)(y1(P) − y2(P)) = 0, implying that the two decompositions have equal value.

The case described in Theorem 4.49 bears a certain similarity to the minimum cost flow problem. It is thus tempting to believe that after restricting to these instances the problem could be solved by a polynomially sized LP formulation like the following.

max  ∑_{e∈E} r(e) x(e)
s.t. ∑_{d∈v} x(d) = 0   ∀v ∈ V \ {s, t}
     ∑_{d∈s} x(d) ≥ 0
     x(d) ≤ c(d)        ∀d ∈ ←→E

However, there is a crucial difference between the arc weighted case of maximum weighted flow and the minimum cost flow problem. In contrast to the minimum cost flow problem – and the LP stated above – the maximum weighted flow problem does not allow for sending flow along cycles, i.e., a feasible solution of (MWF) formulated as flow on arc values must always bear a path decomposition.17 Indeed, the maximum weighted flow problem with weights on the arcs still is NP-hard.
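A toy instance (made-up numbers) of this difference: the LP objective happily pays for a circulation on a cycle disjoint from every s-t-path, while a path-decomposable solution cannot collect that reward.

```python
# a conservation-respecting arc flow: one unit s->t plus a unit
# circulation on the cycle u->v->u
x = {('s', 't'): 1, ('u', 'v'): 1, ('v', 'u'): 1}
r = {('s', 't'): 1, ('u', 'v'): 5, ('v', 'u'): 5}

lp_objective = sum(r[a] * x[a] for a in x)   # the LP counts the cycle's reward
# but the only simple s-t path is the single arc s->t, so any path
# decomposition of an s-t flow collects at most the reward of that arc
path_value = r[('s', 't')] * x[('s', 't')]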

Theorem 4.50. The maximum weighted flow problem is NP-hard, even if it is restricted to instances with r(P) = ∑_{d∈P} r(d) for some r ∈ R^E.

Proof. We give a reduction from the Hamiltonian path problem, which asks for a simple path containing all vertices. Given a graph G = (V,E), we introduce four new vertices s, s′, t, t′ with an arc from s to s′ and an arc from t′ to t, as well as two new arcs for every vertex v ∈ V, one from s′ to v and one from v to t′, respectively. Setting the capacity of all arcs to 1 and the capacity of all anti-arcs to 0, and setting the weight of all edges to 1, we obtain an instance of maximum weighted flow that has optimum value n + 3 if and only if there is a simple s-t-path containing n − 1 of the edges from the original graph – a Hamiltonian path in G.

17 Note that a feasible solution of (MWF) can induce flow on cycles though, but only as long as each part of the cycle is part of a flow-carrying path.
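A sketch of the reduction on a toy instance. Since the arc from s to s′ has capacity 1, the optimum of the constructed instance equals the maximum reward of a single simple s-t-path, so exhaustive search over simple paths suffices here; the adjacency encoding (treating each original edge as traversable in both directions) is a simplification invented for illustration.

```python
def best_path_reward(adj, s, t):
    """Maximum number of darts on a simple s-t path (unit weights), by DFS."""
    best = 0
    def dfs(v, seen, length):
        nonlocal best
        if v == t:
            best = max(best, length)
            return
        for w in adj.get(v, []):
            if w not in seen:
                dfs(w, seen | {w}, length + 1)
    dfs(s, {s}, 0)
    return best

# made-up original graph: a path on three vertices (has a Hamiltonian path)
G = {1: [2], 2: [1, 3], 3: [2]}
n = len(G)
adj = {'s': ["s'"], "s'": list(G), "t'": ['t'], 't': []}
for v, nbrs in G.items():
    adj[v] = list(nbrs) + ["t'"]
optimum = best_path_reward(adj, 's', 't')
```

For this toy graph the search finds a path of reward n + 3 = 6 (s → s′ → 1 → 2 → 3 → t′ → t), matching the criterion of the proof.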


The non-negative clockwise cycle criterion

Finally, we want to return to the uppermost path algorithm and the maximum weighted flow problem in s-t-planar graphs. Our algorithm requires a supermodular and monotone increasing weight function. For the situation in Theorem 4.49, we briefly want to investigate which conditions are sufficient for the arc weights to fulfill these requirements. As we have seen in the temporally repeated flow example, every function r(P) = R − ∑_{d∈P} r(d) for some R ∈ R and r ∈ R^E_+ is supermodular on the restriction of the path lattice to directed paths. Increasing monotonicity can be characterized by the absence of negative weight clockwise cycles (with the exception of those cycles no dart of which is on a simple s-t-path).

Theorem 4.51. Let G = (V,E) be an s-t-plane graph and r(P) = ∑_{d∈P} r(d) for some r ∈ R^E. Then r is monotone increasing if and only if there is no clockwise simple cycle C in G such that there is a path P ∈ P with P ∩ C ≠ ∅ and ∑_{d∈C} r(d) < 0.

Proof.

"⇒": Let C be a clockwise cycle with ∑_{d∈C} r(d) < 0 such that there is R ∈ P with R ∩ C ≠ ∅. Let v be the first vertex on R that is also on C and let w be the last vertex of R that is also on C. Note that v ≠ w as R ∩ C ≠ ∅. Thus P := R[s, v] ◦ C[v, w] ◦ R[w, t] and Q := R[s, v] ◦ rev(C[w, v]) ◦ R[w, t] both are simple s-t-paths. Since δ_P − δ_Q = δ_C, the path P is left of Q but r(P) = r(Q) + ∑_{d∈C} r(d) < r(Q).

"⇐": Assume there are P, Q ∈ P with P ≼ Q but r(P) < r(Q). Then δ_P − δ_Q can be written as ∑_{i=1}^{k} δ_{C_i} for some clockwise simple cycles C1, . . . , Ck ⊆ P ∪ rev(Q). Note that every C_i contains at least one dart of P. As ∑_{d∈P} r(d) < ∑_{d∈Q} r(d), at least one of the cycles C_i must have negative weight, i.e., ∑_{d∈C_i} r(d) < 0.


5 Conclusion and outlook

In the present work, we have established a connection between packing problems on lattice polyhedra and the maximum flow problem in planar graphs. We briefly summarize our results and point to possible directions of future research.

The improved two-phase greedy algorithm

In Section 3.2, we have seen how Frank’s two-phase greedy algorithm [13] solves covering

and packing problems on submodular and consecutive lattices if the weight function on

the lattice elements is supermodular and monotone increasing. A new fast implementation

of this algorithm was provided, performing all non-oracle operations in O(|E| log(|E|)). The technique used to achieve this running time incorporates a linear space encoding of

the elements in the support of the dual solution.

Outlook. Our discussion of Frank’s algorithm was based on an article by Faigle and Peis

[9], which also contains a similar but slightly more involved algorithm for supermodular

lattice polyhedra. In view of the improved implementation presented in this thesis, it

is interesting to investigate whether the same concepts can be adapted for the latter

algorithm to achieve a comparable improvement in running time.

The path lattice of a plane graph

In Section 4.1, we have comprehensively discussed the left/right relation ([22], [35] and

[23]) on the set of s-t-paths in a plane graph and the lattice structure it induces on the

path set. A major result of this thesis states that the left/right relation on the set of s-t-

paths in a plane graph induces a submodular lattice. We have also given a more intuitive

characterization of this lattice in s-t-plane graphs based on the idea of uppermost paths

by Ford and Fulkerson and shown that the lattice is consecutive in this case. We further

pointed out the differences between the left/right relation in plane graphs in general and s-

t-plane graphs. In the latter case, many additional (and by intuition expected) properties

hold. In the general case, however, there exist graphs for which no partial order on the path set yields a lattice that is both consecutive and submodular.

Outlook. As the left/right relation is used by several planarity exploiting algorithms like

Klein’s multiple source shortest path algorithm [23] and the leftmost path algorithm of

95

Borradaile and Klein [3], the structural insights on the path lattice might influence our view of these algorithms or even lead to new algorithmic results.

The negative result denying the existence of a submodular and consecutive lattice in

the example graph (cf. Figure 4.5 and Theorem 4.22) can very likely be generalized to a

negative result for every planar graph that is not s-t-planar, i.e., no such graph can be

equipped with a consecutive and submodular lattice. The key idea is to use Kuratowski’s

theorem [26] to show that such a non-s-t-planar graph contains as a minor either our very elementary example graph or a graph on five vertices with edges between all pairs of vertices except s and t. The latter graph has the same negative property as the example graph.

The uppermost path algorithm

In Section 4.2, we applied the results of the preceding sections to the path formulation of the maximum flow problem and obtained the uppermost path algorithm of Ford

and Fulkerson as a special case of the two-phase greedy algorithm. In order to provide

an implementation of the uppermost path algorithm, we have shown how to construct a

sequence of uppermost residual paths in overall linear time through simultaneous bidirec-

tional traversal of a face boundary. This simple technique combined with the improved

implementation of the general two-phase greedy algorithm resulted in a relatively easy

O(|V | log(|V |))-implementation of the algorithm. Although outperformed by the linear time

shortest path approach in s-t-planar graphs and less general than the leftmost path al-

gorithm of Borradaile and Klein, the presented implementation still yields a useful ap-

plication related to flow decomposition and can be implemented without the use of any

non-basic data structures. Using the technique of representing chains in consecutive struc-

tures by difference lists, we can encode path decompositions of s-t-planar flows in linear

space.

Outlook. Besides the uppermost path algorithm we have also given a short introduction to

the leftmost path algorithm of Borradaile and Klein and proved that it actually constructs a chain with respect to the left/right relation. We suggest an alternative approach to proving the algorithm's running time by an argument based on the face potentials (cf.

Remark 4.43). If successful, this might provide interesting insight into the structure of

the path lattice as well. Furthermore, a better understanding of how this algorithm and

the path lattice actually interact might give rise to a possible extension of the general

two-phase greedy algorithm to certain non-consecutive structures. An idea for such an

approach could be the introduction of “backward” elements to enable the algorithm to

take back a part of its previous decisions, much like flow algorithms do when using the

residual network.

In the context of planar maximum flow computations, we also want to mention a difficulty

pointed out by Borradaile and Klein in [3]. Maximum flow problems with multiple sinks

96

or with vertex capacities in general graphs can easily be reduced to the standard case. In

planar graphs however, these reductions might destroy planarity. Thus, the planar maxi-

mum flow algorithms at hand cannot be applied. The search for new planarity exploiting

algorithms for these problems promises to be an interesting challenge. It might also be

helpful to investigate to what extent the path lattice can be extended in this direction. For

the vertex capacitated problem, the case of s-t-planar graphs might be of special interest.

The maximum weighted flow problem

The interpretation of the maximum flow problem as a packing problem allows for weights

on the paths, giving rise to weighted flows, which have been discussed in Section 4.3.

Our earlier results immediately implied that the maximum weighted flow problem can

be solved efficiently by the uppermost path algorithm if the weights are supermodular

and monotone increasing with respect to the path lattice. Our brief investigation of this

problem also led to the interesting insight that those instances for which the weighted

value of the flow does not depend on the path decomposition can be characterized by

the existence of arc weights. For a planar graph, monotonicity of the weight function is

equivalent to the absence of negative weight clockwise cycles in this case.

Outlook. Although we characterized monotone increasing weight functions, a more detailed look at the supermodularity condition is needed. This might provide better insight into how restrictive the requirements actually are.

97

Notation index

Symbol Description Page

⊂ proper subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

⊆ subset or equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

2E power set {S : S ⊆ E} of the set E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

S ∪ T disjoint union, implies S ∩ T = ∅ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

← assignment operator in pseudo-code listings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

f ◦ g composition of two functions, (f ◦ g)(x) = f(g(x)) . . . . . . . . . . . . . . . . . . . . . .

fk k-fold composition of the function f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1S incidence vector of the set S, i.e., 1S(e) = 1 if e ∈ S, and 0 otherwise . . . . .

x ≤ y componentwise less or equal, x(e) ≤ y(e) for all e ∈ E (with x, y ∈ R^E) . . .

support(x) the support {e ∈ E : x(e) ≠ 0} of the vector x ∈ R^E . . . . . . . . . . . . . . . . . . . .

span(S) the linear hull {∑_{v∈S} λ(v)v : λ ∈ R^S} of the vectors in S . . . . . . . . . . . . .

log(x) logarithm of x with basis 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

R+ set of non-negative real numbers {x ∈ R : x ≥ 0} . . . . . . . . . . . . . . . . . . . . . . . . .

f = O(g) O-notation for “f is asymptotically bounded by g” . . . . . . . . . . . . . . . . . . . . . . 9

←→E set of darts E × {−1, 1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

rev(d) reverse of the dart d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

tailG(d) tail of the dart d in the graph G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

headG(d) head of the dart d in the graph G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

−→E, ←−E set of arcs E × {1} and set of anti-arcs E × {−1} in a graph . . . . . . . . . . . 14

−→e ,←−e arc (e, 1) and anti-arc (e,−1) of the edge e . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

E(D) set of edges of the darts in D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

G[F ] subgraph of G with edge set F . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

P ◦Q concatenation of two paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15

P [v, w] subpath of a simple path or cycle P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

v →D w There is a v-w-path contained in D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

v ↔D w There is a v-w-path and a w-v-path contained in D. . . . . . . . . . . . . . . . . . . . 17

T [v] the unique v-r-path (r-v-path) in an in-tree (out-tree) with root r . . . . . . 18

Γ+(S) set of darts leaving S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

δd incidence vector of a dart in the arc space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

δD ∑_{d∈D} δd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

δ(d) δd^T δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

98

D(δ) {d ∈ ←→E : δ(d) = 1} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Scut(G) cut space of G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Scycle(G) cycle space of G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

π combinatorial embedding of a graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

π∗ dual embedding π ◦ rev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

G∗ dual graph of G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .22

V∗ set of faces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

rightG(d) tailG∗(d), the face to the right of d in G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

leftG(d) headG∗(d), the face to the left of d in G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

f∞ infinite face . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

Φ(δ) face potential of the circulation δ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

P set of simple s-t-paths in G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

C set of simple cycles in G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

� a partial order in general, the left/right relation in particular . . . . . . . 40, 59

∧,∨ meet and join operator in lattices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

L[S] the sublattice {L ∈ L : L ⊆ S} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

L+, L− symmetric difference of elements chosen by two-phase greedy algorithm 53

S+ faces with positive potential, {f ∈ V∗ : Φ(δP − δQ)(f) > 0} . . . . . . . . . . . . . . . . . . 64

S− faces with negative potential, {f ∈ V∗ : Φ(δP − δQ)(f) < 0} . . . . . . . . . . . . . . . . . 64

δ∧ δP − ∑_{f∈S+} Φ(δP − δQ)(f) δf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

δ∨ δP − ∑_{f∈S−} Φ(δP − δQ)(f) δf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

D∧, D∨ D(δ∧), D(δ∨) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

φl(d) φ(left(d)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

φr(d) φ(right(d)) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .66

99

Index

adjacency

of vertices, 14

of vertices and faces, 24

algorithm

of Borradaile and Klein, 86

of Ford and Fulkerson, 38

ancestor, 18

anti-arc, 14

arborescence, 17

arc, 14

arc space, 19

bridge lemma, 61

capacity, 33

chain, 40

change of tracks lemma, 68

circulation, 20

clockwise, 27

comparable, 40

complementary slackness, 11

connected component, 17

consecutivity, 41

contraction, 17

counterclockwise, 27

covering problem, 44

cut, 18

capacity of, 37

simple, 18, 19

s-t-cut, 37

cut lattice, 43

cut space, 20

cycle, 15

residual, 33

simple, 15

cycle space, 20

cycle/cut duality, 24

cycle/cut orthogonality, 20

dart, 14

anti-parallel, 14

entering a path, 28

hidden, 66

incoming, 14

leaving a path, 28

outgoing, 14

P -dart, 66

parallel, 14

Q-dart, 66

residual, 33

reverse, 14

solid, 66

descendant, 18

duality

of cycles and cuts, 24

of deletion and contraction, 28

of linear programs, 11

of shortest path and min cut, 81

edge, 14

contracted, 17

exterior, 26

interior, 26

parallel, 14

100

embedding

combinatorial, 22

planar, 24

s-t-planar, 31

entering, 28

equivalence class, 8

equivalence relation, 8

Euler’s formula, 24

exterior, 26

face, 22

exterior, 26

interior, 26

left, 23

right, 23

face potential, 25

flow, 33

weighted, 89, 90

flow decomposition, 35

function

(sub-/super-)modular, 41

monotone increasing, 41

graph, 14

dual, 22

embedded, 22

planar, 24

s-t-planar, 31

simple, 14

Hasse diagram, 64

Hassin’s method, 82

heap, 9

in-tree, 17

incidence, 14

incomparable, 40

interdigitating spanning trees, 24

interior, 26

interval matrix, 13

join, 41

lattice, 41

boolean, 42

consecutive, 41

(sub-/super-)modular, 41

path lattice, 71

leaving, 28

left/right relation, 59

in s-t-plane graphs, 62

leftmost path algorithm, 86

linear program, 11

loop, 14

lowermost path, 60

max-flow/min-cut theorem, 38

maximum

of a lattice, 41

maximum flow problem, 35

path formulation, 36

maximum weighted flow problem, 90

meet, 41

minimum cut problem, 36

NP-hard, 9

oracle, 47

orbit, 8

orientation lemma, 61, 62

out-tree, 17

packing problem, 44

partial order, 40

partition, 8

path, 15

is left/right of, 59

lowermost, 60

residual, 33

simple, 15

uppermost, 60

path augmenting algorithm, 38

101

path decomposition, 35

path lattice, 71

of an s-t-plane graph, 63

permutation, 8

residual, 33

right first search, 85

root, 18

root-directed tree, 17

rotation system, 22

running time, 9

shortest path

algorithm, 56

potential, 82

problem, 45

subgraph, 15

of a planar graph, 28

submodularity, 41

supermodularity, 41

total dual integrality, 12

of lattice polyhedra, 45

total unimodularity, 13

tree, 17

two-phase greedy algorithm

basic implementation, 48

improved implementation, 53

running time, 55

uppermost path, 60

uppermost path algorithm, 76

vertex, 14

exterior, 26

interior, 26

vertex cover, 44

walk, 15

102

Bibliography

[1] P. Ackermann, V. große Rebel, and G. Rosenberger, Algebraische Strukturen und universelle Algebren für Informatiker, Shaker, Aachen, 2004.

[2] D. Bertsimas and J. N. Tsitsiklis, Introduction to linear optimization, Athena

Scientific, Belmont, Massachusetts, 1997.

[3] G. Borradaile and P. Klein, An O(n log n) algorithm for maximum st-flow in a

directed planar graph, in Proceedings of the Seventeenth Annual ACM-SIAM Sympo-

sium on Discrete Algorithms, New York, 2006, ACM, pp. 524–533.

[4] J. M. Boyer and W. J. Myrvold, On the cutting edge: simplified O(n) planarity

by edge addition, Journal of Graph Algorithms and Applications, 8 (2004), pp. 241–

273.

[5] S. A. Cook, The complexity of theorem-proving procedures, in STOC ’71: Proceed-

ings of the third annual ACM symposium on Theory of computing, New York, 1971,

ACM, pp. 151–158.

[6] E. W. Dijkstra, A note on two problems in connexion with graphs, Numerische

Mathematik, 1 (1959), pp. 269–271.

[7] J. Edmonds, Submodular functions, matroids, and certain polyhedra, in Combina-

torial optimization—Eureka, you shrink!, vol. 2570 of Lecture Notes in Computer

Science, Springer, Berlin, 2003, pp. 11–26.

[8] J. Edmonds and R. Giles, A min-max relation for submodular functions on graphs,

in Studies in integer programming (Proc. Workshop, Bonn, 1975), vol. 1 of Annals

of Discrete Mathematics, North-Holland, Amsterdam, 1977, pp. 185–204.

[9] U. Faigle and B. Peis, Two-phase greedy algorithms for some classes of combina-

torial linear programs, in Proceedings of the Nineteenth Annual ACM-SIAM Sympo-

sium on Discrete Algorithms, New York, 2008, ACM, pp. 161–166.

[10] L. R. Ford, Jr. and D. R. Fulkerson, Maximal flow through a network, Canadian

Journal of Mathematics. Journal Canadien de Mathématiques, 8 (1956), pp. 399–404.

103

[11] , Constructing maximal dynamic flows from static flows, Operations Research,

6 (1958), pp. 419–433.

[12] , Flows in networks, Princeton University Press, Princeton, N.J., 1962.

[13] A. Frank, Increasing the rooted-connectivity of a digraph by one, Mathematical

Programming, 84 (1999), pp. 565–576.

[14] D. R. Fulkerson, Packing rooted directed cuts in a weighted directed graph, Math-

ematical Programming, 6 (1974), pp. 1–13.

[15] C. F. Gauss, Theoria motus corporum coelestium in sectionibus conicis solem am-

bientium, F. Perthes & J. H. Besser, Hamburg, 1809.

[16] T. E. Harris and F. S. Ross, Fundamentals of a method for evaluating rail net

capacities, The RAND Corporation, Santa Monica, California, 1955.

[17] R. Hassin, Maximum flow in (s, t) planar networks, Information Processing Letters,

13 (1981), p. 107.

[18] M. R. Henzinger, P. Klein, S. Rao, and S. Subramanian, Faster shortest-path

algorithms for planar graphs, Journal of Computer and System Sciences, 55 (1997),

pp. 3–23.

[19] A. J. Hoffman and D. E. Schwartz, On lattice polyhedra, in Proceedings of the Fifth Hungarian Colloquium on Combinatorics, Vol. I, A. Hajnal and V. T. Sós, eds., vol. 18 of Colloquia Mathematica Societatis János Bolyai, Amsterdam, 1978, North-Holland, pp. 593–598.

[20] R. M. Karp, Reducibility among combinatorial problems, in Complexity of computer

computations, Plenum press, New York, 1972, pp. 85–103.

[21] L. G. Khachiyan, A polynomial algorithm in linear programming, Doklady Akademii

Nauk SSSR, 244 (1979), pp. 1093–1096.

[22] S. Khuller, J. Naor, and P. Klein, The lattice structure of flow in planar graphs,

SIAM Journal on Discrete Mathematics, 6 (1993), pp. 477–490.

[23] P. N. Klein, Multiple-source shortest paths in planar graphs, in Proceedings of the

Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New York, 2005,

ACM, pp. 146–155.

[24] , Planar graph algorithms. Lecture at Brown University, Providence, 2009. Avail-

able from: http://www.cs.brown.edu/courses/cs250/ [cited 2009 August 27].

104

[25] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms,

vol. 21 of Algorithms and Combinatorics, Springer, Berlin, third ed., 2006.

[26] K. Kuratowski, Sur le problème des courbes gauches en topologie, Fundamenta

Mathematicae, 15 (1930), p. 79.

[27] M. Martens and S. T. McCormick, A Polynomial Algorithm for Weighted Ab-

stract Flow, Lecture Notes in Computer Science, 5035 (2008), pp. 97–111.

[28] B. Peis and S. Stiller, Integer linear programming. Lecture at TU Berlin, 2009.

[29] H. Ripphausen-Lipa, D. Wagner, and K. Weihe, The vertex-disjoint Menger

problem in planar graphs, SIAM Journal on Computing, 26 (1997), pp. 331–349.

[30] A. Schrijver, Theory of linear and integer programming, Wiley-Interscience Series

in Discrete Mathematics, John Wiley & Sons Ltd., Chichester, 1986.

[31] , On the history of the transportation and maximum flow problems, Mathematical

Programming, 91 (2002), pp. 437–445.

[32] M. Skutella, An introduction to network flows over time, in Research Trends in

Combinatorial Optimization, W. Cook, L. Lovász, and J. Vygen, eds., Springer,

Berlin, 2008, pp. 451–482.

[33] D. D. Sleator and R. E. Tarjan, A data structure for dynamic trees, Journal of

Computer and System Sciences, 26 (1983), pp. 362–391.

[34] I. Wegener, Datenstrukturen, Algorithmen und Programmierung 2. Lecture at Universität Dortmund, 2005. Available from: http://ls2-www.cs.uni-dortmund.de/

lehre/sommer2005/dap2/skript.pdf [cited 2009 August 27].

[35] K. Weihe, Maximum (s, t)-flows in planar networks in O(|V | log |V |) time, Journal

of Computer and System Sciences, 55 (1997), pp. 454–475.

[36] H. Whitney, Non-separable and planar graphs, Transactions of the American Math-

ematical Society, 34 (1932), pp. 339–362.

[37] G. M. Ziegler, Lectures on polytopes, vol. 152 of Graduate Texts in Mathematics,

Springer, New York, 1995.
