Generalized Combinatorial Auction for Mixed Integer Linear Programming
by
Mark Michael
A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Graduate Department of Mechanical & Industrial Engineering
University of Toronto

© Copyright 2014 by Mark Michael
Abstract
Generalized Combinatorial Auction for Mixed Integer Linear Programming
Mark Michael
Master of Applied Science
Graduate Department of Mechanical & Industrial Engineering
University of Toronto
2014
Mixed integer linear programming is an invaluable tool for solving some of the toughest
problems in operations research. Since the integer nature of these problems often makes
finding a solution intractable, many heuristic algorithms have been developed to provide
reasonable solutions while using fewer resources. Recent research has shown that
combinatorial auctions are adept at solving such problems, specifically in transportation
procurement and scheduling. As such, we have developed a generalized combinatorial
auction framework to solve mixed integer linear programs. We use our algorithm to solve
a benchmark set of problems before implementing it on a radiation therapy problem. We
show that by taking advantage of the problem structure we can formulate combinatorial
auction algorithms that compete favourably with some of the newest work in the field.
Acknowledgements
I would like to thank my supervisors. Professor Chi-Guhn Lee gave me a chance to
complete a master's when others would not; his vast knowledge and creative ideas built
the foundation for this work. Professor Timothy Chan sets a shining example of what to
aspire to in the world of academia; his broad interests and patience gave this work
direction and purpose, and made the journey a truly enjoyable one.
Contents

1 Introduction
2 Exploring MILP Solving Algorithms
   2.1 Linear Programming Review
      2.1.1 Revised Simplex Method
      2.1.2 Delayed Column Generation
   2.2 Mixed Integer Linear Programming
      2.2.1 Exact Algorithms
      2.2.2 Heuristics
   2.3 Combinatorial Auctions
      2.3.1 Bid Generation Problem
      2.3.2 Winner Determination Problem
      2.3.3 Pricing Problem
3 Proposed Algorithm
   3.1 Formulation
      3.1.1 Algorithm Overview
      3.1.2 Winner Determination Problem
      3.1.3 Pricing Problem
      3.1.4 Bid Generation Problem
   3.2 Process Flow
      3.2.1 Initialization
      3.2.2 Solve WDP
      3.2.3 Solve the PP
      3.2.4 Solve the BGPs
      3.2.5 Considerations
      3.2.6 Convergence & Optimality
   3.3 Extensions
      3.3.1 Integer Simplex
      3.3.2 Cutting Planes
         Deciding on Best Variable to Generate Cuts From
         Deciding on Best h
      3.3.3 Computing an Initial Basis
4 Case Study: Benchmark Tests
   4.1 Problem Overview
   4.2 Implementation of GCA Algorithm
      4.2.1 Initial Feasible Solution
   4.3 Results
      4.3.1 Comparison with CPLEX
   4.4 Conclusions
5 Case Study: Radiation Therapy
   5.1 Introduction to Intensity Modulated Radiation Therapy
   5.2 Problem Overview
      5.2.1 Definitions & Notation
      5.2.2 Equivalent MIP
   5.3 Formulation
      5.3.1 Winner Determination Problem
      5.3.2 Pricing Problem / Dual
      5.3.3 Bid Generation Problem
      5.3.4 Algorithm
      5.3.5 Discussion
         Column Generation Interpretation
         Convergence on Choice of Beams
      5.3.6 Proof of Finite Termination
      5.3.7 Algorithmic Comparison with Romeijn et al. (2005) and Dong et al. (2013)
      5.3.8 Results
         Tiny Problem
         Comparison with CPLEX Equivalent MIP and 4π
   5.4 Conclusion
6 Conclusion
Bibliography
Chapter 1
Introduction
Mixed integer linear programs are used to solve a plethora of problems in many different
disciplines. From scheduling to routing, facility planning to portfolio optimization,
mixed integer linear programs have become an invaluable formulation methodology for
solving some of the world's most challenging problems. The main challenge with most
mixed integer linear programs, however, is that the addition of integer variables often
makes the problem intractable. As some important mixed integer linear programming
problems, like some formulations of the travelling salesman problem, have proven to be
NP-hard, it has become ever more important to develop methodologies that can
solve these problems quickly, even at some expense to solution quality.
Inspired by the work of Kwon (2005), we formulate a generalized combinatorial auction
mechanism to solve generic mixed integer linear programs. Kwon (2005) used an iterative
combinatorial auction to optimize a truckload transportation procurement problem. To
map the problem to an auction, the different procurement services were allowed to submit
bids on the combinations of items that they would deliver. The centralized auctioneer
would select which bids to accept and then identify ask prices for the delivery of each
item so that new bids could be submitted in the next round. They further demonstrated
the versatility of combinatorial auctions by allowing each bidder (procurement service)
in the auction to have its own specialized bid generation problem. Only the generated
bids were sent to the centralized auctioneer, which determined the optimal distribution
of items. By offering customizable bid generation problems, and opportunities to
distribute computing resources over a network, their work yielded very promising
solutions in the field. As such, we investigate the potential of generalizing this
methodology to MILPs.
We begin chapter 2 with a review of linear programming and the revised simplex
method in a way that motivates the development of our algorithm. We then move on to
discuss mixed integer linear programming problems, elaborating on some of the exact and
heuristic methods currently used in research, and some of the methodologies utilized in
obtaining initial integer feasible solutions. The chapter concludes with a more in-depth
look at combinatorial auctions.
Having presented the background research, we present our algorithm in chapter 3.
This chapter closely mirrors the mechanism utilized to describe the revised simplex
method in section 2.1.1. Once the algorithm is presented, we discuss some potential
methods to take advantage of the strengths in combinatorial auctions. Specifically, we
provide insight into how one may tailor the algorithm to better suit a given problem
structure.
In chapter 4 we take a step back from the theory and utilize the most generic version
of our algorithm to solve a benchmark set of mixed integer linear programs. Using the
information provided by the library, we attempt to discover the strengths and weaknesses
of the algorithm in its most basic form. While our algorithm may not be the perfect
black box solution to all mixed integer linear programs, it offers a decent starting point
for further research in the field.
Finally, chapter 5 is dedicated to streamlining the generalized combinatorial auction
for a radiation therapy problem. We compare our algorithm with the CPLEX optimization
studio and with the work of Dong et al. (2013) in the same field. We show that our
streamlined mechanism can compete with some of the most recently published work, and we
take this as a testament to the versatility and aptitude of our generalized combinatorial
auction.
Specifically our contributions are:
1. A general mixed integer linear programming algorithm that offers many opportuni-
ties to restyle the algorithm to solve specific problems more efficiently and to take
advantage of parallel processing computing platforms
2. An application of our algorithm to a broad range of problem structures, and an
analysis on the aptitude of our algorithm at solving specific problem types
3. The first usage of combinatorial auctions in the realm of radiation therapy
Chapter 2
Exploring MILP Solving Algorithms
The algorithm that we have created borrows concepts from combinatorial auctions, column
generation, and the simplex method, and applies them to solving general mixed integer
linear programming problems. We begin in Section 2.1 by reviewing the revised simplex
method and delayed column generation in a notation that will be consistent with the
formulation of our algorithm going forward. This section is of particular importance
since the mechanism by which we introduce the well-established revised simplex method
provides the foundation on which our algorithm is premised. Section 2.2 then begins with
a generalization of a mixed integer linear programming problem, and continues on to
discuss current methods for solving mixed integer linear programming problems. We
conclude this chapter with an in-depth look into combinatorial auctions in Section 2.3.
2.1 Linear Programming Review
2.1.1 Revised Simplex Method
The simplex method is premised on the theory that any linear program (LP), if it has an
optimal solution, will have an optimal solution at a corner point of the feasible region.
Therefore, by traversing through the corner points of the feasible region one can determine
an optimal solution of the LP. Suppose we are given an LP, in standard form, with n
decision variables $x \in \mathbb{R}^n$ and $m \le n$ constraints. Let $A$ be an $m \times n$ matrix and $b$ an $m$-element vector; the LP can be formulated as:

$$\begin{aligned}
\underset{x}{\text{minimize}}\quad & c^{T}x \\
\text{subject to}\quad & Ax = b \\
& x \ge 0
\end{aligned} \qquad (2.1)$$
A basic feasible solution (i.e., corner point) identifies $m$ elements of $x$ to vary and fixes the
remaining $n-m$ elements at 0. More formally, let $x_B \in \mathbb{R}^m$ be the basic elements of
$x$ and let $x_N \in \mathbb{R}^{n-m}$ with $x_N = 0$. The matrix $B \in \mathbb{R}^{m \times m}$ is obtained by keeping only
those columns of $A$ whose corresponding element of $x$ is in $x_B$; the remaining columns
are combined to form $N \in \mathbb{R}^{m \times (n-m)}$. We can re-write (2.1) as:

$$\begin{aligned}
\underset{x_B,\, x_N}{\text{minimize}}\quad & c_B^{T}x_B + c_N^{T}x_N \\
\text{subject to}\quad & Bx_B + Nx_N = b \\
& x_B \ge 0,\; x_N = 0
\end{aligned} \qquad (2.2)$$

Without loss of generality, assuming that the original matrix $A$ has full row rank
(otherwise the linearly dependent rows can be removed), the matrix $B$
is invertible and, as such, (2.2) has the unique solution $x_B^* = B^{-1}b$, the
basic feasible solution provided to us. This solution, however, may not be the optimal
solution to (2.1). The simplex algorithm directs us to identify an element of $x_N$ that
could further reduce the objective value if it were not 0.

In order to determine how to further decrease our objective value, we should identify
how the objective would change if we were to perturb the current basic feasible
solution. To do this, we re-write (2.2), enforcing the current solution $x_B = x_B^*$
where $x_B^* = B^{-1}b$ and $x_N = x_N^*$ where $x_N^* = 0$. This may seem counter-intuitive, since this
is then the only feasible solution, but proceeding regardless we obtain:

$$\begin{aligned}
\underset{x_B,\, x_N}{\text{minimize}}\quad & c_B^{T}x_B + c_N^{T}x_N \\
\text{subject to}\quad & Bx_B + Nx_N = b \\
& x_B = x_B^* \;|\; x_B^* = B^{-1}b \\
& x_N = x_N^* \;|\; x_N^* = 0
\end{aligned} \qquad (2.3)$$
We can then formulate the Lagrangian dual of this problem. The Lagrangian is:

$$L(x_B, x_N, \lambda, \mu, \nu) = c_B^{T}x_B + c_N^{T}x_N + (b - Bx_B - Nx_N)^{T}\lambda + (x_B^* - x_B)^{T}\mu + (x_N^* - x_N)^{T}\nu \qquad (2.4)$$
The dual is then:

$$\Lambda = \underset{\lambda,\, \mu,\, \nu}{\text{maximize}} \;\; \inf_{x_B,\, x_N}\; L(x_B, x_N, \lambda, \mu, \nu) \qquad (2.5)$$
Since $L(x_B, x_N, \lambda, \mu, \nu)$ is differentiable with respect to $x_B$ and $x_N$, we can take partial
derivatives and set them equal to 0 in order to obtain the infimum:

$$\begin{aligned}
\frac{\partial L}{\partial x_B} = c_B - B^{T}\lambda - \mu = 0 \;&\Rightarrow\; \mu = c_B - B^{T}\lambda \\
\frac{\partial L}{\partial x_N} = c_N - N^{T}\lambda - \nu = 0 \;&\Rightarrow\; \nu = c_N - N^{T}\lambda
\end{aligned} \qquad (2.6)$$
Now that we have expressions for $\mu$ and $\nu$, the dual can be simplified to:

$$\Lambda(\lambda) = \begin{cases} \infty & \text{if } x \neq x^* \text{ or } Bx_B + Nx_N \neq b \\ b^{T}\lambda + \mu^{T}x_B^* + \nu^{T}x_N^* & \text{otherwise} \end{cases} \qquad (2.7)$$
Strong duality tells us that the objective values of the primal and the dual should be
equal; therefore $c_B^{T}x_B + c_N^{T}x_N = b^{T}\lambda + \mu^{T}x_B^* + \nu^{T}x_N^*$. Since we know the values of $x_B^*$
and $x_N^*$, simple manipulation gives us $\lambda = (B^{T})^{-1}c_B$. Furthermore, by strong duality,
a change in the objective of the dual will equally affect the objective of the primal, so
we can take derivatives with respect to $x_B^*$ and $x_N^*$ to see how changing our constraints
will affect the final solution. Substituting $\lambda = (B^{T})^{-1}c_B$:

$$\begin{aligned}
\frac{\partial \Lambda}{\partial x_B^*} &= \mu = c_B - B^{T}\lambda = c_B - B^{T}(B^{T})^{-1}c_B = 0 \\
\frac{\partial \Lambda}{\partial x_N^*} &= \nu = c_N - N^{T}\lambda = c_N - N^{T}(B^{T})^{-1}c_B
\end{aligned} \qquad (2.8)$$
Therefore, by allowing an element $i$ of $x_N$ with $\left(\frac{\partial \Lambda}{\partial x_N^*}\right)_i < 0$, equivalently $\nu_i < 0$, to
increase from 0, we can further reduce the objective of (2.1). As such, the values obtained
from these derivatives are called the reduced costs. Moving forward, we choose an element
of $x_N$ with negative reduced cost and allow it to enter our basis. However, as this variable
changes from 0 to whatever positive value it should take, there is a good chance that
we will affect the feasibility of our problem. Therefore, the simplex algorithm has
a mechanism to decide which element should leave our basis and how all the other
elements in the basis will be affected. Alternatively, if there is no element with negative
reduced cost, our current basic feasible solution is optimal.

To choose the element that is leaving, we identify how each of the elements in the
current basis is affected by a change in $(x_N)_i$, the entering variable. It can be shown
that as the entering variable increases to $t$, the basic variables move along $x_B - t\,d$,
where $d = B^{-1}N_i \in \mathbb{R}^m$ and $N_i$ is the
$i$th column of $N$, corresponding to the entering variable. Since we want all the values in
our basis to remain non-negative (feasible), we choose the leaving element as follows:

$$j = \arg\min_{j} \left\{ \alpha_j = \frac{(x_B)_j}{d_j} \;\middle|\; d_j > 0 \right\} \qquad (2.9)$$

If $d \le 0$ then the entering variable can increase indefinitely and the LP given in (2.1)
is unbounded. Otherwise, we set $x_B - \alpha_j d \to x_B$, allow $(x_N)_i$ to enter the basis at
value $\alpha_j$, and let $(x_B)_j$ leave the basis. This provides us with a new basic feasible
solution and we start the process over.
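The pricing and ratio-test steps above can be sketched in a few lines. The tiny LP, the starting basis, and the helper functions below are our own illustration, not an implementation from this thesis; a production code would maintain a factorization of $B$ rather than forming explicit inverses.

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists (illustration only)."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

# Tiny LP in standard form: minimize -3*x1 - 5*x2
# subject to  x1 + s1 = 4,  2*x2 + s2 = 12,  all variables >= 0.
A = [[1, 0, 1, 0],
     [0, 2, 0, 1]]
b = [4, 12]
c = [-3, -5, 0, 0]
basis, nonbasic = [2, 3], [0, 1]        # start from the (identity) slack basis

B = [[row[j] for j in basis] for row in A]
Binv = inv2(B)
x_B = matvec(Binv, b)                   # x_B* = B^{-1} b

# Pricing: lambda = (B^T)^{-1} c_B, then reduced costs nu = c_N - N^T lambda.
lam = matvec(inv2(transpose(B)), [c[j] for j in basis])
nu = {j: c[j] - dot([row[j] for row in A], lam) for j in nonbasic}
entering = min(nu, key=nu.get)          # most negative reduced cost enters

# Ratio test: d = B^{-1} N_i; the entering variable can grow to alpha.
d = matvec(Binv, [row[entering] for row in A])
alpha, leave_pos = min((x_B[k] / d[k], k) for k in range(2) if d[k] > 1e-12)
leaving = basis[leave_pos]
```

Here $x_2$ (index 1) prices out at $\nu = -5$ and enters; the ratio test lets it grow to $\alpha = 6$, driving the second slack (index 3) out of the basis.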
2.1.2 Delayed Column Generation
The simplex algorithm is predicated on the idea that we only need to evaluate $m$ of the
$n$ variables at each iteration; the remaining $n - m$ variables are given values of 0. One
mechanism by which we can reduce the resources used in solving larger LPs, with $n \gg m$,
is to partition the problem variables into two groups: one set that is sent to the solver,
and the remainder of the variables, which are fixed at 0. Once we obtain the
solution to the reduced problem, we use the dual variables to generate one or more
variables that have the potential to reduce the objective value. As such, from (2.1),
let us break $x$ into $x_V \in \mathbb{R}^p$, where $m \le p \le n$, and $x_H \in \mathbb{R}^{n-p}$, for visible and hidden
respectively. We can then define a smaller linear programming problem as follows:

$$\begin{aligned}
\underset{x_V}{\text{minimize}}\quad & c_V^{T}x_V \\
\text{subject to}\quad & Vx_V = b \\
& x_V \ge 0
\end{aligned} \qquad (2.10)$$
Just as with the basic and non-basic elements of $x$ in the simplex method, assuming we
can obtain a basic feasible solution from the $p \le n$ elements of $x_V$, we can solve (2.10)
under the premise that $x_H = 0$. Having obtained a solution, we determine the basis
used (possible since $m \le p$), compute the dual variables $\lambda = (B^{T})^{-1}c_B$, and solve the
following subproblem to determine whether any hidden variables have the potential to
reduce the objective found in (2.10):

$$\underset{i}{\text{minimize}} \;\left(c_H - H^{T}\lambda\right)_i \qquad (2.11)$$

where the matrix $V \in \mathbb{R}^{m \times p}$ is obtained by retaining only those columns of $A$ that
correspond to elements of $x_V$, and the matrix $H \in \mathbb{R}^{m \times (n-p)}$ collects the remaining columns.
If (2.11) attains an objective value less than 0, that column can be added to $V$
and the process starts over. Depending on the application, it may even be beneficial to
add multiple columns at each iteration of the algorithm; the only downside is
that the problem grows at a faster rate and more computation may be required at
each iteration. Regardless of the number of columns added, if a column is unable
to reduce the objective, its associated element of $x_V$ will be 0, and another column
may still be added at the next iteration. The process terminates when (2.10) is
determined to be unbounded, or alternatively when (2.10) attains a solution and (2.11)
has a non-negative objective value, in which case the current $x_V = x_V^*$, $x_H = 0$ is the
optimal solution to (2.1).
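The pricing scan over the hidden columns can be sketched as follows. The function, the column data, and the costs are illustrative assumptions (the duals $\lambda$ are taken as already extracted from the restricted master (2.10)):

```python
def price_hidden_columns(lam, hidden_cols, hidden_costs):
    """Pricing subproblem (2.11): return the index of the hidden column with
    the most negative reduced cost c_j - H_j^T lambda, or None if none is
    negative (within a small tolerance)."""
    best_j, best_rc = None, -1e-9
    for j, (col, cost) in enumerate(zip(hidden_cols, hidden_costs)):
        rc = cost - sum(a * l for a, l in zip(col, lam))
        if rc < best_rc:
            best_j, best_rc = j, rc
    return best_j, best_rc

# Made-up duals and hidden columns, purely for illustration.
lam = [1.0, 2.0]
cols = [[1, 1], [0, 3], [2, 0]]
costs = [4.0, 5.0, 2.5]
j, rc = price_hidden_columns(lam, cols, costs)
```

Column 1 has reduced cost $5 - (0 \cdot 1 + 3 \cdot 2) = -1$ and would be moved from hidden to visible; when the function returns `None`, the current restricted solution is optimal for the full LP.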
2.2 Mixed Integer Linear Programming
There are many scenarios where one may choose to enforce that some subset of the
problem variables be integer. For those types of problems we have mixed integer linear
programs (MILPs). These problems are generally much more computationally expensive,
and as a result there are many specialized algorithms that solve specific instances of these
problems and take advantage of specific problem structures. In this work, however,
we choose to look at general MILPs. Choosing $x \in \mathbb{R}^{n_x}$ to be the continuous variables
and $y \in \mathbb{Z}^{n_y}$ to be the integer decision variables, we can write any MILP as:

$$\begin{aligned}
\underset{x,\, y}{\text{minimize}}\quad & c_x^{T}x + c_y^{T}y \\
\text{subject to}\quad & A_x x + A_y y = b \\
& x \ge 0 \\
& y \ge 0
\end{aligned} \qquad (2.12)$$
2.2.1 Exact Algorithms
Amongst the most intuitive algorithms for solving MILPs, branch and bound uses a
tree-like search to identify candidate regions of the solution space. The
algorithm commences by relaxing the integer requirements on the MILP and solving the
resulting LP. If the optimal solution $(x^*, y^*)$ has $y^*$ integer, then the current solution is
the optimal MILP solution. In all other cases, the current objective $z = c_x^{T}x^* + c_y^{T}y^*$ is
a lower bound on the optimal MILP objective. We create child nodes by choosing some
index $\{i \,|\, y_i^* \notin \mathbb{Z}\}$ and branching to two further-constrained LPs, the first imposing the
lower bound $y_i \ge \lceil y_i^* \rceil$, the second imposing $y_i \le \lfloor y_i^* \rfloor$. This creates two new
subproblems, where the solution to each LP is the lower bound for its respective region
of the solution space. Given its simplicity, this algorithm is often used as a base structure
for other MILP solving algorithms, but on its own it is very slow: it can
take significant time to find any integer solution and, even when a solution is found,
searching the entire tree drains both time and memory resources (Lawler and
Wood, 1966).
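One way to see the branching and bounding at work, as a sketch rather than a general solver, is on a 0-1 knapsack, whose LP relaxation is solvable greedily. Note this instance is a maximization, so the relaxation gives upper bounds and we branch on the single fractional item; the data are made up for illustration.

```python
def knapsack_bnb(values, weights, capacity):
    """Branch and bound for the 0-1 knapsack; 'fixed' maps item -> 0/1."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    best = [0]                                   # incumbent objective value

    def relax(fixed):
        # Greedy LP relaxation: returns (upper bound, fractional item or None).
        cap = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
        if cap < 0:
            return -1, None                      # node is infeasible
        bound = sum(values[i] for i, v in fixed.items() if v == 1)
        for i in order:
            if i in fixed:
                continue
            if weights[i] <= cap:
                cap -= weights[i]
                bound += values[i]
            else:                                # item i only fits fractionally
                return bound + values[i] * cap / weights[i], i
        return bound, None                       # relaxation is integral

    def branch(fixed):
        bound, frac = relax(fixed)
        if bound <= best[0]:
            return                               # prune: cannot beat incumbent
        if frac is None:
            best[0] = bound                      # integral => new incumbent
            return
        branch({**fixed, frac: 1})               # branch on the fractional item
        branch({**fixed, frac: 0})
    branch({})
    return best[0]
```

For instance, `knapsack_bnb([10, 13, 7, 8], [5, 7, 4, 5], 10)` returns the optimal value 18 (items 0 and 3), pruning subtrees whose relaxation bound cannot beat the incumbent.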
Cutting plane algorithms are similar in the sense that they iteratively reduce the
feasible region by adding constraints that remove the current non-integer solution from
the feasible region before re-optimizing. In this class of algorithms, however, more work
is done on deciding the exact constraint to add to the problem. First introduced by Ralph E.
Gomory (Gomory, 1958), these algorithms are predicated on the fact that an LP optimal
solution will lie on a corner of the feasible region; cutting planes simply determine
an intelligent hyperplane that separates the current LP optimal solution from all the integer-feasible
solutions. In general, these cuts perform better than the floor and ceiling cuts
used in the branch and bound scheme (Kelley, 1960). Gomory cuts, in particular,
have been credited with a big portion of the speed advancements made in commercial
MILP solvers in the late 1990s and early 2000s. CPLEX began incorporating these cuts
in 1999, and they have been shown to cut, pun intended, average computation time by a factor
of 2.5 (Cornuejols, 2012). Figure 2.1 illustrates how the optimal LP solution changes
with the addition of each cutting plane, providing some graphical insight into how this
optimization scheme operates.
[Figure 2.1: Cutting planes for maximize $6y_1 + y_2$ subject to $15y_1 + y_2 \le 30$, $3y_1 + 4y_2 \le 18$. Four panels show the original LP problem and the optimal solution after each of three successive cuts.]

The branch-and-cut algorithm applies a cutting plane technique inside the
branch and bound structure. At each node of the branch and bound tree a cutting plane
is added to the problem. One of the key advantages is that more of the corner points of
the feasible region will be at integer solutions, making it more likely that future LP
solutions at the branch and bound nodes will be integer. Furthermore, cutting
planes generally identify cuts that are closest to the LP optimal solution from which they
were generated; by combining the cutting plane technique with the branch
and bound scheme, planes can be generated from various LP optimal solutions, one for
each node. The merger of cutting planes with branch and bound generally identifies
integer solutions faster, which allows for quicker pruning of the branch and bound search
tree (Padberg, 2001).
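Concretely, given a simplex tableau row $x_i + \sum_j \bar a_j x_j = \bar b_i$ with fractional right-hand side $\bar b_i$ (all variables integer and non-negative), the Gomory fractional cut keeps only the fractional parts: $\sum_j \operatorname{frac}(\bar a_j)\, x_j \ge \operatorname{frac}(\bar b_i)$. A minimal sketch, with illustrative coefficients:

```python
import math

def gomory_cut(row_coeffs, rhs):
    """Build a Gomory fractional cut from one tableau row.

    Row:  x_i + sum_j a_j * x_j = rhs   (all variables integer, >= 0)
    Cut:  sum_j frac(a_j) * x_j >= frac(rhs)
    """
    frac = lambda v: v - math.floor(v)     # fractional part, also for negatives
    return [frac(a) for a in row_coeffs], frac(rhs)

# Illustrative tableau row: x_i + 3.5*x_1 - 1.5*x_2 = 1.75
cut_coeffs, cut_rhs = gomory_cut([3.5, -1.5], 1.75)
```

This yields the cut $0.5\,x_1 + 0.5\,x_2 \ge 0.75$; note that $\operatorname{frac}(-1.5) = 0.5$ because the fractional part is taken relative to the floor, which is what makes the cut valid for all integer-feasible points.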
Instead of adding constraints to the problem, we can also begin with a small problem
and use a column generation inspired approach to add columns as we progress
through the branch and bound algorithm; this is known as branch and price. The
branch and price algorithm starts with a subset of the columns visible, and at each
node of the branch and bound tree new columns can be identified and added to the problem
by looking at the dual information, as described in Subsection 2.1.2. In this way, the
problem stays as small as possible and solving the LP at each branch and bound node is
faster (Barnhart et al., 1998).
Further combinations of the above algorithms exist. The branch-price-and-cut algorithm
identifies combinations of columns and cutting planes to add to the problem at
each node. While more processing is required at each node, the theory is that the problem
size at each node remains small and integer solutions are identified with fewer
iterations than in the simpler branch-and-price and branch-and-cut algorithms. Once an incumbent
integer solution has been found, the branch and bound search tree can be filtered using
the lower bounds of the subdivided feasible regions, which greatly reduces problem size
(Feillet et al., 2010).
Benders decomposition and Lagrangian relaxation use duality theory to begin
with dual feasible (primal infeasible) solutions and iteratively approach primal feasibility.
Benders decomposition separates the MILP into two components: the
first fixes the integer variables and solves a subproblem consisting of purely continuous
variables, while the second fixes the continuous variables and solves a pure
integer problem. In this way problem size is reduced and each subproblem can be solved
relatively efficiently. More rigorously, if we were to fix the integer variables to
$y = y^*$, we can re-formulate (2.12) as:

$$\begin{aligned}
\underset{x}{\text{minimize}}\quad & c_x^{T}x \\
\text{subject to}\quad & A_x x = b - A_y y^* \\
& x \ge 0
\end{aligned} \qquad (2.13)$$
Since this is a pure LP we can formulate its dual, which is given as:

$$\begin{aligned}
\underset{\lambda}{\text{maximize}}\quad & (b - A_y y^*)^{T}\lambda \\
\text{subject to}\quad & A_x^{T}\lambda \le c_x
\end{aligned} \qquad (2.14)$$
The weak duality theorem states that $c_x^{T}x \ge (b - A_y y^*)^{T}\lambda$ for all feasible $x$ and $\lambda$. Let
$z$ be the objective of (2.12) (i.e., $z = c_x^{T}x + c_y^{T}y$). Then weak duality translates
to $z \ge c_y^{T}y + \max_{A_x^{T}\lambda \le c_x} (b - A_y y)^{T}\lambda$ for any $y$. This means that we can re-formulate
(2.12) as:

$$\begin{aligned}
\underset{z,\, y}{\text{minimize}}\quad & z \\
\text{subject to}\quad & z \ge c_y^{T}y + (b - A_y y)^{T}\lambda \quad \forall \lambda \in \{\lambda \,|\, A_x^{T}\lambda \le c_x\} \\
& y \ge 0
\end{aligned} \qquad (2.15)$$
The algorithm commences by finding any feasible solution to (2.14), $\lambda^{(0)}$, and identifying
it as the first element of the set $S = \{\lambda^{(k)}\}$. Next, the algorithm requires that we solve
(2.15), adding a constraint for each element of $S$. The optimal solution $y^*$ is relayed to
(2.14), which identifies another $\lambda^{(k)}$ to add to $S$, and (2.15) is solved again. The algorithm
terminates once the weak duality criterion is satisfied (i.e., $z^* \ge c_y^{T}y^* + (b - A_y y^*)^{T}\lambda^{(k)}$).
In this way, Benders decomposition uses the current integer solution to identify the
best constraint to add to (2.15) (Codato and Fischetti, 2006).
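The alternation between (2.14) and (2.15) can be traced on a toy instance small enough to solve each piece by enumeration. The instance below is our own illustration (written with an inequality constraint for brevity): minimize $x + 2y$ subject to $x + y \ge 3$, $x \ge 0$, $y \in \{0,\dots,3\}$; for fixed $y$ the subproblem dual is $\max\,(3-y)\lambda$ over $0 \le \lambda \le 1$.

```python
# Toy MILP:  min x + 2*y   s.t.  x >= 3 - y,  x >= 0,  y in {0,1,2,3}.

def solve_dual(y):
    # Dual of the continuous subproblem: max (3 - y)*lam, 0 <= lam <= 1.
    lam = 1.0 if 3 - y > 0 else 0.0
    return lam, (3 - y) * lam

def solve_master(cuts):
    # Master (2.15) by enumeration: min z  s.t.  z >= 2*y + (3 - y)*lam_k.
    best = None
    for y in range(4):
        z = max(2 * y + (3 - y) * lam for lam in cuts)
        if best is None or z < best[0]:
            best = (z, y)
    return best

cuts = [0.0]                      # lambda^(0): any dual-feasible starting point
while True:
    z_star, y_star = solve_master(cuts)
    lam, dual_obj = solve_dual(y_star)
    if z_star >= 2 * y_star + dual_obj - 1e-9:
        break                     # weak-duality test met: optimal
    cuts.append(lam)              # otherwise add the new Benders cut
```

The first master, restricted to $\lambda^{(0)} = 0$, is too optimistic ($z = 0$); the subproblem returns the cut $\lambda = 1$, after which the master finds the true optimum $z = 3$ at $y = 0$ (so $x = 3$) and the weak-duality test terminates the loop.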
The Lagrangian relaxation algorithm uses a scheme similar to that of Benders
decomposition, with the caveat that instead of adding constraints to the problem, the dual
variables identify a gradient along which we can move the integer solution while
maintaining dual feasibility. This algorithm is often used as an efficient mechanism for
determining lower bounds on integer linear programming problems and, as such, it is
often used in conjunction with branch and bound based algorithms to help quickly
prune the search tree (Fisher, 2004).
2.2.2 Heuristics
Given the NP-hard nature of general mixed integer programming, heuristics are often
favoured when solving such problems. When guaranteeing an optimal solution is not the
main goal, heuristics are an invaluable tool. Many heuristics are application-specific, but
the discussion here will be limited to some of the heuristics more generally applicable
to MILPs.
Local search techniques are some of the most common heuristics, primarily for their
simplicity. The only requirement with local search techniques is that one must define
what constitutes a neighbouring solution. Starting at an initial solution, the heuristics
then traverse through these neighbouring solutions to intelligently
determine the best direction in which to step. In the case of MILPs, one simple definition of
neighbouring solutions is those solutions that have $\|y^{(i)} - y^{(j)}\|_1 = 1$ (i.e., they differ in
one element by 1). The challenge with these mechanisms is that it is often easy to get
stuck at a locally optimal solution (a solution where all direct neighbouring solutions have
higher objective). Special care must be taken to ensure that enough of the solution space
is searched before terminating. Simulated annealing uses a temperature parameter
that sets the likelihood of visiting a node with a worse objective; over time the
temperature decreases to 0. At the beginning of the algorithm, traversal is done almost
completely randomly, and as the cooling process begins the algorithm slowly converges
to a local optimum (Tsallis and Stariolo, 1996). Similarly, tabu search looks within the
neighbourhood of the current solution and moves to the best candidate neighbouring
solution. However, upon visiting a solution, it is made tabu (unable to be re-visited)
for some measure of time, thereby ensuring that other nearby solutions are
also visited (All and Terms, 2014). The k-OPT algorithm searches through
$k$ different changes of the current solution to identify the best solution within a larger
vicinity of the current solution before moving in the best direction. This mechanism
broadens the scope of the search so that the algorithm can jump from one local optimum
to the next, bypassing the few poor solutions in between (Potvin et al., 1989).
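A minimal sketch of steepest-descent local search over the $\|\Delta y\|_1 = 1$ neighbourhood shows both the mechanism and the trap; the objective table below is a made-up example with a local optimum distinct from the global one:

```python
def local_search(f, y0, bounds):
    """Steepest descent over the ||delta y||_1 = 1 neighbourhood.

    f: objective on integer vectors; bounds[i] = (lo, hi) box for y[i].
    Returns the first point whose neighbours are all no better (local opt).
    """
    y = list(y0)
    while True:
        nbrs = []
        for i in range(len(y)):
            for step in (-1, 1):
                cand = list(y)
                cand[i] += step
                lo, hi = bounds[i]
                if lo <= cand[i] <= hi:
                    nbrs.append(cand)
        best = min(nbrs, key=f)
        if f(best) >= f(y):
            return y, f(y)          # no neighbour improves: local optimum
        y = best

# Illustrative 1-D objective given as a lookup table over y in {0,...,4}.
table = [5, 3, 4, 2, 6]
y_loc, v_loc = local_search(lambda y: table[y[0]], [0], [(0, 4)])
```

Starting from $y = 0$, the search stops at the local optimum $y = 1$ (value 3) even though $y = 3$ (value 2) is better, which is exactly the behaviour the annealing, tabu, and k-OPT escapes above are designed to mitigate.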
Another class of heuristics is evolutionary algorithms, like genetic algorithms and
scatter search. These algorithms treat candidate solutions as a population; the members
of the population then go through mating, mutation and recombination phases with
the hope of generating fitter candidate solutions in the next round. The main difference
between scatter search and genetic algorithms is that the latter is a random process where
members are randomly mated, mutated, etc. Scatter search, however, has more
deterministic rules about how to combine candidate solutions and how to improve them at
each mutation phase (Deb et al., 2002).
Swarm algorithms combine concepts from local search and evolutionary algorithms.
Neighbourhoods are defined just as in the local search schemes, but a population element
is added to determine the search path. For example, ant colony optimization simulates
the movement of ants through a parameter space representing all the solutions. The ants
lay pheromones that make it more likely for future ants to traverse the same path; this
mechanism is best suited to graph traversal strategies. Particle swarm optimization
randomly distributes particles in the solution space with some initial velocity; at each
time-step the particles move and their objective values are re-computed, and all the
particles are then accelerated towards the particles with the best objective values
(Dorigo et al., 1997).
Column generation for mixed integer programming is also a heuristic, as it does not
guarantee optimality, or even feasibility, in the general case. Generally, the MILP is
solved in its integer-relaxed form as an LP using column generation. Having solved the
LP-relaxed problem, one can use the columns that were generated for the LP in order to
solve the original mixed integer programming problem. By using only those variables
that were generated for the LP, the algorithm effectively reduces the number of
decision variables in the MILP. This mechanism is often very useful in problems similar
to cutting stock problems, where there can be a near-infinite number of variables. Research
has been done on identifying which instances of problems result in exact ILP solutions
(Vanderbeck and Wolsey, 1996) and on using advanced column generation techniques to
increase the likelihood of obtaining integer corner points (Bredstrom et al., 2014).
2.3 Combinatorial Auctions
Unlike a standard auction where bidders bid on a single object at a time, a combinatorial
auction (CA) is a type of auction where bidders bid on packages or combinations of items.
The auctioneer determines which bidders get which packages and at what prices. The
main advantage of CAs is that they allow bidders to assign a value to a collection of items
that may differ from the sum of the values of the individual items. As a simple example,
the value of a pair of shoes is usually higher than the value of a right shoe alone plus
the value of a left shoe alone (Cramton et al., 2006). Combinatorial auctions combine
concepts from game theory, economics, computer science and operations research and
have successfully been applied to problems in resource allocation (Abrache et al., 2007,
Parkes and Ungar, 2000), transportation procurement (Kwon et al., Kwon, 2005) and
price valuation (Dunford and Hoffman, 2007, Fotakis et al., 2013, MacKie-Mason and
Varian, 1994).
A combinatorial auction has three essential components that can be used iteratively
to determine the optimal allocation of items to bidders. First, there is the bid generation
problem (BGP); in this problem, each bidder determines what package(s) of items to bid
on and the corresponding bid price(s). The second problem is the winner determination
problem (WDP); given the bid prices, the auctioneer determines the distribution of items
to the bidders in the current round. The last problem is the pricing problem (PP); once
the winners have been chosen, the ask prices are updated. These prices are relayed to
the bidders and, in the case of iterative auctions, the process repeats, allowing bidders
to challenge the winners from the previous round (Kwon et al., Parkes and Ungar, 2000,
Parkes, 2001, Porter et al., 2003). This three-part decomposition method generally pro-
duces good quality solutions but, in general, there is no guarantee of global optimality
(de Vries and Vohra, 2003, Xia et al., 2004).
Next, we describe in detail each component of a combinatorial auction.
2.3.1 Bid Generation Problem
The solution of the BGP represents the bidder’s valuation for a package of items. The
flexibility of the BGP is the strength of the CA approach. Each bidder has the capability
of implementing their own unique pricing scheme, submitting bids on multiple packages,
or even utilizing multiple pricing schemes to generate multiple bids. Furthermore, the
ability to decompose a complex bidding process into a relatively small problem for each
bidder is a computational benefit of CAs. The BGP utilizes the ask prices generated
by the auctioneer in the pricing problem and generates bids that are processed by the
winner determination problem in the next round.
2.3.2 Winner Determination Problem
The winner determination problem (WDP) represents the global objective that is required
to be solved. In this problem, the auctioneer is provided with a set of bids on various
packages of items from the BGP. The auctioneer is then required to solve a maximum set
packing problem to decide which items each bidder wins. This is an NP-hard problem.
As in the example illustrated by Kwon et al., assume that an auctioneer would like to
sell a set of items and obtains bids on subsets of these items. Let the price for
collection i be denoted p_i, and let φ be the set of all collections of items. The winner
determination problem is then:

maximize_x    ∑_{i∈φ} p_i x_i                                    (2.16a)
subject to    ∑_{i∈φ} a_{ij} x_i ≤ 1    ∀ j ∈ P                  (2.16b)
              x_i ∈ {0, 1}              ∀ i ∈ φ                  (2.16c)

where a_{ij} is a boolean value indicating that item j is in package i, and P is the set of
all items. The first constraint ensures that we sell at most one of each item; the second
ensures that we sell the entire package i or none of it.
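On small instances, the set packing problem (2.16) can be solved by brute force over subsets of bids. A minimal sketch (the bids and prices below are invented for illustration):

```python
from itertools import combinations

# Invented bids: (package of items, bid price p_i).
bids = [({"a", "b"}, 9), ({"a"}, 5), ({"b"}, 3), ({"b", "c"}, 7), ({"c"}, 4)]

def solve_wdp(bids):
    """Maximum-weight set packing: choose item-disjoint bids with the
    highest total price (each item is sold at most once, as in 2.16b)."""
    best_value, best_set = 0, ()
    for r in range(1, len(bids) + 1):
        for chosen in combinations(range(len(bids)), r):
            packages = [bids[i][0] for i in chosen]
            # Packages are disjoint iff no item appears twice.
            if sum(len(p) for p in packages) == len(set().union(*packages)):
                value = sum(bids[i][1] for i in chosen)
                if value > best_value:
                    best_value, best_set = value, chosen
    return best_value, best_set

value, winners = solve_wdp(bids)
print(value, winners)
```

The enumeration is exponential in the number of bids, which is exactly why the WDP is NP-hard; practical auctioneers use integer programming solvers instead.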
2.3.3 Pricing Problem
After the optimal allocation of items has been decided, the task falls to the auctioneer
to update the ask prices for each item. It is this step that varies the most across the
literature. In the Generalized Vickrey Auction, the pricing problem requires each winning
bidder to pay the opportunity cost that their presence imposed on the other bidders,
given as the value of the bids that would have won absent the current bidder minus the
value of the other bidders' winning bids (MacKie-Mason and Varian, 1994, Parkes and Ungar,
2000, Parkes, 2001, Pekec and Rothkopf, 2003). Another pricing mechanism involves
finding the pseudo-dual of the WDP (de Vries and Vohra, 2003, Dunford and Hoffman,
2007, Kwon et al., Kwon, 2005); pseudo because the winner determination problem is an
integer program rather than a simple linear program, so some added complexity is required
to work around this issue. If the dual actually existed, as it does with linear programs,
then the shadow price of constraint (2.16b) would represent the added utility (amount
of additional money) the auctioneer would receive if he could sell two of item j as opposed
to the single one. This is essentially the value of one more of item j. The pseudo-dual of
the above WDP will be referred to as the pricing problem (PP) and is given as:
minimize_{π,q}    ∑_{j∈P} π_j + ∑_{i∈φ} q_i                              (2.17a)
subject to        ∑_{j∈P} a_{ij} π_j ≥ p_i          ∀ i : x_i = 0        (2.17b)
                  ∑_{j∈P} a_{ij} π_j + q_i = p_i    ∀ i : x_i = 1        (2.17c)
In the following iteration the optimal π_j is the ask price for item j; it is the shadow
price associated with each of the (2.16b) constraints. The q_i variables arise as the dual
variables for the binary constraints on x_i. What is special here is that if the optimal x_i
was not 1, then we need to ensure that the corresponding q_i is 0: there is no added value
in increasing the maximum value from 1 to 2, since the optimal solution did not even
require the 1. Put more formally, since the binary constraint for the variable x_i was not
binding, the corresponding dual variable, q_i, must equal 0.
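The Generalized Vickrey payment rule described above (each winner pays the opportunity cost its presence imposes on the others) can be computed directly on a toy instance; the bids are invented for illustration:

```python
from itertools import combinations

# Invented bids: (package of items, bid price).
bids = [({"a", "b"}, 9), ({"a"}, 5), ({"b"}, 3), ({"c"}, 4)]

def solve_wdp(bid_list):
    """Brute-force winner determination: best item-disjoint selection."""
    best, best_set = 0, ()
    for r in range(1, len(bid_list) + 1):
        for chosen in combinations(range(len(bid_list)), r):
            items = [it for i in chosen for it in bid_list[i][0]]
            if len(items) == len(set(items)):
                v = sum(bid_list[i][1] for i in chosen)
                if v > best:
                    best, best_set = v, chosen
    return best, best_set

total, winners = solve_wdp(bids)
payments = {}
for w in winners:
    # Opportunity cost: the best value attainable without bidder w,
    # minus what the other winners receive in the chosen allocation.
    others_value, _ = solve_wdp([b for i, b in enumerate(bids) if i != w])
    payments[w] = others_value - (total - bids[w][1])
print(payments)  # bidder index -> Vickrey payment
```

Note that a winner facing no competition for its items pays nothing, which is exactly the incentive property the Vickrey mechanism is designed to produce.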
Chapter 3
Proposed Algorithm
The contribution of Combinatorial Auctions to the optimization realm is the decision to
set an opportunity cost for each item in the auction. Given the posted costs, the bidders
can identify the best set of bids to present to the auctioneer. Similarly, in this paper
we present a simple technique to determine said costs for any MILP. The result is an
iterative algorithm that yields a feasible integer solution at each iteration. In this chapter
we begin by formulating the three subproblems for the generalized combinatorial auction
in Section 3.1. We then present the inner workings of the algorithm in detail in Section
3.2. Before we assess the algorithm on specific problem types in Chapters 4 and 5, we
discuss the possible extensions of this algorithm in Section 3.3.
3.1 Formulation
3.1.1 Algorithm Overview
The CA algorithm is designed to mimic an auctioneer who is attempting to solve the
overall optimization problem. Given specific bids, or columns (the two terms will be used
synonymously), the auctioneer determines the optimal allocation of the items and the
shadow price of each constraint (the marginal cost of each resource). This information
is then relayed back to the bidders and they identify the next set of bids to submit to
the auctioneer. The algorithm is flexible in the sense that at each iteration the number
of total bids visible to the auctioneer could be increased by an arbitrary amount. There
could be multiple bidders, or a single bidder, and each bidder is allowed to submit more
than one bid at each iteration of the auction. The algorithm stops when the auctioneer
decides that enough iterations have occurred or if the bidders cannot submit any more
promising bids. Figure 3.1 is a flow chart of the algorithm.
Figure 3.1: Flowchart of the proposed algorithm
3.1.2 Winner Determination Problem
As previously stated, the auctioneer is attempting to determine the global optimal solu-
tion to the generalized MILP. However, in the CA mechanism the auctioneer should only
have to deal with the current set of proposed bids from the varying bidders, those visible
variables as in (2.10). Similarly, in this generalized MILP the auctioneer deals with
only a subset of the total possible bids (i.e., x_V ∈ R^{p_x}, p_x ≤ n_x, and y_V ∈ Z^{p_y}, p_y ≤ n_y). At
each iteration of the auction, the WDP solves the restricted problem over those
bids previously deemed worthy of being inserted into the current set of visible elements,
as shown in (3.1).
minimize_{x_V, y_V}    c_{x_V}^T x_V + c_{y_V}^T y_V
subject to             V_x x_V + V_y y_V = b                     (3.1)
                       x_V ≥ 0
3.1.3 Pricing Problem
In order to best generate the opportunity cost for each resource (constraint), we borrow
concepts from the dual problem of an LP. Suppose we are given the solution to (3.1),
x∗V , y∗V . In Subsection 2.1.1, our review of the simplex method, we reformulated an
augmented LP (2.3) where we added constraints to fix the decision variables, thereby
allowing us to take a derivative of the Lagrangian in order to determine the reduced cost
of each variable. In this formulation, we take a very similar approach. Since we already
know how to compute reduced costs for the continuous variables, we only add constraints
fixing the integer variables (i.e., y_V = y_V^* and all hidden variables are 0). Since y_V^* ∈ Z^{p_y},
any feasible LP solution to this augmented WDP will have y_V integer (specifically y_V^*).
The benefit here is that we can relax the integer constraint and formulate the Lagrangian
dual. The complete MILP can be re-written as:
minimize_{x_V, y_V, x_H, y_H}    c_{x_V}^T x_V + c_{y_V}^T y_V + c_{x_H}^T x_H + c_{y_H}^T y_H
subject to    V_x x_V + V_y y_V + H_x x_H + H_y y_H = b
              x_V ≥ 0
              x_H = x_H^* = 0                                    (3.2)
              y_V = y_V^*
              y_H = y_H^* = 0
Therefore the Lagrangian will be:
L(x_V, x_H, y_V, y_H, λ, μ_x, μ_y, ν_x, ν_y) = c_{x_V}^T x_V + c_{x_H}^T x_H + c_{y_V}^T y_V + c_{y_H}^T y_H
    + (b − V_x x_V − H_x x_H − V_y y_V − H_y y_H)^T λ
    + (0 − x_V)^T μ_x + (x_H^* − x_H)^T ν_x                       (3.3)
    + (y_V^* − y_V)^T μ_y + (y_H^* − y_H)^T ν_y
Since L(· · · ) is differentiable with respect to all its independent variables, we can take
partial derivatives and set them equal to 0, in order to obtain the infimum.
∂L/∂x_V = c_{x_V} − V_x^T λ − μ_x = 0    ⇒    μ_x = c_{x_V} − V_x^T λ
∂L/∂x_H = c_{x_H} − H_x^T λ − ν_x = 0    ⇒    ν_x = c_{x_H} − H_x^T λ
∂L/∂y_V = c_{y_V} − V_y^T λ − μ_y = 0    ⇒    μ_y = c_{y_V} − V_y^T λ        (3.4)
∂L/∂y_H = c_{y_H} − H_y^T λ − ν_y = 0    ⇒    ν_y = c_{y_H} − H_y^T λ
Now that we have expressions for all the Lagrangian multipliers, with the exception
of λ, we can re-write and simplify the Lagrangian to the following:
inf L(···) = b^T λ + μ_y^T y_V^* + ν_x^T x_H^* + ν_y^T y_H^*                 (3.5)
Giving us a dual of:

Λ(λ) = { b^T λ + μ_y^T y_V^* + ν_x^T x_H^* + ν_y^T y_H^*    if x_H = 0, x_V ≥ 0 and y = y^*
       { ∞                                                   otherwise                        (3.6)
The dual problem attempts to maximize the Lagrangian constrained to μ_x ≥ 0, since
μ_x is the dual variable for a set of inequality constraints. We can rearrange (3.6) by
removing the 0-valued x_H^* and y_H^* variables and adding the required dual constraint:
μ_x ≥ 0 can be expressed as V_x^T λ ≤ c_{x_V} and, despite its redundancy, we can also add
V_y^T λ + μ_y ≤ c_{y_V}, since we know that μ_y = c_{y_V} − V_y^T λ from (3.4). This will yield a final
dual problem of:
maximize_{λ, μ_y}    b^T λ + (y_V^*)^T μ_y
subject to           V_x^T λ ≤ c_{x_V}                           (3.7)
                     V_y^T λ + μ_y ≤ c_{y_V}
In summary, given primal variables x and dual variables λ with c^T x = b^T λ, λ ∈ R^m
represents the opportunity cost of each of the m constraints. The issue, however, is that
this primal-dual relationship is derived only for convex optimization problems, and the
introduction of integer variables violates convexity. However, given the current solution
to the WDP, we have a feasible y, and by forcing y_V = y_V^*, the current optimal y_V, we
circumvent the integer nature of y and reformulate the problem as an LP with exactly
the same objective and decision variable values. This is almost identical to how we
derived the equation for the reduced cost in the regular simplex method in (2.8). Just as
with (2.8), the dual variables λ and μ_y give us an indication of how the WDP objective
will change with a change in any of the variables.
3.1.4 Bid Generation Problem
The bid generation problem attempts to decide the best variable(s), x_i, i ∈ {1, 2, ..., n_x − p_x}, and/or y_i, i ∈ {1, 2, ..., n_y − p_y}, to insert into our current set of visible variables.
There are two variants of the problem, the first for determining a continuous variable
to enter and the second for allowing the integer variables to enter the set of variables
visible to the auctioneer. As with the revised simplex and delayed column generation, we
require that the reduced cost of the entering variable be negative, otherwise the current
solution is optimal. To add a continuous variable we generate a bid using:
minimize_{i ∈ {1, ..., n_x − p_x}}    c̄_{x_i} = c_{x_i} − (H_x^T)_i λ        (3.8)
For the integer variables we utilize:
minimize_{i ∈ {1, ..., n_y − p_y}}    c̄_{y_i} = c_{y_i} − (H_y^T)_i λ        (3.9)
The BGP can be further decomposed by partitioning the sets H_x and H_y, the non-basic
(hidden) continuous and integer variables respectively, into a user-defined l groups and
generating a bid from each of those l groups at each iteration (i.e., adding at most l
variables to the set of visible variables per iteration). The choice of l, and the decision
on how exactly to partition the variables into the l groups, is completely arbitrary, but
by partitioning intelligently one can greatly reduce the problem size. In Chapter 5, a case
study on radiation therapy, we utilize this flexibility to partition the fluence map
variables by the beam from which they are generated; at each iteration we generate a bid
from each beam.
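The grouped bid generation step can be sketched as follows: given dual prices λ from the pricing problem and the hidden columns partitioned into l groups, each group nominates its lowest-reduced-cost column, and only columns with negative reduced cost are allowed to enter. All data and names here are invented:

```python
# Invented data: hidden columns with cost c_i and column (H)_i,
# dual prices lam from the pricing problem, and l = 2 groups.
hidden = {
    "x3": (4.0, [1.0, 0.0]),
    "x4": (1.0, [0.0, 2.0]),
    "y2": (6.0, [1.0, 1.0]),
    "y3": (2.0, [2.0, 0.0]),
}
groups = {"continuous": ["x3", "x4"], "integer": ["y2", "y3"]}
lam = [1.5, 1.0]

def reduced_cost(name):
    """c_i - (H^T)_i lam, as in (3.8) and (3.9)."""
    c, col = hidden[name]
    return c - sum(l * a for l, a in zip(lam, col))

# Each group nominates its lowest-reduced-cost column; only columns
# with negative reduced cost may enter the set of visible variables.
entering = {}
for g, names in groups.items():
    best = min(names, key=reduced_cost)
    if reduced_cost(best) < 0:
        entering[g] = best
print(entering)
```

If no group produces a negative reduced cost, the auction stops, mirroring the optimality condition of delayed column generation.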
3.2 Process Flow
3.2.1 Initialization
We begin by attempting to solve the WDP, but to do so we require that the user generates
a set of visible variables (continuous and integer) large enough for the solver to be able
to actually obtain a solution.
3.2.2 Solve WDP
Given the current set of visible variables, those added in the initialization and those added
by the BGPs in the preceding iterations of the algorithm, solve the WDP as a MILP.
Since, generally, the WDP has fewer variables than the complete MILP formulation (i.e.,
not all the variables are visible), there will be considerable time savings. The goal is that
this problem is small enough that repeatedly solving it will still take a shorter time than
solving the complete problem with all variables visible.
3.2.3 Solve the PP
After solving the winner determination problem, there is no real need to independently
formulate and solve the pricing problem. Instead, one can take the solution obtained
from the WDP, add p_y constraints fixing the integer variables, and solve the problem
using any simplex-based LP solver with the current solution as the starting point. The
solver will finish in one iteration and can output all the dual variables.
3.2.4 Solve The BGPs
To determine which variables should be included in future iterations of the algorithm, we
solve the BGPs, adding the variable(s) with the lowest reduced cost from each of the l groups.
As an example, for a general MILP one can set l = 2 and partition the variables into
continuous and integer, or alternatively l = 3 for continuous, binary and general integer.
If there are no variables with negative reduced cost, we stop, as we have reached our
optimal solution. Otherwise, we continue by solving this new, larger winner determination
problem, providing the last solution of the WDP as the starting point for the algorithm,
as it will still be feasible.
3.2.5 Considerations
Depending on the exact problem formulation, this mechanism is sensitive to the initial
set of visible variables provided to the auctioneer. Gradients indicate how slight pertur-
bations of a vector affect some value. Furthermore, these gradients are only valid within
the vicinity of the vector from which they were evaluated. However, perturbing an integer
variable by a small amount, δ < 1, is impossible. For this reason, since the dual variables
were derived from the gradient information obtained after fixing the integer variables to
the solution obtained from the previous iteration of the WDP, we are more likely to retain
the same choice of integer variables in future iterations. It is only within the vicinity of
the current solution that the gradient is valid. As a result, at an adjacent feasible integer
solution the gradient could look completely different, requiring that different variables
enter to reduce the objective. For example, if we have binary constraints indicating whether
certain continuous variables can be used, there is a higher tendency to select those
variables whose associated integer variable is already set to 1 than to select a variable
corresponding to an integer variable that is not 1 (i.e., were such a variable to enter,
then unless the integer solution changes, we cannot allow it to enter the basis). The
reason for this is that we are utilizing gradient information telling us that the only
feasible mechanism for decreasing the objective value is to make visible those variables
whose associated binary variable the WDP has already turned on. In
the context of chapter 5, the gradient information most closely resembles the utilization
of those fluence maps corresponding to beams that were already chosen to be on. To
change the selection of beam angles, a fluence map needs to be generated that reduces
the objective by an amount equal to the difference between the previous best choice of
beams and the best choice of beams with this new beam introduced. Since integer variables
greatly affect computational requirements, there are three strategies that should be
employed when deciding which variables should be visible and which should not. First,
one can try to minimize the number of integer variables. The algorithm is capable of
selecting integer variables to make visible to the auctioneer; having few integer variables
in the first set of visible variables reduces the computational requirements of obtaining
the first solution to the WDP. Future iterations will have the previous WDP solution
available as an upper-bound solution. This will significantly reduce computation time as
Table 3.1: Potential Strategies When Generating Bids

Strategy                       Usage                          Sample Formulation
Minimize p_y                   When generating initial bids   minimize_{x_V, y_V, p_y}  M p_y
                                                              subject to  V_x x_V + V_y y_V = b
                                                                          x_V ∈ R^{p_x}, x_V ≥ 0
                                                                          y_V ∈ Z^{p_y}
Maximize impact                BGP                            maximize_i  H_i^T H H_i
                                                              subject to  c̄_i < 0
Prioritize integer variables   BGP                            minimize_i  c̄_i − M · I(i is Integer Var)
                                                              subject to  c̄_i < 0
it allows the auctioneer to prune the branch and bound search tree utilized in optimizing
the WDP. The second strategy is to maximize the impact on the hidden variables.
Selecting integer variables that share non-zero coefficients with as many hidden
variables as possible means that the auctioneer's choice of the optimal solution to
the WDP greatly affects the reduced costs of more variables. Therefore, there is a higher
likelihood that a column that can reduce the objective value of the WDP will be chosen.
The last strategy is to put a priority on making integer variables visible. The reason,
as mentioned before, is that there are often scenarios where, for a continuous variable to
enter the basis, a corresponding integer variable must be 1. In that case, making that
continuous variable enter requires first making the integer variable visible and then making
the continuous variable visible. Table 3.1 summarizes the strategies described and suggests
potential formulations. Note that M is some large number and I(i is Integer Var) is an
indicator function indicating that the ith variable is one of the integer variables.
While the second strategy is out of the scope of this work, we utilize the first and
third strategies extensively in Chapter 5. In terms of minimizing the number of integer
variables, we start with all the beams turned off, and at each iteration we decide on one
beam to enter the set of visible variables. Therefore, the first iteration has no visible
integer variables and can be solved as an LP. In each subsequent iteration we add one
integer variable, prioritizing integer variables over continuous ones so that we maximize
the likelihood of obtaining a good final choice of beams while still providing a good
initial solution to the WDP so that it can efficiently prune the branch and bound tree.
3.2.6 Convergence & Optimality
Theorem 1. If there exists an optimal solution to the MILP, then the algorithm converges
to a locally optimal solution.

Proof. Since the BGP minimizes reduced cost, any negative reduced cost causes the
addition of a bid that gives the dual of the pricing problem a lower or equal (in the case
of degenerate basic feasible solutions) objective value. Since the sequence {WDP*_t} is
non-increasing and bounded below by the MILP optimal value, MILP*, it converges to some
value WDP*_∞ ≥ MILP*.
Theorem 1 indicates that the algorithm will converge to a locally optimal solution if one
exists. Special care, however, must be taken when dealing with problems with infinitely
many variables: one can loop forever adding variables via the BGP in an inconveniently
formed problem where one solution is degenerate, with infinitely many variables optimal
at 0. In the general case, however, when working with a finite number of variables,
n_x + n_y < ∞, there will be at most as many iterations as there are variables, n_x + n_y.
Theorem 2. If min_{i∈{1,2,...,n_x−p_x}} c̄_{x_i} = c_{x_i} − (H_x^T)_i λ ≥ 0 and
min_{i∈{1,2,...,n_y−p_y}} c̄_{y_i} = c_{y_i} − (H_y^T)_i λ ≥ 0 and
max_{i∈{1,2,...,p_y}} c̄_{y_i} = |c_{y_i} − (V_y^T)_i λ − μ_{y_i}| = 0, then the solution
is globally optimal.

Proof. Since c_{x_i} − (H_x^T)_i λ = (ν_x)_i, if min_{i∈{1,2,...,n_x−p_x}} c̄_{x_i} =
c_{x_i} − (H_x^T)_i λ ≥ 0 then ν_x ≥ 0. Similarly, ν_y ≥ 0 and μ_y = 0. By definition,
μ_x = 0 from (2.8). Since ∂L/∂x_V^* = μ_x, ∂L/∂x_H^* = ν_x, ∂L/∂y_V^* = μ_y and
∂L/∂y_H^* = ν_y, we have ∂L/∂x_V^* = 0, ∂L/∂x_H^* ≥ 0, ∂L/∂y_V^* = 0 and ∂L/∂y_H^* ≥ 0.
Therefore, changing any visible variable will increase the objective, and increasing any
hidden variable from 0 will also increase the objective. Since ∂L/∂y_V^* = 0, this solution
is optimal for the integer-relaxed LP formulation of the problem, which is a lower bound
on the MILP solution; therefore this solution attains the lower bound, and hence is optimal.
Theorem 2 describes a special case that does not always present itself in practice.
Essentially, if one were to relax the integer variables in the MILP formulation and there
exists an integer optimal solution to the resulting LP, then the proposed algorithm will
be able to identify a globally optimal solution without having to add all n_x + n_y
variables to the WDP.
3.3 Extensions
3.3.1 Integer Simplex
The vast majority of the computational complexity is attributed to trying to solve the
winner determination problem as an integer problem. There are heuristics that can
be implemented to greatly reduce the computational burden. Suppose, for example, that an
integer variable is selected to be visible; instead of allowing the auctioneer to assign it a
value optimally through an integer programming mechanism, we can immediately
set its value to 1 and add that as a constraint to the LP solver. The issue is that
there are occasions where this addition of an integer variable can make the problem
infeasible. To circumvent this issue, we can obtain the basis matrix as was done in (2.9)
and utilize an integer programming solver to vary only the integer variable to be added
to the visible set, the continuous variables, and those integer variables already in the
visible set with non-zero d values. Algorithm 1 shows the pseudo-code for this potential
integer simplex extension.
The other scenario in this integer simplex extension is the removal of variables. Suppose
it is beneficial to set one of the integer variables, k, that was previously set to 1 back
down to 0. Instead of waiting for the BGPs to generate a variable for which d_k ≠ 0, we
can extend the role of the bidders to identify the removal of previously generated integer
variables. To achieve this we can consider another variable, k−, with (A_y)_{k−} = −(A_y)_k.
This can then be added as a regular integer variable, as before:
c̄_{y_{k−}} = c_{y_{k−}} − (H_y^T)_{k−} λ        (3.10)
Alternatively, if we have an integer variable, k+, that is already 1, we can increase it to 2
by deciding to add it again:
c̄_{y_{k+}} = c_{y_{k+}} − (V_y^T)_{k+} λ − μ_{y_{k+}}        (3.11)
In this way, when these reduced costs are negative, we immediately move in the direction
of the gradient with the hope of reducing the objective value. The benefit, of course, is
that we minimize the need to solve the WDP as a MILP. In Chapter 5 we take advantage of
the specific problem structure to solve the WDP as an LP, thereby reducing the
computational requirements of the algorithm.
Algorithm 1 Ensuring Feasibility With Integer Simplex

1: procedure EnsureFeasibility(iToEnter, y*_V, x*_V)
2:     d ← B^{-1} N_{iToEnter}
3:     V ← [V  N_{iToEnter}]
4:     Solve
           minimize_{x_V, y_V}    c_{x_V}^T x_V + c_{y_V}^T y_V
           subject to             V_x x_V + V_y y_V = b
                                  (y_V)_i ∈ Z           ∀ {i | d_i ≠ 0}
                                  (y_V)_i = (y*_V)_i    ∀ {i | d_i = 0}
                                  x_V ≥ 0
5:     return (x*_V, y*_V)
3.3.2 Cutting Planes
The introduction of cutting planes greatly revolutionized MILP solving algorithms. Our
algorithm utilizes the shadow prices of each of the constraints to identify how to generate
bids. At each iteration we can also solve a parallel cutting planes algorithm to identify
a constraint to add to our problem. The additional constraints would provide more
gradient information from which we could obtain higher quality bids. As we would be
solving an LP at each iteration, the algorithm would also have a lower bound on the
optimal solution, potentially paving the way for more advanced stopping conditions. In
Subsection 2.2.1, we discussed the importance of cutting plane methods in commercial
MILP solvers. This inspired us to attempt to incorporate Gomory fractional cuts into the
algorithm, but inconsistent behaviour attributed to rounding errors made generating safe
cuts difficult, placing safe cut generation out of the scope of this work. We describe the
methodology below, but stress that much more work still needs to be done on filtering
out unsafe cuts.
Given an LP and an optimal basis matrix B ∈ R^{m×m}, one can multiply the constraints
by B^{-1}, obtaining the following LP:

minimize_{x_B, x_N}    c_B^T x_B + c_N^T x_N
subject to             I_m x_B + B^{-1}N x_N = B^{-1}b           (3.12)
                       x_B ≥ 0,  x_N = 0
Suppose further that one element i of x_B corresponds to an integer variable y_i that was
relaxed, and the resulting solution has x_i ∉ Z. We can generate a Gomory fractional cut.
In summation form, the row corresponding to x_i can be expressed as

x_i + ∑_{j∈N} (B^{-1}N)_{ij} x_j = (B^{-1}b)_i.

Let f_{ij} = (B^{-1}N)_{ij} − ⌊(B^{-1}N)_{ij}⌋ be the positive fractional component of
(B^{-1}N)_{ij}, and f_{i0} = (B^{-1}b)_i − ⌊(B^{-1}b)_i⌋ be the positive fractional
component of (B^{-1}b)_i. Then the constraint ∑_{j∈N} f_{ij} x_j ≥ f_{i0} is feasible for
every integer solution but cuts off the current solution x_B = B^{-1}b; therefore the
constraint is a cutting plane.
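Reading a Gomory fractional cut off a tableau row amounts to taking fractional parts of the row coefficients and the right-hand side; a minimal sketch using exact arithmetic (the row data is invented):

```python
from fractions import Fraction
from math import floor

def frac(v):
    """Positive fractional part v - floor(v)."""
    return v - floor(v)

# Invented tableau row for a basic integer variable x_i:
#   x_i + sum_j (B^-1 N)_ij x_j = (B^-1 b)_i
row = {"x4": Fraction(3, 4), "x5": Fraction(-1, 2)}  # (B^-1 N)_i
rhs = Fraction(7, 2)                                 # (B^-1 b)_i

# Gomory fractional cut:  sum_j f_ij x_j >= f_i0
cut = {j: frac(a) for j, a in row.items()}
cut_rhs = frac(rhs)
print(cut, ">=", cut_rhs)
```

Exact rationals are used deliberately: the safe-cut difficulties mentioned above stem precisely from taking fractional parts of floating-point tableau entries.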
Suppose further we were to multiply the row by a positive integer h ∈ Z_+:

h x_i + ∑_{j∈N} (h B^{-1}N)_{ij} x_j = h (B^{-1}b)_i.

Again computing the positive fractional parts, we obtain
f^{(h)}_{ij} = (h B^{-1}N)_{ij} − ⌊(h B^{-1}N)_{ij}⌋ and
f^{(h)}_{i0} = (h B^{-1}b)_i − ⌊(h B^{-1}b)_i⌋. All integer solutions satisfy
∑_{j∈N} f^{(h)}_{ij} x_j ≥ f^{(h)}_{i0}, yet the current solution x_B = B^{-1}b does not;
therefore each value of h generates another cutting plane. In theory, we could add all of
these constraints to the model and re-solve the LP, further increasing our lower bound on
the integer solution, but the memory and processing requirements would grow unmanageable.
Therefore we must decide on a mechanism to add a single constraint at each iteration of
the algorithm.
Deciding on Best Variable to Generate Cuts From
One can generate a cut from any and all of the integer variables with fractional LP-optimal
values. Had we been more confident in the cutting plane algorithm implemented,
research could have been done to determine the efficacy of utilizing variables that are
not in the set of variables visible to the CA portion of the algorithm: having generated
a cut from such variables, we could immediately add them to the set of visible variables
for the next iteration of the CA. Another approach would be to generate cuts only from
those variables that are visible to the CA; the added constraint introduces a new dual
variable that could be beneficial in generating more accurate bids.
Deciding on Best h
The metric that we utilized to decide on the best cut is to maximize the distance from
the cutting plane to the current LP solution. This is an intuitive heuristic with a very
simple implementation:

maximize_h    |f_i^{(h)T} x^* − f_{i0}^{(h)}| / ‖f_i^{(h)}‖        (3.13)
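The selection of h in (3.13) can be sketched by scanning a small range of multipliers and keeping the one whose cut lies furthest from the current LP point; the tableau row, solution, and range of h below are invented:

```python
from math import floor, sqrt

def frac(v):
    """Positive fractional part v - floor(v)."""
    return v - floor(v)

# Invented data: tableau row (B^-1 N)_i, right-hand side (B^-1 b)_i,
# and the current values of the nonbasic variables (at their bound 0).
row = [0.75, -0.5]
rhs = 3.5
x_star = [0.0, 0.0]

def cut_distance(h):
    """Normalized violation of the h-multiplied cut at the current LP
    point, i.e. the objective of (3.13); zero rows generate no cut."""
    f = [frac(h * a) for a in row]
    f0 = frac(h * rhs)
    norm = sqrt(sum(v * v for v in f))
    if norm == 0.0:
        return 0.0
    violation = abs(sum(fi * xi for fi, xi in zip(f, x_star)) - f0)
    return violation / norm

best_h = max(range(1, 6), key=cut_distance)
print(best_h)
```

Note that some values of h produce all-integer rows (no cut at all), which is why the zero-norm case must be screened out before the division.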
3.3.3 Computing an Initial Basis
In the literature, much work has been done on determining an initial feasible solution to
MIPs. For binary problems there are the Pivot-and-Complement and OCTANE algorithms,
which heuristically find quick solutions to binary integer programs (Balas et al., 2004).
For general MIPs, the Pivot-and-Shift algorithm utilizes simplex-inspired pivot operations
that maintain feasibility while attempting to either remove integer variables from the
basis or improve the objective value (Huang and Mehrotra, 2013). At points in the
algorithm, the solution is either rounded or a small MIP search is conducted in a small
neighbourhood of the current solution. The Feasibility Pump and Active Constraint
Branching are two other algorithms utilized to achieve integer feasibility quickly.
The Feasibility Pump starts with some LP-feasible solution (x^*, y^*); one rounds the
integer variables, ȳ_i = [y_i^*], then minimizes the distance ‖(x, y) − (x^*, ȳ)‖_1. If the
distance is 0 then the solution is integer feasible; otherwise one uses the new point as
the starting point, rounds, and repeats (Bertacco et al., 2007).
Active Constraint Branching is a heuristic utilized within the branch and bound tree
to decide which variable to branch on. The algorithm first identifies which constraints
are active and which are not. Utilizing different measures, the heuristic determines which
variable influences the most active constraints; this variable is the one that is branched
on (Patel and Chinneck, 2007).
Chapter 4
Case Study: Benchmark Tests
4.1 Problem Overview
Since this framework was formulated for general MILPs, it is pertinent to determine and
analyse its aptitude for solving general problems. MIPLIB 2010, available at
http://miplib.zib.de, is a standard test set used to compare the performance of mixed
integer optimizers. It was developed for the sole purpose of providing real-world mixed
integer problems with which researchers can compare the performance of their optimizers
(Koch et al., 2011). To demonstrate the capabilities of the proposed Generalized
Combinatorial Auction, we ran all the benchmark instances. The tests are categorized by
12 different constraint types; utilizing this information, we can aggregate the results and
determine how well the proposed algorithm performs on different problem types. The
constraint types are as follows:
• Aggregation Constraints (AG) aixi + akxk = b
In this scenario xi, xk can either be integer or continuous
• Variable Bound Constraints (VB) xi ≤ akxk + b
Again, xi, xk can either be integer or continuous. These constraints are used to
change the bound of a variable based on the value of another (e.g. change the
maximum flow on a road based on the existence of an added lane)
• Set Partitioning Constraints (PR) ∑ y_i = 1
The y_i's are binary and collectively they define a set of items, only one of which can
be used at a time. We would expect these constraints to perform either very poorly or
very well, because once a y_i element has been set to 1, all future gradient information
will favour this result, making it very easy for the algorithm to get stuck at a local
minimum. Alternatively, if the correct element was chosen, it is very likely that the
algorithm will converge quickly to the correct result.
• Set Packing Constraints (PC): ∑ y_i ≤ 1
Again, the y_i's are binary. These constraints are more forgiving since they allow
at most one element of the set to be 1, as opposed to exactly one element. Since
the auctioneer is not necessarily forced to choose a y_i to
pack the set, it is more likely that the decision can be delayed until better
gradient information is available to decide which element should enter.
• Set Covering Constraints (CV): ∑ y_i ≥ 1
The y_i's are binary. Requiring that at least one element be non-zero means that the
auctioneer obtains very little gradient information from this constraint: once
an element has been set to 1, all other elements of the set are free to vary as they
please. Therefore the dual variable associated with this constraint will probably
not yield very useful information for deciding which variable should enter the set of
visible variables.
• Cardinality Constraints (CR): ∑ y_i = b
The y_i's are binary. Cardinality constraints should perform similarly to set
packing constraints. Once an initial choice of b elements has been made, future
iterations will tend to choose the same b elements. This can be either beneficial
or problematic depending on the initial choice and the stability of the objective
value around those variables.
• Equality Knapsack Constraints (EK): ∑ a_i y_i = b
Again, the y_i's are binary. The benefit of these constraints is that the gradient
information targets specific elements of y: those with large |a_i| are made
more or less valuable depending on the sign of a_i.
• Binary Packing Constraints (BN): ∑ a_i y_i + a_k x_k ≤ a_k
The y_i's are binary while the x_k's are continuous.
• Invariant Knapsack Constraints (IK): ∑ y_i ≤ b
The y_i's are binary. Much like the set covering constraints, invariant
knapsack constraints will not provide any dual information until ∑ y_i = b. Depending on the remainder of the constraints, the solution may have already converged,
or be on the verge of converging, to a local minimum.
• Binary Knapsack Constraints (KS): ∑ a_i y_i ≤ b
The y_i's are binary. The benefit of this constraint over invariant knapsack constraints is that the dual information affects the different y_i's differently, making it
more likely that we select the better y_i to enter the set of visible variables.
• Integer Knapsack Constraints (IKS): ∑ a_i y_i ≤ b
The y_i's are integer. The difficulty with integer variables, as opposed to binary
variables, is that while it may be optimal to add an integer variable to the set of
visible variables and set it to 2 rather than 1, the dual variables are only valid at,
and in the vicinity of, the current optimal solution. The further from the current
optimum, the less adept the dual variables are at predicting changes in the objective.
• Mixed Binary Constraints (M01): ∑ a_i^y y_i + ∑ a_i^x x_i ≤ b
The y_i's are binary and the x_i's are continuous.
4.2 Implementation of GCA Algorithm
Testing on a general set of problems means that there is no simple mechanism for taking
advantage of specific problem structure or underlying metrics. As a result, we chose to
have a single bidder that generates both continuous and integer bids as required; this more
accurately measures the algorithm's prowess at generating good bids, since
only a few bids are generated per iteration. In the extreme case, if one were to make all
possible variables available to the auctioneer, we would effectively be solving the original
MILP. However, by reducing the number of available variables, there is a greater
need to generate the correct bids, as otherwise there could be very large discrepancies
between the true optimum and the solution provided by this algorithm. The
algorithm terminates under any of the following conditions:
1. The problem is infeasible
2. The problem achieves an unbounded optimal solution
3. The bidder is unable to generate a bid with negative reduced cost
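Condition 3 is the standard reduced-cost test from column generation: a new bid (column) is worth adding only if c_j − πᵀA_j < 0, where π holds the auctioneer's current dual prices. A minimal sketch with made-up numbers (not data from the actual implementation):

```python
def reduced_cost(cost, column, duals):
    """Reduced cost c_j - pi^T A_j of a candidate bid (minimization)."""
    return cost - sum(p * a for p, a in zip(duals, column))

def generate_bid(candidates, duals):
    """Return the candidate with the most negative reduced cost, or None
    when no bid prices out -- the situation in termination condition 3."""
    best = min(candidates, key=lambda c: reduced_cost(c[0], c[1], duals))
    return best if reduced_cost(best[0], best[1], duals) < 0 else None

duals = [2.0, 0.5]                               # hypothetical dual prices
candidates = [(3.0, [1, 1]), (4.0, [2, 1])]      # (cost, column) pairs
print(generate_bid(candidates, duals))           # (4.0, [2, 1]): reduced cost -0.5
```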
4.2.1 Initial Feasible Solution
In the literature there are various mechanisms for determining an initial integer feasible solution
to a generalized MILP. The pivot-and-shift algorithm is a simplex-inspired method that
executes extra simplex pivot operations with the goal of making as many of
the integer variables non-basic as possible. Three types of pivots are used in the algorithm,
all of which attempt to maintain LP feasibility. Type 1 exchanges a basic integer variable
with a non-basic one. Type 2 improves the objective while replacing a like-for-like
variable (continuous with continuous or integer with integer). Type 3 reduces integer
infeasibility by exchanging like-for-like variables. When the pivots stall, the algorithm
can round certain variables (shifting) or perform a small neighbourhood MIP search (Balas
et al., 2004).
The algorithm used in this work is the feasibility pump, simply because it is the
algorithm of choice in CPLEX version 12 (the current version at the time of writing). Start with any solution (x_j, y_j) to the
LP relaxation of the MIP, then round the integer component, {y}_j := ⌊y_j⌋, of
the current solution. The feasibility pump generates an LP that minimizes the distance
Δ(y_j, {y}_j) and then updates y_j to be the solution of that LP. In this way consecutive
roundings come closer and closer to the feasible region (Bertacco et al., 2007).
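The two ingredients of each pump iteration, the rounding step and the L1 distance that the auxiliary LP minimizes, can be sketched as follows. In the full method, Δ is minimized over the original LP feasible region by introducing variables d_i ≥ y_i − {y}_i and d_i ≥ {y}_i − y_i; we omit the LP solve itself.

```python
import math

def round_integers(y):
    """Rounding step {y} := floor(y), component-wise, as written above."""
    return [math.floor(v) for v in y]

def l1_distance(y, y_rounded):
    """Delta(y, {y}) = sum_i |y_i - {y}_i|: the auxiliary LP objective."""
    return sum(abs(a - b) for a, b in zip(y, y_rounded))

y = [0.7, 2.2, 1.0]          # integer components of the current LP solution
yr = round_integers(y)       # [0, 2, 1]
print(l1_distance(y, yr))    # approximately 0.9
```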
4.3 Results
4.3.1 Comparison With CPLEX
Table 4.1 compares the objective values provided by the generalized combinatorial auction with the true optimal solutions, for those test case instances where both
solutions exist and are available. The table also lists the types of constraints
involved in each test case; this provides a baseline for determining the constraint
types with which the algorithm performs well and those with which it performs poorly.
Table 4.1: MIPLIB Benchmark Results
Test Case Opt. GCA Opt. ∆% AG VB PR PC CV CR EK BN IK KS IKS M01
30n20b8 302 753 149% 0 1 1 0 0 0 0 0 0 0 0 0
50v-10 3311.18 3597 8.63% 0 0 0 0 0 0 0 0 0 0 0 1
a1c1s1 11503.4 16752 45.63% 1 0 0 0 0 0 0 0 0 0 1 0
acc-tight5 0 0 0% 1 1 1 1 1 0 0 1 0 0 1 0
aflow40b 1168 1432 23% 0 1 1 0 0 0 0 0 0 0 0 1
air04 56140 56570 1% 1 0 1 0 0 0 0 0 0 0 0 0
app1-2 -41 -27 34% 1 0 0 1 0 0 0 1 0 0 1 0
bab5 -106412 -24555 76.92% 1 0 1 1 0 1 1 0 1 1 0 1
beasleyC3 754 954 27% 1 0 0 0 0 0 0 0 0 0 1 0
bienst2 54.6 70 28% 1 0 0 0 1 0 0 0 0 0 1 0
binkar10 1 6742 7081 5% 0 0 0 0 0 0 0 0 0 0 1 0
blp-ar98 6205.21 7936 27.89% 0 1 0 1 0 0 0 0 0 0 0 1
blp-ic97 4025.02 4562 13.34% 0 0 0 1 0 0 0 0 0 0 0 1
bnatt350 0 0 0% 0 0 0 1 0 0 0 1 0 0 1 0
cov1075 20 20 0% 0 0 0 1 0 0 0 1 0 0 1 0
csched010 408 487 19% 1 1 0 0 0 0 0 0 0 0 1 0
d10200 12430 3 99.98% 0 0 1 0 0 0 0 0 1 1 0 0
danoint 65.67 95 45% 1 0 0 0 1 0 0 0 0 0 1 0
eil33-2 934 1022 9% 0 1 0 0 0 0 0 0 0 0 0 0
eilB101 1217 1401 15% 0 0 1 0 0 0 0 0 0 0 0 0
enlight13 71 71 0% 0 0 0 0 0 0 0 0 0 0 0 0
enlight15 69 69 0.00% 0 0 0 0 0 0 0 0 0 0 0 0
ex10 100 100 0.00% 0 1 1 0 0 0 0 0 1 0 0 0
ex9 81 81 0% 0 1 1 0 0 0 0 0 1 0 0 0
g200x740i 30086 44069 46.48% 0 1 0 0 0 0 0 0 0 0 0 1
germanrr 47095900 53065577 12.68% 0 1 0 0 0 0 0 0 0 0 0 1
gmu-35-40 -2407000 -2406000 0% 1 0 1 0 0 0 0 0 0 0 1 0
iis-100-0-cov 29 29 0% 0 0 0 1 0 0 0 1 0 0 0 0
iis-bupa-cov 36 36 0% 0 0 0 1 0 0 0 1 0 0 0 0
iis-pima-cov 33 33 0% 0 0 0 1 0 0 0 1 0 0 0 0
k16x240 10674 11359 6.42% 0 1 0 0 0 0 0 0 0 0 0 1
lectsched-2 0 0 0.00% 1 1 0 0 0 0 0 0 0 0 1 0
lectsched-3 0 0 0.00% 1 1 0 0 0 0 0 0 0 0 1 0
lectsched-4-obj 4 19 375% 1 1 0 0 0 0 0 0 0 0 1 0
lotsize 1480200 2539250 71.55% 0 1 0 1 0 0 0 0 1 0 0 1
m100n500k4r1 -25 -23 8.00% 0 0 0 1 0 0 0 0 0 0 0 0
macrophage 374 601 61% 0 0 0 1 0 0 0 1 0 0 0 0
maxgasflow -4.5E+07 -3.8E+07 14.74% 1 1 0 0 0 0 0 0 0 0 0 1
mc11 11689 13509 15.57% 0 1 0 0 0 0 0 0 0 0 0 1
mcsched 211900 248600 17% 1 1 1 1 0 0 0 0 1 0 0 1
mik-250-1-100-1 -66730 -66730 0% 0 0 0 0 0 0 0 0 0 0 0 0
mine-166-5 -5.7E+08 -5.4E+08 5% 0 1 0 0 0 0 0 0 0 1 0 0
mine-90-10 -7.8E+08 -6.8E+07 91% 0 1 0 0 0 0 0 0 0 1 0 0
mkc -563.846 -498 11.68% 0 1 0 1 0 0 0 0 1 0 0 1
msc98-ip 19840000 26760000 35% 1 1 1 0 0 1 0 0 1 1 1 1
mzzv11 -21720 -11930 45% 1 1 1 1 1 0 0 1 1 0 1 1
n3-3 15920 18350 15% 1 0 0 0 0 0 0 0 0 0 0 1
n4-3 8993 10010 11% 1 0 0 0 0 0 0 0 0 0 0 1
n9-3 14410 15650 9% 1 0 0 0 0 0 0 0 0 0 0 1
neos-1109824 378 556 47% 0 1 1 0 0 1 0 0 1 0 0 0
neos-1171692 -273 -248 9% 0 1 0 0 0 0 0 0 1 0 0 1
neos-1171737 -195 -165 15% 0 1 0 1 0 0 0 0 0 0 0 1
neos-1224597 -428 -428 0% 1 1 1 0 1 0 0 1 1 0 1 1
neos-1396125 3000 3000 0% 1 1 1 0 1 0 0 1 1 0 1 0
neos-1426635 -176 -171 3% 0 1 0 1 0 0 0 0 0 0 0 1
neos-1426662 -44 -44 0% 0 1 0 1 0 0 0 0 0 0 0 1
neos-1436709 -128 -118 8% 0 1 0 0 0 0 0 0 0 0 0 1
neos-1440225 36 36 0% 0 0 1 0 0 0 0 0 0 0 0 0
neos-1440460 -179.3 -168 6% 0 1 0 0 0 0 0 0 0 0 0 1
neos-1442119 -181 -175 3% 0 1 0 0 0 0 0 0 0 0 0 1
neos-1442657 -154.5 -140 9% 0 1 0 0 0 0 0 0 0 0 0 1
neos15 80600 80600 0% 1 0 0 0 0 0 0 0 0 0 1 0
neos-1620770 9 10 11.11% 0 1 1 0 0 0 0 0 0 0 0 0
neos-506422 0 361 ∞% 0 1 1 0 0 0 0 0 0 0 0 1
neos-506428 583800 875700 50% 0 1 1 0 0 0 0 0 1 0 0 1
neos-520729 -1385000 -1385000 0.00% 1 0 1 0 0 0 0 0 0 0 0 1
neos-777800 -80 -80 0% 0 1 0 1 0 1 0 0 0 0 0 0
neos808444 0 0 0% 1 1 1 0 0 1 0 1 0 0 1 0
neos-824695 31 35 13% 0 1 0 0 1 0 1 0 0 0 1 0
neos-826650 29 32 10% 0 1 1 0 1 0 1 1 0 0 1 0
neos-826694 58 60 3% 0 0 1 0 1 0 1 1 0 0 1 0
neos-826812 58.01 67 15% 1 0 0 0 0 1 0 1 0 0 0 1
neos-826841 29.01 31.01 7% 1 0 1 0 0 1 0 1 0 0 0 1
neos-849702 0 0 0% 0 0 1 1 0 1 0 0 1 0 0 0
neos-885086 -243 -228 6.17% 0 1 0 1 0 0 0 0 0 0 0 1
ns1952667 0 0 0.00% 0 0 0 0 0 0 0 0 0 0 1 0
sp98ic 4.49E+08 4.55E+08 1% 1 0 1 1 0 0 0 1 0 0 1 0
sp98ir 2.2E+08 2.26E+08 3% 1 0 0 0 0 0 1 0 1 0 1 1
swath 467.4 520.4 11% 0 1 0 0 1 0 0 0 0 0 1 0
transportmoment -3.1E+09 -3E+09 3% 1 1 0 0 0 0 0 0 0 0 0 1
triptim1 22.87 22.87 0% 1 0 1 1 0 0 0 1 0 0 0 1
tw-myciel4 10 10 0% 1 0 0 1 0 0 0 1 0 0 0 1
umts 30090000 31600000 5% 1 1 0 1 1 0 0 1 0 0 1 1
vpphard 5 28 460% 0 1 1 0 0 1 0 0 1 0 0 0
wachplan -8 -8 0% 1 0 0 0 1 0 0 1 0 0 1 1
We begin the investigation of our results by looking at how well our algorithm performed
on the aggregate set of test cases. Figure 4.1 illustrates the proportion of test cases
with a percentage difference below a given threshold, measured against the actual optimal solution as
provided by the MIPLIB documentation. Since we are comparing against the guaranteed
optimal solution, a negative percentage difference is impossible; ideally we would achieve 0% on each test case. While the GCA does not achieve the exact
optimum on every test case, it converges to the optimal solution in 33% of the test
cases and is within 9% of the optimal solution in 50% of the cases. Only one in every
five test cases results in a GCA optimum that is more than 20% from the true optimal
solution. This preliminary investigation indicates that the algorithm is promising as
a mechanism for solving general MILPs, but, as advertised, it does not guarantee optimality.
We can further deepen our understanding of the strengths and limitations of the GCA
algorithm by investigating how well it performed in the presence of specific constraint
types.
Figure 4.1: Cumulative Proportion of Test Cases Having at Most the Specified Percentage Error (percentage difference, 0–200%, on the horizontal axis; percentage of test cases, 0–100%, on the vertical axis)
In order to determine how well the GCA algorithm performs in the presence of each
constraint type, we require a metric for the goodness of an outcome. Our metric,
which we call g, is an indicator that the percentage difference is below some threshold γ:
g_i = I(Δ_i ≤ γ) for each test case i, so that g_i = 1 (good) if Δ_i ≤ γ and 0 otherwise.
Similarly, we assign a value of 1 if a constraint type exists in test case i and 0 otherwise. The correlation
between the g_i's and the existence of each constraint type sheds light on whether
specific constraint types allow the GCA to yield more accurate optimal solutions, or
instead hinder the algorithm from performing well. We further extend this analysis
by looking at the existence, or lack thereof, of pairs of constraint types. In Figure 4.2
we report the correlation between the existence of two constraint types and the g_i's,
computed for varying γ. To be more specific, sub-figure 4.2a shows that the existence
of aggregation constraints is slightly positively correlated (0.09) with a goodness metric
of g_i = I(Δ_i ≤ 1%). However, when we require the existence of both the aggregation
and set partitioning constraints we obtain a higher correlation of 0.23, indicating that
the algorithm performed better when both constraint types existed than when only the
aggregation constraints did. Along the main diagonal of each sub-figure we see
how our goodness metric correlates with a single constraint type; off the main diagonal,
we require both constraint types to exist.
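Each cell of Figure 4.2 can be reproduced by the following computation; the data here are illustrative, while the real inputs are the Δ column and the 0/1 constraint columns of Table 4.1. For the off-diagonal cells, the presence vector is the element-wise AND of the two constraint columns.

```python
def goodness(deltas, gamma):
    """g_i = I(Delta_i <= gamma) for each test case i."""
    return [1 if d <= gamma else 0 for d in deltas]

def correlation(x, y):
    """Pearson correlation between two 0/1 indicator vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

deltas = [0.0, 8.63, 45.63, 0.0, 23.0]   # illustrative percentage differences
present = [1, 1, 0, 1, 0]                # constraint type present in each case
g = goodness(deltas, gamma=1.0)          # [1, 0, 0, 1, 0]
print(correlation(g, present))           # about 0.67
```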
A brief look at Figure 4.2 indicates that the algorithm excels at the set partitioning
constraints (PR), set packing constraints (PC), binary packing constraints (BN), and the
integer knapsack constraints (IKS). The algorithm performs poorly on equality knapsack
(EK), binary knapsack (KS) and mixed binary constraints (M01). Of the 21 binary
packing problems, 12 test cases achieved the exact optimal solution and only 3 such test
cases had a percentage difference of over 20%. The difference between these binary
packing constraints and the mixed binary constraints, on which the GCA performed
very poorly, is that the mixed binary constraints have multiple continuous variables
involved in each constraint whereas the binary packing constraints have only one. This
is significant because the more continuous variables there are, the fewer non-basic
integer variables are required to obtain an integer feasible solution. With only one
continuous variable in the binary packing constraints, there is a much higher likelihood
that more integer variables are forced into the visible set in order to obtain feasibility.
With better initial conditions, it is more likely that if the algorithm converges to
a local minimum, that minimum has a similar objective value. The benefit of binary
packing constraints over strictly binary constraints is that the one added continuous
variable makes it much more likely that the constraint is binding. Since non-binding
constraints have dual variables of 0, we would obtain no gradient information from which to
generate new variables; the existence of a single continuous variable per constraint thus appears to work
well. As we would expect, the set packing constraints performed ever so slightly better than
the set partitioning constraints. While the correlations are very similar between these
two types of constraints, one major difference is that the solution quality is far more
volatile with the set partitioning constraints. Of the 29 set partitioning problems, 4 test
cases had percentage differences of over 100% and another 5 test cases had differences
over 35%. The 25 set packing problems did not have a single test case with over
100% error, and only 3 test cases had a percentage error of over 33%. This coincides
with our hypothesis that set partitioning constraints, although they may quickly converge
to a solution, often converge to a local minimum. Set packing constraints have
more flexibility in that a feasible solution is less restrictive. In contrast to the set packing
constraints, the cardinality constraints do not leave much room for error. Once a feasible
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.09 0.09 0.23 0.00 0.12 -0.07 -0.11 0.19 -0.03 -0.11 0.12 -0.06
VB 0.09 -0.26 0.03 -0.07 0.03 0.04 -0.11 0.15 -0.02 -0.13 0.09 -0.36
PR 0.23 0.03 0.13 0.06 0.10 -0.03 -0.13 0.13 0.05 -0.13 0.13 -0.05
PC 0.00 -0.07 0.06 0.13 0.00 0.14 -0.08 0.26 -0.10 -0.08 0.06 -0.11
CV 0.12 0.03 0.10 0.00 0.00 N/A -0.13 0.12 0.14 N/A 0.00 0.08
CR -0.07 0.04 -0.03 0.14 N/A 0.00 -0.08 0.00 -0.07 -0.11 0.06 -0.16
EK -0.11 -0.11 -0.13 -0.08 -0.13 -0.08 -0.18 -0.11 -0.11 -0.08 -0.16 -0.11
BN 0.19 0.15 0.13 0.26 0.12 0.00 -0.11 0.29 0.14 N/A 0.19 0.12
IK -0.03 -0.02 0.05 -0.10 0.14 -0.07 -0.11 0.14 -0.04 -0.13 0.04 -0.18
KS -0.11 -0.13 -0.13 -0.08 N/A -0.11 -0.08 N/A -0.13 -0.18 -0.08 -0.11
IKS 0.12 0.09 0.13 0.06 0.00 0.06 -0.16 0.19 0.04 -0.08 0.13 0.00
M01 -0.06 -0.36 -0.05 -0.11 0.08 -0.16 -0.11 0.12 -0.18 -0.11 0.00 -0.34
(a) γ = 1%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.12 0.08 0.28 0.07 0.11 -0.07 -0.11 0.25 -0.04 -0.11 0.16 -0.07
VB 0.08 -0.28 0.01 -0.08 0.02 0.03 -0.11 0.15 -0.03 -0.14 0.08 -0.37
PR 0.28 0.01 0.16 0.15 0.09 -0.04 -0.14 0.20 0.04 -0.14 0.20 -0.06
PC 0.07 -0.08 0.15 0.17 0.00 0.13 -0.08 0.31 -0.10 -0.08 0.15 -0.12
CV 0.11 0.02 0.09 0.00 -0.01 N/A -0.14 0.11 0.13 N/A -0.01 0.07
CR -0.07 0.03 -0.04 0.13 N/A -0.01 -0.08 0.00 -0.07 -0.11 0.05 -0.16
EK -0.11 -0.11 -0.14 -0.08 -0.14 -0.08 -0.18 -0.11 -0.11 -0.08 -0.16 -0.11
BN 0.25 0.15 0.20 0.31 0.11 0.00 -0.11 0.34 0.13 N/A 0.25 0.11
IK -0.04 -0.03 0.04 -0.10 0.13 -0.07 -0.11 0.13 -0.05 -0.14 0.03 -0.19
KS -0.11 -0.14 -0.14 -0.08 N/A -0.11 -0.08 N/A -0.14 -0.18 -0.08 -0.11
IKS 0.16 0.08 0.20 0.15 -0.01 0.05 -0.16 0.25 0.03 -0.08 0.16 0.00
M01 -0.07 -0.37 -0.06 -0.12 0.07 -0.16 -0.11 0.11 -0.19 -0.11 0.00 -0.36
(b) γ = 2%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.13 0.06 0.26 0.06 0.09 -0.09 0.04 0.22 0.04 -0.12 0.19 -0.03
VB 0.06 -0.28 -0.01 -0.03 0.01 0.02 -0.12 0.13 -0.05 -0.14 0.06 -0.34
PR 0.26 -0.01 0.12 0.13 0.08 -0.05 -0.14 0.18 0.02 -0.14 0.18 -0.07
PC 0.06 -0.03 0.13 0.19 -0.01 0.12 -0.08 0.29 -0.11 -0.08 0.13 -0.07
CV 0.09 0.01 0.08 -0.01 -0.03 N/A -0.14 0.09 0.12 N/A -0.03 0.06
CR -0.09 0.02 -0.05 0.12 N/A -0.02 -0.08 -0.01 -0.09 -0.12 0.04 -0.17
EK 0.04 -0.12 -0.14 -0.08 -0.14 -0.08 -0.09 -0.12 0.04 -0.08 -0.05 0.04
BN 0.22 0.13 0.18 0.29 0.09 -0.01 -0.12 0.30 0.12 N/A 0.22 0.09
IK 0.04 -0.05 0.02 -0.11 0.12 -0.09 0.04 0.12 -0.01 -0.14 0.12 -0.12
KS -0.12 -0.14 -0.14 -0.08 N/A -0.12 -0.08 N/A -0.14 -0.19 -0.08 -0.12
IKS 0.19 0.06 0.18 0.13 -0.03 0.04 -0.05 0.22 0.12 -0.08 0.18 0.08
M01 -0.03 -0.34 -0.07 -0.07 0.06 -0.17 0.04 0.09 -0.12 -0.12 0.08 -0.31
(c) γ = 3%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.12 0.09 0.22 0.03 0.07 -0.10 0.03 0.19 0.02 -0.13 0.14 -0.01
VB 0.09 -0.26 -0.05 -0.06 -0.02 0.00 -0.13 0.10 -0.08 -0.16 0.03 -0.28
PR 0.22 -0.05 0.12 0.10 0.15 -0.07 -0.03 0.22 -0.01 -0.16 0.22 -0.10
PC 0.03 -0.06 0.10 0.14 -0.03 0.10 -0.09 0.25 -0.13 -0.09 0.10 -0.10
CV 0.07 -0.02 0.15 -0.03 0.01 N/A -0.03 0.15 0.10 N/A 0.01 0.05
CR -0.10 0.00 -0.07 0.10 N/A -0.05 -0.09 -0.03 -0.10 -0.13 0.03 -0.18
EK 0.03 -0.13 -0.03 -0.09 -0.03 -0.09 0.00 0.03 0.03 -0.09 0.05 0.03
BN 0.19 0.10 0.22 0.25 0.15 -0.03 0.03 0.31 0.10 N/A 0.25 0.07
IK 0.02 -0.08 -0.01 -0.13 0.10 -0.10 0.03 0.10 -0.05 -0.16 0.10 -0.15
KS -0.13 -0.16 -0.16 -0.09 N/A -0.13 -0.09 N/A -0.16 -0.20 -0.09 -0.13
IKS 0.14 0.03 0.22 0.10 0.01 0.03 0.05 0.25 0.10 -0.09 0.17 0.06
M01 -0.01 -0.28 -0.10 -0.10 0.05 -0.18 0.03 0.07 -0.15 -0.13 0.06 -0.27
(d) γ = 4%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.10 0.08 0.21 0.02 0.06 -0.11 0.03 0.18 0.01 -0.13 0.13 -0.02
VB 0.08 -0.24 -0.06 -0.06 -0.02 -0.01 -0.13 0.10 -0.09 -0.03 0.02 -0.30
PR 0.21 -0.06 0.10 0.10 0.14 -0.08 -0.03 0.21 -0.02 -0.16 0.21 -0.11
PC 0.02 -0.06 0.10 0.12 -0.03 0.10 -0.09 0.24 -0.14 -0.09 0.10 -0.11
CV 0.06 -0.02 0.14 -0.03 0.00 N/A -0.03 0.14 0.10 N/A 0.00 0.04
CR -0.11 -0.01 -0.08 0.10 N/A -0.05 -0.09 -0.03 -0.11 -0.13 0.03 -0.19
EK 0.03 -0.13 -0.03 -0.09 -0.03 -0.09 -0.01 0.03 0.03 -0.09 0.04 0.03
BN 0.18 0.10 0.21 0.24 0.14 -0.03 0.03 0.30 0.10 N/A 0.24 0.06
IK 0.01 -0.09 -0.02 -0.14 0.10 -0.11 0.03 0.10 -0.06 -0.16 0.10 -0.16
KS -0.13 -0.03 -0.16 -0.09 N/A -0.13 -0.09 N/A -0.16 -0.11 -0.09 -0.13
IKS 0.13 0.02 0.21 0.10 0.00 0.03 0.04 0.24 0.10 -0.09 0.15 0.05
M01 -0.02 -0.30 -0.11 -0.11 0.04 -0.19 0.03 0.06 -0.16 -0.13 0.05 -0.29
(e) γ = 5%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.11 0.12 0.19 0.08 0.12 -0.12 0.02 0.22 0.00 -0.14 0.16 0.01
VB 0.12 -0.24 -0.08 -0.02 0.04 -0.02 -0.14 0.17 -0.11 -0.04 0.06 -0.27
PR 0.19 -0.08 0.07 0.08 0.13 -0.09 -0.04 0.19 -0.04 -0.17 0.19 -0.13
PC 0.08 -0.02 0.08 0.14 0.09 0.09 -0.10 0.29 -0.15 -0.10 0.17 -0.07
CV 0.12 0.04 0.13 0.09 0.05 N/A -0.04 0.20 0.09 N/A 0.05 0.14
CR -0.12 -0.02 -0.09 0.09 N/A -0.07 -0.10 -0.04 -0.12 -0.14 0.02 -0.20
EK 0.02 -0.14 -0.04 -0.10 -0.04 -0.10 -0.02 0.02 0.02 -0.10 0.03 0.02
BN 0.22 0.17 0.19 0.29 0.20 -0.04 0.02 0.32 0.09 N/A 0.29 0.12
IK 0.00 -0.11 -0.04 -0.15 0.09 -0.12 0.02 0.09 -0.08 -0.17 0.08 -0.17
KS -0.14 -0.04 -0.17 -0.10 N/A -0.14 -0.10 N/A -0.17 -0.12 -0.10 -0.14
IKS 0.16 0.06 0.19 0.17 0.05 0.02 0.03 0.29 0.08 -0.10 0.22 0.13
M01 0.01 -0.27 -0.13 -0.07 0.14 -0.20 0.02 0.12 -0.17 -0.14 0.13 -0.28
(f) γ = 6%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.08 0.08 0.21 0.05 0.09 -0.04 0.01 0.24 -0.03 -0.15 0.10 0.02
VB 0.08 -0.20 -0.13 0.01 0.01 -0.04 -0.15 0.14 -0.15 -0.06 0.02 -0.18
PR 0.21 -0.13 0.05 0.05 0.10 -0.03 -0.06 0.23 -0.08 -0.18 0.16 -0.09
PC 0.05 0.01 0.05 0.13 0.07 0.07 -0.11 0.24 -0.17 -0.11 0.14 -0.05
CV 0.09 0.01 0.10 0.07 0.01 N/A -0.06 0.17 0.07 N/A 0.01 0.12
CR -0.04 -0.04 -0.03 0.07 N/A -0.03 -0.11 0.07 -0.14 -0.15 0.01 -0.10
EK 0.01 -0.15 -0.06 -0.11 -0.06 -0.11 -0.04 0.01 0.01 -0.11 0.01 0.01
BN 0.24 0.14 0.23 0.24 0.17 0.07 0.01 0.32 0.07 N/A 0.24 0.17
IK -0.03 -0.15 -0.08 -0.17 0.07 -0.14 0.01 0.07 -0.13 -0.18 0.06 -0.21
KS -0.15 -0.06 -0.18 -0.11 N/A -0.15 -0.11 N/A -0.18 -0.14 -0.11 -0.15
IKS 0.10 0.02 0.16 0.14 0.01 0.01 0.01 0.24 0.06 -0.11 0.15 0.10
M01 0.02 -0.18 -0.09 -0.05 0.12 -0.10 0.01 0.17 -0.21 -0.15 0.10 -0.18
(g) γ = 7%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.06 0.07 0.20 0.04 0.08 -0.05 0.00 0.23 -0.04 -0.15 0.09 0.01
VB 0.07 -0.18 -0.14 0.00 0.00 -0.05 -0.15 0.13 -0.16 -0.06 0.01 -0.15
PR 0.20 -0.14 0.03 0.05 0.10 -0.04 -0.06 0.22 -0.09 -0.19 0.15 -0.10
PC 0.04 0.00 0.05 0.11 0.07 0.07 -0.11 0.23 -0.18 -0.11 0.13 -0.06
CV 0.08 0.00 0.10 0.07 0.00 N/A -0.06 0.16 0.07 N/A 0.00 0.11
CR -0.05 -0.05 -0.04 0.07 N/A -0.03 -0.11 0.07 -0.15 -0.15 0.00 -0.11
EK 0.00 -0.15 -0.06 -0.11 -0.06 -0.11 -0.05 0.00 0.00 -0.11 0.00 0.00
BN 0.23 0.13 0.22 0.23 0.16 0.07 0.00 0.31 0.07 N/A 0.23 0.16
IK -0.04 -0.16 -0.09 -0.18 0.07 -0.15 0.00 0.07 -0.14 -0.19 0.05 -0.21
KS -0.15 -0.06 -0.19 -0.11 N/A -0.15 -0.11 N/A -0.19 -0.15 -0.11 -0.15
IKS 0.09 0.01 0.15 0.13 0.00 0.00 0.00 0.23 0.05 -0.11 0.13 0.10
M01 0.01 -0.15 -0.10 -0.06 0.11 -0.11 0.00 0.16 -0.21 -0.15 0.10 -0.15
(h) γ = 8%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG 0.05 0.04 0.16 0.02 0.06 -0.06 -0.01 0.20 -0.06 -0.16 0.05 0.03
VB 0.04 -0.25 -0.18 -0.02 -0.02 -0.06 -0.16 0.11 -0.19 -0.08 -0.03 -0.19
PR 0.16 -0.18 -0.02 0.03 0.08 -0.06 -0.08 0.20 -0.12 -0.20 0.12 -0.13
PC 0.02 -0.02 0.03 0.11 0.05 0.05 -0.12 0.20 -0.20 -0.12 0.11 -0.09
CV 0.06 -0.02 0.08 0.05 -0.02 N/A -0.08 0.14 0.05 N/A -0.02 0.10
CR -0.06 -0.06 -0.06 0.05 N/A -0.06 -0.12 0.05 -0.16 -0.16 -0.01 -0.12
EK -0.01 -0.16 -0.08 -0.12 -0.08 -0.12 -0.06 -0.01 -0.01 -0.12 -0.01 -0.01
BN 0.20 0.11 0.20 0.20 0.14 0.05 -0.01 0.27 0.05 N/A 0.20 0.14
IK -0.06 -0.19 -0.12 -0.20 0.05 -0.16 -0.01 0.05 -0.18 -0.20 0.04 -0.24
KS -0.16 -0.08 -0.20 -0.12 N/A -0.16 -0.12 N/A -0.20 -0.16 -0.12 -0.16
IKS 0.05 -0.03 0.12 0.11 -0.02 -0.01 -0.01 0.20 0.04 -0.12 0.08 0.08
M01 0.03 -0.19 -0.13 -0.09 0.10 -0.12 -0.01 0.14 -0.24 -0.16 0.08 -0.13
(i) γ = 9%
AG VB PR PC CV CR EK BN IK KS IKS M01
AG -0.01 0.01 0.13 -0.01 0.04 -0.08 -0.02 0.18 -0.08 -0.18 0.01 -0.01
VB 0.01 -0.19 -0.21 -0.05 -0.04 -0.08 -0.18 0.09 -0.15 -0.09 -0.06 -0.14
PR 0.13 -0.21 -0.07 0.00 0.06 -0.08 -0.09 0.17 -0.15 -0.22 0.10 -0.16
PC -0.01 -0.05 0.00 0.07 0.04 0.04 -0.12 0.18 -0.22 -0.12 0.09 -0.12
CV 0.04 -0.04 0.06 0.04 -0.05 N/A -0.09 0.12 0.04 N/A -0.05 0.08
CR -0.08 -0.08 -0.08 0.04 N/A -0.08 -0.12 0.04 -0.18 -0.18 -0.02 -0.14
EK -0.02 -0.18 -0.09 -0.12 -0.09 -0.12 -0.08 -0.02 -0.02 -0.12 -0.03 -0.02
BN 0.18 0.09 0.17 0.18 0.12 0.04 -0.02 0.23 0.04 N/A 0.18 0.12
IK -0.08 -0.15 -0.15 -0.22 0.04 -0.18 -0.02 0.04 -0.15 -0.22 0.02 -0.19
KS -0.18 -0.09 -0.22 -0.12 N/A -0.18 -0.12 N/A -0.22 -0.18 -0.12 -0.18
IKS 0.01 -0.06 0.10 0.09 -0.05 -0.02 -0.03 0.18 0.02 -0.12 0.03 0.06
M01 -0.01 -0.14 -0.16 -0.12 0.08 -0.14 -0.02 0.12 -0.19 -0.18 0.06 -0.10
(j) γ = 10%
Figure 4.2: Correlations between existence of constraints and goodness. N/A indicates no test cases with the specified pair of constraint types
solution is found, it is impossible to improve the objective value without introducing
a binary variable and simultaneously forcing another to 0. Again, this makes the
final solution very sensitive to the quality of the initial feasible solution provided to
the algorithm.
A more in-depth look at the specific problems shows
that the algorithm was proficient at solving:
• set covering problems, for example those arising from irreducible infeasible subsystem covering problems (Amaldi et al., 2003, Pfetsch, 2008)
• timetabling and scheduling problems, which often involve the well-behaved
binary packing constraints, set packing constraints and/or the invariant and integer
knapsack constraints (Barutt and Hull, 1990, Bley et al., 2010, Borndorfer and
Liebchen, 2008, Schilly, 2007)
• network planning problems, which utilize set packing, set covering and invariant
knapsack constraints (Goossens et al., 2004, Fischetti and Lodi, 2003)
Similarly, we found that among the problems on which we performed more poorly were:
• vehicle routing and network flow problems, which can involve many set partitioning, knapsack and cardinality constraints (Ortega and Wolsey, 2003)
• lot sizing problems, which have many variable bound, knapsack and mixed binary
constraints (Fischetti and Lodi, 2003, Gade and Kucukyavuz, 2010, Pochet and
Vyve, 2004)
• resource levelling problems with availability constraints, which had a combination
of variable bound and set partitioning constraints (Coughlan et al., 2010)
4.4 Conclusions
Having developed an algorithm with the flexibility to solve general MILPs, it
was prudent to test the algorithm on a broad set of benchmark problems. Utilizing the MIPLIB
library, which was developed specifically for this purpose, we showed that our algorithm is
quite capable of achieving good solutions to general MILPs. Achieving the exact optimum
in 33% of test cases and coming within 10% of the optimal solution in over 50% of the test
cases is a promising start. Since we used the exact same algorithm on
all the test cases, there was no way to intelligently tailor the BGPs and the WDP to
achieve more consistent results. We did, however, show that the GCA was more adept at
solving problems with certain constraint types (set partitioning constraints, set packing
constraints, binary packing constraints, and integer knapsack constraints) than it was
at solving others (equality knapsack, binary knapsack, and mixed binary constraints).
While it may be a little unsettling that the algorithm was inconsistent, with
no upper bound on how far our objective can be from the optimal solution, we use the
following chapter to focus on a specific problem and show:
• How simple it is to tailor the GCA algorithm to a specific problem structure
• How taking advantage of problem structure can yield consistently good objective
values
• How the algorithm can compete with some recently published works
Chapter 5
Case Study: Radiation Therapy
In chapter 4 we demonstrated the generalized combinatorial auction on a generic group
of MILPs. With no ability to adjust for specific problem structure, we observed very
mixed results: we performed well on some problems and poorly on others, as
one would naturally expect of non-exact optimization methodologies. In this chapter,
however, we apply the algorithm developed in chapter 3 to a radiation therapy problem.
In this case study, we take advantage of specific problem structure to greatly reduce computational time and compete well against one of the state-of-the-art heuristic algorithms
in the field.
5.1 Introduction to Intensity Modulated Radiation
Therapy
According to the American Cancer Society there will be an estimated 1.7 million new
cases of cancer and 600 thousand cancer deaths this year throughout the United States
(American Cancer Society, 2013). Approximately 60% of these patients will undergo
radiation therapy at some point during the course of their treatment (Hamacher and
Kufer, 2002). External beam radiation therapy aims to deliver sufficient radiation to
denature the DNA of cancerous cells while mitigating the damage to healthy tissue.
Intensity-modulated radiation therapy, a type of external beam therapy, modulates the
intensity of small subsections of each beam in order to produce a non-uniform intensity
profile. Determining the optimal angles of the beams and the intensity profile (also known
as the fluence map) delivered from each beam is an optimization problem.
Much research has been done in both the medical physics and optimization com-
munities on beam angle optimization alone (Lee et al., 2011, Lim et al., 2007a, 2009,
Misic et al., 2010), fluence map optimization alone (Ahmed et al., 2010, Aleman et al.,
2010, Romeijn and Dempsey, 2008, Salari and Romeijn, 2011, Shepard et al., 1999), and
combined beam angle and fluence map optimization (Aleman et al., 2008, Aleman and
Sharpe, 2011, Aleman et al., 2013, Lee et al., 2003, Lim et al., 2007b, Zhang et al., 2008).
Beam angle optimization is generally modelled as a cardinality constrained mixed-integer
programming problem and thus is less tractable for large problem instances. As a result,
much effort has been directed towards developing heuristics that generate good beam
angle choices (Aleman et al., 2008, Lim et al., 2007b, 2009, Misic et al., 2010). Beam
angle optimization is often considered together with fluence map optimization (e.g., as a
two-stage problem) and many specialized techniques have been developed to solve this
problem (Aleman et al., 2008, Lim et al., 2007a, 2009, 2007b).
Integer programming methods may be sufficient for small problems, but computational
time grows exponentially with the number of candidate beams. Furthermore, for coplanar beams, there may be many approximately optimal solutions and thus well-designed
heuristics may be able to find a good solution without much difficulty. However, when
considering beams that vary in both the polar and azimuthal angles, as in
4π treatments (Dong et al., 2013), or as other continuous-path or arc treatments become
more prevalent, determining the optimal set of beam angles or paths will greatly increase
the computational challenge of designing optimal treatments.
In this work, we develop a novel decomposition approach to solve the combined beam
angle and fluence map optimization problem, applying the ideas developed in chapter 3.
Our method scales well with problem size parameters including the number of potential
beam directions, the number of beams chosen and the number of voxels. Our specific
contributions are:
1. We apply our algorithm to simultaneously optimize beam angles and fluence maps
in IMRT treatment planning.
2. We demonstrate the first application of combinatorial auctions in a health care
optimization problem.
3. We identify how one can utilize the generalized CA framework to take advantage
of specific problem structure.
5.2 Problem Overview
Many formulations of IMRT exist. In this work we build on past research by
Romeijn and Dempsey (2008) and Shepard et al. (1999), who showed that in the realm
of linear programs, minimizing absolute deviation from target dosages is an effective
scheme that produces clinically acceptable results. However, we look at a more general
IMRT formulation where instead of prescribing one dose for a voxel, one may prescribe an
acceptable dosage range. Figure 5.1 shows how this method penalizes dosage deviating
from the prescription. For the three cell types, labelled at-risk, healthy, and tumor,
there is an upper and a lower limit to the prescribed region. Note that the cost of
deviating from the prescribed range is weighted differently, as indicated by the different
slopes. In general, the penalty is highest for under-dosing a tumor cell and low for
over-dosing it, and over-dosing a region at risk is penalized more heavily than
over-dosing normal, healthy tissue. The exact weights and prescription ranges
would usually be agreed upon by a medical professional, such as an oncologist.
Figure 5.1: Penalty Incurred vs. Dosage Delivered to Different Voxel Types (penalty vs. dose to voxel, with curves for healthy tissue, region at risk, and tumour cells)
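The piecewise-linear penalty of Figure 5.1 can be sketched in code. The weights below are those used for the tiny example later in this chapter (0.8-0.9 Gy for tumour voxels with α = 1, β = 0.1; at most 0.5 Gy for critical voxels with β = 0.6); the function itself is a hypothetical helper, not part of any formulation.

```python
def voxel_penalty(dose, lower, upper, alpha, beta):
    """Piecewise-linear penalty for one voxel: alpha weights under-dosage
    below `lower`, beta weights over-dosage above `upper`, and doses inside
    [lower, upper] incur no penalty."""
    return alpha * max(0.0, lower - dose) + beta * max(0.0, dose - upper)

# Tumour voxel: wants 0.8-0.9 Gy, under-dosing weighted heavily.
tumour = dict(lower=0.8, upper=0.9, alpha=1.0, beta=0.1)
# Critical-organ voxel: tolerates up to 0.5 Gy, no lower bound.
risk = dict(lower=0.0, upper=0.5, alpha=0.0, beta=0.6)

print(voxel_penalty(0.85, **tumour))  # inside the range -> 0.0
print(voxel_penalty(0.6, **tumour))   # under-dose: 1.0 * 0.2 = 0.2 (approx.)
print(voxel_penalty(0.7, **risk))     # over-dose: 0.6 * 0.2 = 0.12 (approx.)
```

Summing `voxel_penalty` over all voxels is exactly the objective used throughout this chapter.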
5.2.1 Definitions & Notation
Table 5.1 is a summary of all the definitions and notations utilized for the remainder of
this chapter. In a general sense, we are attempting to determine the optimal set of beams
to turn on while simultaneously optimizing the fluence map of each beam. To do this, our
combinatorial auction utilizes as many bidders as there are beams and generates a bid
from each beam. Each bid is effectively a fluence map indicating which beamlets from
that beam are turned on and which are turned off; an associated variable determines
the intensity at which the beamlets that are on should radiate. The set of all the
generated bids is I and we use i to index them. Since each bid originates from a single
beam, we use Bi to identify the set of beamlets in that bid (and therefore beam). Bids
generated from the same beam have the same set of beamlets (i.e., assume bid 1 and
bid 5 are generated from beam 2 then B1 = B5). Furthermore since each beamlet is
part of a single beam, the intersection of any bids not generated from the same beam is
empty. At each iteration we also generate a beam; R is the set of beams that have been
Table 5.1: Symbols and Notation for Formulations

Definitions
  beamlet  Small, pencil-width beam across which intensity is constant
  voxel    Volumetric unit in the body
  bid      A fluence map from one beam

Sets & Indices
  I        Set of all bids; there may be more than one bid per beam
  i        Index for the bids, the ith bid
  V        Set of all voxels in the problem region
  j        Index for the voxels, the jth voxel
  Bi       Set of all beamlets belonging to the beam of bid i
  k        Index for the beamlets, the kth beamlet
  R        Set of generated candidate beams
  r        Index for the generated candidate beams, the rth beam
  Ir       Set of bids originating from beam r

Parameters
  Djk      Dosage delivered to voxel j per unit intensity of beamlet k
  N        Number of beams to be selected in the optimal solution
  W        Maximum intensity delivered by any beamlet
  lj, uj   Lower and upper prescribed dosages for the jth voxel
  αj, βj   Penalties for deviation below and above the dosage range of the jth voxel

Bid Generation Problem
  wk       Intensity to deliver from the kth beamlet

Winner Determination Problem
  yr       Boolean indicating that candidate beam r is in the optimal set of N beams
  xi       The proportion of bid i to use
  sj, tj   Deviations below and above the prescribed dosage range

Pricing Problem
  πj, ρj   Dual variables for the sj and tj constraints
  µr       Dual variable for the boolean yr constraints
  η        Dual variable for the cardinality constraint
  φi       Dual variable for the "xi is non-zero only if yr is 1" constraints
generated, which grows at each iteration until all the beams in the problem have been
generated.
5.2.2 Equivalent MIP
To best illustrate the over-arching goal of this research we present a more intuitive for-
mulation of the mixed integer problem that we are essentially trying to solve. As stated
before, this problem is intractable for large problem instances but still provides an excel-
lent comparison tool for both the solution quality and the computational benefits to our
mechanism. Since we can consider this MILP to be generating one bid, or fluence map,
for each beam, we can index the bids by r as opposed to i, therefore Br is equivalent to
Bi in this formulation. To keep the notation consistent, we assume that all the beams
have been generated and, therefore, |R| is the total number of beams in the problem.
We will use BR to represent the full set of all beamlets belonging to any of the beams.
The final problem will have a binary variable yr for each beam to indicate if the beam
is on or off; a variable wk for each beamlet to indicate the intensity of that beamlet; the
variables sj and tj indexed on voxels identify the under and over dosages of the jth voxel
respectively.
\begin{equation}
\begin{aligned}
\min_{s,t,w,y} \quad & \sum_{j \in V} (\alpha_j s_j + \beta_j t_j) \\
\text{s.t.} \quad & s_j + \sum_{k \in B_R} D_{jk} w_k \ge l_j & \forall j \in V \\
& t_j - \sum_{k \in B_R} D_{jk} w_k \ge -u_j & \forall j \in V \\
& W y_r - w_k \ge 0 & \forall r \in R,\ k \in B_r \\
& \sum_{r \in R} y_r \le N \\
& w_k, s_j, t_j \ge 0 \\
& y_r \in \{0,1\} & \forall r \in R
\end{aligned}
\tag{5.1}
\end{equation}
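As a concrete reading of (5.1): for fixed w and y, the optimal slacks are simply sj = max(0, lj − dosej) and tj = max(0, dosej − uj). The sketch below checks feasibility of a candidate (w, y) and evaluates the objective; it is a checker on hypothetical toy data, not a solver.

```python
def milp_objective(w, y, D, beam_of, l, u, alpha, beta, W, N):
    """Check feasibility of (w, y) for formulation (5.1) and return the
    objective sum_j (alpha_j s_j + beta_j t_j), with the slacks set to their
    optimal values s_j = max(0, l_j - dose_j), t_j = max(0, dose_j - u_j)."""
    assert sum(y) <= N, "cardinality constraint violated"
    for k, wk in enumerate(w):
        # W*y_r - w_k >= 0: a beamlet may only radiate if its beam is on
        assert 0.0 <= wk <= W * y[beam_of[k]], "linking constraint violated"
    obj = 0.0
    for j in range(len(l)):
        dose = sum(D[j][k] * w[k] for k in range(len(w)))
        obj += alpha[j] * max(0.0, l[j] - dose) + beta[j] * max(0.0, dose - u[j])
    return obj

# Toy instance: 2 voxels, 2 beams with one beamlet each (hypothetical data).
D = [[1.0, 0.0],   # voxel 0 is hit only by beamlet 0
     [0.5, 1.0]]   # voxel 1 is hit by both beamlets
beam_of = [0, 1]   # beamlet k belongs to beam beam_of[k]
l, u = [0.8, 0.0], [0.9, 0.5]
alpha, beta = [1.0, 0.0], [0.1, 0.6]

# Turn on beam 0 only (N = 1) and deliver 0.8 via beamlet 0.
print(milp_objective([0.8, 0.0], [1, 0], D, beam_of, l, u, alpha, beta, W=2.0, N=1))  # -> 0.0
```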
5.3 Formulation
5.3.1 Winner Determination Problem
We begin with the WDP because it is from this that the remainder of the problems
are derived. In this formulation of the WDP the xi's are indexed on bids and we are
allowed more than one bid from each beam. Therefore the xi variables allow for some
conic combination of bids to create a superposition of fluence maps for each beam. The
yr variables are indexed on the beams and determine if a beam is part of the optimal set
or not. As such, in order to get the exact optimal solution we would probably require as
many bids as there are beamlets contained by the chosen beams (that way each beamlet
can have its own unique intensity wk = Wxi). Since R only corresponds to the set of
beams generated in previous iterations of the algorithm, if R was the complete set of
possible beams and there was a one-to-one correspondence of bids to beamlets, the WDP
would be identical to the MIP. The WDP is formulated as follows:
\begin{equation}
\begin{aligned}
\min_{s,t,x,y} \quad & \sum_{j \in V} (\alpha_j s_j + \beta_j t_j) \\
\text{s.t.} \quad & s_j + \sum_{i \in I} x_i \sum_{k \in B_i} D_{jk} w_k \ge l_j & \forall j \in V \\
& t_j - \sum_{i \in I} x_i \sum_{k \in B_i} D_{jk} w_k \ge -u_j & \forall j \in V \\
& y_r - x_i \ge 0 & \forall r \in R,\ i \in I_r \\
& \sum_{r \in R} y_r \le N \\
& x_i, s_j, t_j \ge 0 \\
& y_r \in \{0,1\} & \forall r \in R
\end{aligned}
\tag{5.2}
\end{equation}
The major differences and similarities between the MILP in (5.1) and the WDP in (5.2)
are:
• There is an integer variable for each beam in (5.1) (|R_MILP| is the total number of
beams) but an integer variable only for the beams generated so far in (5.2), with
|R_WDP| ≤ |R_MILP|.
• There is a continuous variable for each beamlet in (5.1) whereas beamlets are
grouped together in (5.2) to form a bid and there is a continuous variable for
each bid.
• The objective functions of the MILP and the WDP are identical.
5.3.2 Pricing Problem / Dual
If one were to replace all the binary constraints with yr = y*_r constraints, where y*_r is
the optimal yr value provided by the WDP, then the pricing problem is formulated as the LP
dual of the WDP. This is not a linearisation per se, but by doing this we have effectively
eliminated all the integer variables and given ourselves meaningful dual prices that can
be relayed to the bidders. The pricing problem is given as:
\begin{equation}
\begin{aligned}
\max_{\pi,\rho,\phi,\mu,\eta} \quad & \sum_{j \in V} (l_j \pi_j - u_j \rho_j) - N\eta + \sum_{r \in R} y^*_r \mu_r \\
\text{s.t.} \quad & \sum_{j \in V} (\pi_j - \rho_j) \sum_{k \in B_i} D_{jk} w_k - \phi_i \le 0 & \forall i \in I \\
& \sum_{i \in I_r} \phi_i - \eta + \mu_r \le 0 & \forall r \in R \\
& 0 \le \pi_j \le \alpha_j, \quad 0 \le \rho_j \le \beta_j & \forall j \in V \\
& \phi_i \ge 0 & \forall i \in I \\
& \eta \ge 0
\end{aligned}
\tag{5.3}
\end{equation}
Now consider the LP that we obtain by fixing the yr variables in the WDP. Since
Σ_{r∈R} y*_r ≤ N and y*_r ∈ {0, 1}, we can drop the Σ_{r∈R} yr ≤ N constraint because it is
automatically satisfied, and the yr − xi ≥ 0 constraints can be rewritten as
xi = 0 for all i ∈ {Ir | y*_r = 0} and xi ≤ 1 for all i ∈ {Ir | y*_r = 1}. Redefining I* to be the set of
all bids whose corresponding beam is in the optimal set (y*_r = 1) gives the
following LP, which we refer to as the Realized LP.
\begin{equation}
\begin{aligned}
\min_{s,t,x} \quad & \sum_{j \in V} (\alpha_j s_j + \beta_j t_j) \\
\text{s.t.} \quad & s_j + \sum_{i \in I^*} x_i \sum_{k \in B_i} D_{jk} w_k \ge l_j & \forall j \in V \\
& t_j - \sum_{i \in I^*} x_i \sum_{k \in B_i} D_{jk} w_k \ge -u_j & \forall j \in V \\
& x_i \le 1 & \forall i \in I^* \\
& x_i, s_j, t_j \ge 0
\end{aligned}
\tag{5.4}
\end{equation}
This formulation is the LP solved at the optimal leaf node in the branch and bound tree.
We prove later that the bid generation problem obtains the minimum reduced cost of
this optimization problem. All the bids generated from beams with y*_r = 1 correspond
to adding columns via column generation, where (5.4) is the master problem; we use
this fact to prove convergence in subsection 5.3.5.
5.3.3 Bid Generation Problem
Now each bidder’s goal is to obtain as much reward as possible. Provided some reward
metric, each bidder will submit a bid identifying its optimal fluence map. Given the
simplicity of the problem, what the bidder is actually doing is adding up the reward
(πj − ρj) from each voxel along the path of each beamlet. If the accumulated reward is
positive the bidder shoots at 100% of maximum capacity; if not, the bidder does not shoot
along that beamlet. This problem could be extended to ensure that each bid is a feasible
collimator configuration by solving a sequence of maximum sub-array problems along
each row of beamlets, but we leave this as a possible extension.
\begin{equation}
\begin{aligned}
\max_{w} \quad & \sum_{j \in V} (\pi_j - \rho_j) \sum_{k \in B_i} D_{jk} w_k \\
\text{s.t.} \quad & 0 \le w_k \le W & \forall k \in B_i
\end{aligned}
\tag{5.5}
\end{equation}
Closer inspection of the BGP shows that the bidders are actually minimizing the
reduced cost of any new bid xi in the Realized LP (5.4). Recall that the reduced cost is
\[
\bar{c} = c - A^T \lambda,
\]
where λ is the dual vector. Therefore
\[
\bar{c}_{x_i} = 0 - \left( \sum_{j \in V} (\pi_j - \rho_j) \sum_{k \in B_i} D_{jk} w_k - \phi_i \right),
\]
so minimizing \(\bar{c}_{x_i}\) over w is equivalent to maximizing
\(\sum_{j \in V} (\pi_j - \rho_j) \sum_{k \in B_i} D_{jk} w_k - \phi_i\), and since \(\phi_i\) does not depend on w,
\[
w^*_k = \arg\max_{w_k} \sum_{j \in V} (\pi_j - \rho_j) \sum_{k \in B_i} D_{jk} w_k.
\]
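Since the objective of (5.5) is linear and separates by beamlet, the optimal bid is bang-bang: fire beamlet k at full intensity W whenever its accumulated voxel reward is positive, and leave it off otherwise. A minimal sketch on hypothetical duals:

```python
def generate_bid(beamlets, D, pi, rho, W):
    """Solve the BGP for one beam: for each beamlet k, fire at full
    intensity W if its accumulated voxel reward sum_j (pi_j - rho_j) D_jk
    is positive, otherwise leave the beamlet off."""
    bid = {}
    for k in beamlets:
        reward = sum((pi[j] - rho[j]) * D[j][k] for j in range(len(pi)))
        bid[k] = W if reward > 0 else 0.0
    return bid

# Hypothetical duals for 3 voxels and a beam owning beamlets {0, 1}.
pi, rho = [1.0, 0.0, 0.2], [0.0, 0.6, 0.0]
D = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
print(generate_bid([0, 1], D, pi, rho, W=2.0))  # -> {0: 2.0, 1: 0.0}
```

Beamlet 0 accumulates a positive reward (2.2) and fires at W; beamlet 1's reward is negative (-1.0), so it stays off.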
This tells us how to add a bidding fluence map from any previously generated beam, but in
some instances we may wish to increase the scope of the beams we choose to look at
(increase the set R). Utilizing our dual variables as directed in subsection 3.1.4, we
generate candidate beams using the following optimization problem, which is equivalent to
minimizing the reduced cost if the yr's were continuous and not in the set of visible
variables:
\begin{equation}
\max_{r \notin R} \ \eta - \sum_{i \in I_r} \phi_i
\tag{5.6}
\end{equation}
One important note, however: although the WDP is capable of selecting the winning
beams, the first N beams added to the candidate beam pool can always have an associated
yr value of 1, since it is always possible to turn a beam on while delivering no intensity
from its beamlets (xi = 0 for all i ∈ Ir). So for those first N iterations, it is
computationally beneficial to solve the WDP as an LP, setting the yr variables to 1 and
allowing the WDP to optimize only the utilized intensity of each bid, xi.
5.3.4 Algorithm
Figure 5.2: Flow Chart of the Algorithm
As with most iterative combinatorial auctions, the algorithm iteratively solves the
BGP followed by the WDP and the PP. The main difference in this scenario, however,
is that a stop condition is added whereby we can converge on the choice of beams. As will
be explained later, once consecutive iterations select the same set of beams, each
subsequent iteration has an increased likelihood of obtaining the same set of winning
bidders. Therefore, once we have reached a desired number of consecutive iterations with
no change in winning beams, we stop the algorithm and solve the LP obtained by fixing
the binary yr variables.
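The loop of Figure 5.2 can be sketched as follows. The three solver callables stand in for the BGP, WDP, and PP formulations above, and the stubs at the bottom exist purely to illustrate the stop condition; none of this is a real solver.

```python
def run_auction(solve_bgp, solve_wdp, solve_pp, n_stable, max_iters=1000):
    """Iterate BGP -> WDP -> PP until the winning beam set is unchanged for
    n_stable consecutive iterations (or max_iters is reached).  The three
    callables are placeholders for the formulations in this chapter."""
    bids, duals = [], None
    prev_beams, stable = None, 0
    for _ in range(max_iters):
        bids.extend(solve_bgp(duals))       # each bidder submits a bid
        beams, incumbent = solve_wdp(bids)  # auctioneer picks winning beams
        duals = solve_pp(bids, beams)       # prices relayed back to bidders
        stable = stable + 1 if beams == prev_beams else 0
        prev_beams = beams
        if stable >= n_stable:
            break
    return beams, incumbent

# Stub solvers that settle on the beam set {1, 3} (for illustration only).
calls = [0]
def bgp(duals): calls[0] += 1; return [("bid", calls[0])]
def wdp(bids): return (frozenset({1, 3}) if len(bids) > 2 else frozenset({1}), 0.0)
def pp(bids, beams): return {}

beams, _ = run_auction(bgp, wdp, pp, n_stable=3)
print(sorted(beams))  # -> [1, 3]
```

After convergence one would, as described above, fix the binary yr's to the stable beam set and solve the resulting LP once more.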
5.3.5 Discussion
Column Generation Interpretation
Suppose that we are only generating bids from beams where y*_r = 1 (i.e. beams that are
in the optimal set). The BGP generates a bid that has the minimum reduced cost for the
Realized LP, which only considers those optimal beams. Since the premise of column
generation is to iteratively add columns that have minimum (negative) reduced cost,
this mechanism is simply generating columns from those beams. Therefore, if allowed to
generate a sufficient number of columns, this approach will converge to the Realized LP
optimum for the current y*_r, just as column generation has been shown to be proficient at
solving general LPs. In other words, the Realized LP is simply the master problem and
the BGP is the sub-problem for conventional column generation on a fixed choice of
beams. The difference in this algorithm is that we also utilize the reduced costs to
generate bids that are currently deemed infeasible in the master problem: not only do we
generate bids from those beams that have been selected (y*_r = 1), but we also generate
bids from beams that are not selected. Without changing the yr's those bids cannot
enter the basis; we generate them regardless, in the hope that they will provide
sufficient incentive to change the current choice of yr's. One question still remains:
will the proposed algorithm converge on a choice of beams, and if so, how long will it
take?
Convergence on Choice of Beams
In duality theory the dual multipliers represent the change in objective cost due to a
change in any non-basic variable in the vicinity of the current feasible solution, much
like a gradient. Consider the Realized LP (5.4): the dual variables πj and ρj identify
whether we should increase or decrease the radiation delivered to the jth voxel given
the current dosages delivered by each beam and, most importantly, given the current
choice of beams. It is because of this localized nature of the gradient interpretation of
dual variables that we argue there should be fast convergence on the choice of beams.
Visualize attempting to solve the WDP by solving an exponential number of LPs,
$\binom{|R|}{N} = \frac{|R|!}{N!\,(|R|-N)!}$ of them, and choosing the one with the
minimum objective value. At each iteration we obtain the dual variables, essentially
answering which voxels need more dosage and which voxels require less. Bids generated
from the current optimal choice of beams are guaranteed to decrease the objective value
of the Realized LP, which is one instance of the $\binom{|R|}{N}$ LPs, just as with
column generation. For a different beam to compete and enter the set of winning beams it
must submit a bid that would not only decrease the objective value of one of the LPs that
it is part of, but decrease it by more than the winning beams' new bids are decreasing
the objective value of the Realized LP (otherwise the original choice of beams would
still be optimal).
Unfortunately, unlike with the Realized LP, there is no guarantee that bids generated
from non-optimal beams will actually reduce the objective, since their reduced costs are
computed based on specifically not choosing those beams to be in the optimal set; as
such, the reduced costs are almost completely arbitrary for those beams. More
specifically, the dual variables identify a gradient local to the current solution, in
the direction of primal feasibility; therefore, the dual variables are more effective at
generating bids from winning beams than from losing beams. As the auctioneer begins to
consecutively pick the same set of winning beams, more and more bids have already
decreased the WDP objective under this choice of beams, and the chance that another beam
can provide a bid that sufficiently reduces the objective greatly decreases. Therefore,
if you select a set of beams and identify dual variables by fixing those beams, you are
more likely to obtain the same set of beams as the optimal set in the next iteration than
any other choice of beams. We take advantage of this result by defining the stop
condition as a specified number of consecutive identical choices of optimal beams.
5.3.6 Proof Of Finite Termination
Lemma 1. The WDP objective is equal to the objective of the Realized LP at each
iteration.

Proof. By definition, the Realized LP is the LP at the optimal leaf node of the WDP
branch and bound tree.
Lemma 2. Let WDP*_t be the WDP objective at iteration t. The sequence {WDP*_t} is
decreasing and bounded below by the MIP objective value.

Proof. By Lemma 1 it suffices to show that the Realized LP objective is decreasing. Since
the BGP minimizes reduced cost, any bid added with negative reduced cost gives the
Realized LP a lower or equal (in the case of degenerate basic feasible solutions)
objective value. Since the Realized LP is a feasible solution to the WDP, the WDP
objective is at most the Realized LP objective. Finally, there are finitely many
degenerate solutions, and the choice of y*_r changes only if the resultant Realized LP
has a lower objective than the current one.
Theorem 3. The algorithm terminates in finite time.
Proof. Since the sequence {WDP*_t} is decreasing and bounded below, it converges to some
value WDP* ≥ MILP*. Since there are finitely many degenerate solutions, the algorithm
stops in finite time.
5.3.7 Algorithmic Comparison with Romeijn et al. (2005) and
Dong et al. (2013)
The algorithm we’ve developed has some very interesting parallels with the work in
Romeijn et al. (2005) and Dong et al. (2013). Since we will spend some time comparing
the results of our algorithm against this work, which we will identify as the 4π algorithm,
it is important to identify the similarities and differences between our work and what has
already been established.
Romeijn et al. (2005) utilizes a column generation approach essentially identical to
the BGP algorithm. The goal is to identify an aperture, in our case a bid, with the
minimum reduced cost. The key difference is that their master problem does not have
any dependencies on the beams. Just as our BGP reduced to minimizing the reduced
cost of the Realized LP, they began with the Realized LP and derived the BGP.
Dong et al. (2013) extended the work in Romeijn et al. (2005), attempting
to also solve the beam selection problem. By minimizing the computation requirements
to select beams, they were able to generate fluence maps for many more beams than
what is generally feasible from integer programming formulations of the problem. To do
this the 4π algorithm worked as follows:
1. Solve the Romeijn et al. (2005) BGP for all the beams not in the current optimal
set, obtaining one bid per such beam.

2. Add the beam that contributed the lowest reduced cost to the set of candidate
beams.

3. Solve the Equivalent MIP as an LP by fixing the current set of beams; the solution
replaces the bids generated thus far.

4. If fewer than N beams have been selected, go back to step 1.
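The greedy loop above can be sketched as follows. The callables `price_beam` and `solve_fixed_lp` are hypothetical stand-ins for the Romeijn et al. (2005) BGP and the fixed-beam LP; the toy pricing rule at the bottom exists only to make the sketch runnable.

```python
def four_pi_greedy(all_beams, price_beam, solve_fixed_lp, N):
    """Greedy beam selection in the style of the 4-pi algorithm: repeatedly
    add the beam whose best bid has the lowest reduced cost, then re-solve
    the LP with the chosen beams fixed.  Beams are never removed."""
    chosen, duals = [], None
    while len(chosen) < N:
        candidates = [b for b in all_beams if b not in chosen]
        # steps 1-2: one bid per unchosen beam; keep the lowest reduced cost
        best = min(candidates, key=lambda b: price_beam(b, duals))
        chosen.append(best)
        # step 3: fix the chosen beams and re-optimize all fluence maps
        duals, objective = solve_fixed_lp(chosen)
    return chosen, objective

# Toy pricing: beam b's reduced cost is just its index (hypothetical), so
# the greedy picks the smallest-indexed beams first.
price = lambda b, duals: b
solve = lambda chosen: ({}, float(len(chosen)))
chosen, obj = four_pi_greedy([5, 2, 9, 1], price, solve, N=2)
print(chosen, obj)  # -> [1, 2] 2.0
```

Note the absence of any backtracking: once `chosen` contains a beam, nothing can remove it, which is the structural weakness discussed next.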
In this way, they effectively eliminated the need for a WDP altogether and instead solved
an LP at each iteration of the algorithm. In so doing, however, their greedy approach to
beam selection eliminates the possibility of back-tracking on the selection of candidate
beams: once a beam has been selected it is part of the optimal set, which makes it easy
to get stuck in a local minimum, just as with most greedy algorithms.
To counteract this effect, they solve the Equivalent MIP in (5.1), which provides a
more informative gradient from which to decide which bid from the non-candidate beams
should enter. The similarities and differences between the 4π work and our algorithm
are highlighted below.
• 4π solves the Equivalent MIP at each iteration to obtain more complete gradient
information, our algorithm only considers those bids that have been generated at
each iteration. This makes it more likely that 4π will choose a better beam to add
when there are many beams to choose from at the cost of solving a larger problem
at each iteration. As we increase the number of voxels or the number of beamlets
we’d expect our algorithm to scale better in terms of computation time.
• 4π cannot remove a beam once it has been added, our algorithm can remove beams
when the WDP deems it optimal to do so. As a result the 4π algorithm caps
off at N iterations, once those N beams have been selected the algorithm solves
the Equivalent MIP and exits. The CA algorithm that we present can have more
iterations to change the optimal choice of beams.
• Both algorithms utilize the same gradient information to generate a bid. The dif-
ference is that 4π generates one bid from all the beams that are not in the optimal
set, and this one bid identifies the beam to add to the set. Our CA algorithm,
however, generates a bid from each of the candidate beams, and the selection of
which beam enters the set of variables visible to the WDP is done using a separate
optimization problem (5.6).
5.3.8 Results
Tiny Problem
To first illustrate the algorithm's potential we utilize a very small problem instance. In
Table 5.2 we illustrate a 2D case where C identifies a critical organ and T is a tumor
cell. Along the perimeter we’ve labelled the beamlets and beams where each beam is
directed perpendicular to the side it is on. We require that the dosage to tumor cells
be between 0.8 - 0.9Gy and penalize with α = 1 and β = 0.1, the under- and over-
dosage penalties respectively. For the critical organs we require no more than 0.5Gy and
penalize with β = 0.6. All other voxels carry no penalty. The Djk matrix was computed
by reducing the dosage delivered by a factor of 0.5 at each successive voxel along the
beamlet's path, starting with a value of 2 (i.e. for the first beamlet of Beam 1,
D1,1 = 2 and D7,1 = 1, where voxel 1 is the top-left voxel and voxel 7 is the one just
below it). This behaviour mimics real-world
Djk matrices as each voxel will absorb some of the radiation passing through it, as the
beamlet passes through the body there will be some decay of the effective intensity to
each cell. Looking at the geometry of the situation we can immediately tell that most of
Table 5.2: Illustration of Tiny Problem (C = critical organ, T = tumor, . = empty voxel;
each beam fires perpendicular to its side of the 6x6 grid)

                 Beam 1 (beamlets 1-6)
  Beam 4   6 |  .  .  .  C  .  .  |  1   Beam 2
           5 |  .  C  C  C  C  .  |  2
           4 |  .  C  T  T  C  .  |  3
           3 |  .  C  T  T  C  .  |  4
           2 |  .  C  T  .  .  .  |  5
           1 |  .  .  .  .  .  .  |  6
                 Beam 3 (beamlets 6-1)
the dosage should be handled by beamlets 3 and 4 belonging to Beam 3. Solving the MIP
gives a final objective of 0.195, using intensities of 2.7 and 2.2 for beamlets 3 and 4
respectively and 0.5 from beamlet 4 of Beam 4. Our CA algorithm finds the same optimal
solution; the 4π algorithm, however, incorrectly chooses the first beam in its second
iteration and ends with an objective value of 0.325, a 67% increase in objective value.
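The halving rule used to build the Djk matrix can be sketched for one top-firing beamlet, assuming the row-major voxel numbering described above (voxel 1 top-left, voxel 7 just below it); `beamlet_column_doses` is a hypothetical helper name.

```python
def beamlet_column_doses(n_rows, n_cols, col, start=2.0, decay=0.5):
    """Dose per unit intensity delivered by a top-firing beamlet passing
    down column `col` of an n_rows x n_cols voxel grid (row-major,
    0-indexed): `start` on the first voxel, halved at each voxel after,
    mimicking absorption along the beamlet's path."""
    D_col = [0.0] * (n_rows * n_cols)
    dose = start
    for row in range(n_rows):
        D_col[row * n_cols + col] = dose
        dose *= decay
    return D_col

D1 = beamlet_column_doses(6, 6, col=0)
print(D1[0], D1[6])  # voxels 1 and 7 (0-indexed 0 and 6) -> 2.0 1.0
```

Stacking one such column per beamlet, for each of the four beam directions, yields the full Djk matrix of the tiny problem.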
Comparison with CPLEX Equivalent MIP and 4π
As with many computationally hard problems, problem size greatly affects solution qual-
ity and/or solve time. In this work, we primarily focus on varying the number of beams
(integer variables) with a brief look at varying the number of voxels. Problems were
generated using a prostate cancer case within the Computational Environment for Ra-
diotherapy Research (CERR).
As outlined in Table 5.3, in most cases the MIP is unable to solve the problem in
under 2 days of processing time, the maximum time we can request on the SciNet computing
cluster (Loken et al., 2010). In those cases, if CPLEX has identified upper and lower
bounds after 2 days of processing we present them in place of the objective value. While
it is evident that the CA algorithm does not necessarily converge to the optimal choice
of beams, the final solution is always within a few percent of both the best bound on the
solution and the best integer solution found by the MIP in 2 days (when available). For
large problem instances with over 45 beams, the MIP is unable to allocate memory for
the branch and cut tree, making the problem completely unsolvable on a 32GB system.
Table 5.3: Comparison of MIP vs CA vs 4π (33,900 voxels, approx. 90 beamlets per beam)

                           Objective                 Total Time
  Beams             MIP       CA     4π       MIP            CA     4π
  Tiny (4)          0.195     0.195  0.325    0s             0s     0s
  10                442       451    453      3 hours        95s    378s
  15                439       440    441      1.5 days       201s   458s
  20                431-438   453    441      > 2 days       298s   469s
  30                425-441   452    444      > 2 days       360s   394s
  45                425-443   443    443      > 2 days       279s   467s
  60                -         447    443      Out of Memory  417s   489s
  15 (100K voxels)  -         440    441      -              406s   817s
15 (100K Voxels) - 440 441 - 406s 817s
For this larger problem set, the 4π framework seems much more consistent on both the
time and solution quality criteria: its objective value was generally in the 441 to 444
range. Although the CA algorithm took somewhat less time, and significantly less for the
10 and 15 beam problems, its solution quality varied more, from 440 to 453. Attempting
the problem with an increase to 100,000 voxels shows that the 4π algorithm does not scale
well with the number of voxels. This is understandable given that the 4π algorithm must
solve a fairly large LP at each iteration, whereas the WDP is significantly smaller: it
must only decide on intensities for each generated bid as opposed to each beamlet.
Figure 5.3 shows the dose volume histograms for the CA and 4π results. While the
objective values are generally very close to each other, the dose distributions
are not. For the 10 beam problem, the CA performs somewhat better, delivering minimal
damage to the rectum and the bladder while delivering higher dosage to more of
the planning target volume and clinical target volume. This coincides with the objective
value results, 451 for CA and 453 for 4π. In the remainder of the results, however, the
4π algorithm performed better; again this supports the objective value results.
5.4 Conclusion
As was shown in the results section, the advantage of the combinatorial auction mech-
anism grows as the number of integer variables increases; the mechanism achieves this by
decomposing the problem into three sub-problems, only one of which is NP-hard.
Furthermore, by taking advantage of the specific problem structure we were able to
further decrease computational requirements and solve the complete WDP only in select
circumstances.
Generally, our results were very similar to that of the recent work published in Dong
Figure 5.3: Dose volume histograms for CA and 4π methodologies. Each panel plots
percentage of voxels vs. dosage delivered for the PTV, CTV, RF, LF, bladder, and rectum
under both methods: (a) 10 beams, (b) 20 beams, (c) 30 beams, (d) 60 beams.
et al. (2013): while we obtained solutions faster than the 4π algorithm, our objective
values were less consistent and slightly poorer on the larger problem sizes. When
deciding between many possible beams, the more accurate the gradient information, the
more likely one is to choose the correct beam. Although our mechanism does have the
ability to change the optimal selection of beams at each iteration, it did not do so well
enough to outperform the 4π algorithm with any consistency. All that being said, in the
worst case there was a compromise in solution quality of 5% between the CA mechanism and
the CPLEX MIP, and a 2% margin between the CA and the 4π algorithm. Clearly,
combinatorial auctions provide an invaluable method for reducing time complexity and for
making tractable what would usually be an intractable problem.
This CA mechanism can also support broader scope problems within IMRT. As we alluded
to in subsection 5.3.3, our CA mechanism has the potential to simultaneously optimize
all three sub-problems associated with IMRT (beam selection, fluence map optimization,
and collimator configuration). Since the proposed combinatorial auction framework de-
composes each beam into its own sub-problem, the BGP objectives could incorporate
an aperture optimization problem very similar to research done by Salari and Romeijn
(2011), or, in the simplest case, a maximum sub-array problem to identify the best leaf
configuration given the reduced costs, so that each bid would be a single, feasible leaf
configuration. By setting a hard constraint on the total number of acceptable bids, the
algorithm can enforce an upper limit on beam-on time. Further research could be con-
ducted on the exact mechanism for removing previously generated bids in the hope of
replacing them with better ones. In the crudest sense, bids can be continuously added
until the maximum number of bids is reached, and then those bids with zero-valued
intensities can be removed to allow for the addition of competing bids.
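The maximum sub-array step mentioned above, which would pick one contiguous open segment of beamlets per collimator row (a single leaf opening), is the classic Kadane problem. A sketch on a hypothetical row of beamlet rewards:

```python
def best_open_segment(rewards):
    """Kadane's maximum sub-array: return (start, end, value) of the
    contiguous run of beamlets with the largest total reward, i.e. the
    best single leaf opening along one row of the collimator."""
    best_val, best_span = float("-inf"), (0, 0)
    cur_val, cur_start = 0.0, 0
    for k, r in enumerate(rewards):
        if cur_val <= 0.0:
            # restart the segment at beamlet k
            cur_val, cur_start = r, k
        else:
            cur_val += r
        if cur_val > best_val:
            best_val, best_span = cur_val, (cur_start, k)
    return best_span[0], best_span[1], best_val

# Per-beamlet rewards sum_j (pi_j - rho_j) D_jk along one row (made up).
print(best_open_segment([-1.0, 2.0, 3.0, -4.0, 2.0]))  # -> (1, 2, 5.0)
```

Running this once per row would turn a bid into a deliverable aperture in linear time in the number of beamlets.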
Chapter 6
Conclusion
Mixed integer linear programs are a central focus of much of the research presently being
conducted in operations research, computer science, and mathematics. While many real-
world problems can be formulated as MILPs, solving them is often intractable.
We developed a novel combinatorial auction approach that had a few key benefits:

1. the algorithm retained an integer feasible solution at each iteration;

2. the algorithm was adept at solving generic MILP problems with no further infor-
mation about problem structure;

3. the algorithm demonstrated huge time savings when able to take advantage of
specific problem structure.
Utilizing the most basic formulation of our algorithm we were able to solve generic
MILPs. More detailed analysis showed that the CA algorithm was particularly effective
at solving problems with binary packing constraints ($\sum_i a_i y_i + a_k x_k \le a_k$).
With only a single continuous variable in the constraint, we increase the likelihood of
introducing non-zero integer variables to obtain integer feasibility, without sacrificing
too much of the gradient information that is lost when the constraints are not binding.
While the algorithm performed favourably with binary packing constraints, it performed
very poorly with generic mixed binary constraints
($\sum_i a_{y_i} y_i + \sum_i a_{x_i} x_i \le b$). Too many continuous variables
compromise the need for non-zero integer variables to be introduced to the auctioneer in
order to obtain integer feasibility. The result was that the objective values obtained
after convergence often varied greatly from the true optimal solution.
While the results for the generic MILPs were mixed, the results for our radiation
therapy case study were far more promising. Having specifically altered the algorithm to
take advantage of the problem structure, we were able to consistently compete with state
of the art heuristics developed in Dong et al. (2013). Being no more than 5% away from
either the true optimal or the heuristic solution found, our algorithm often outperformed
both algorithms on speed. We showed that our algorithm scales better with an increase
in voxel size and identified specific mechanisms to increase the scope of the problem so
that our algorithm can simultaneously optimize the beam selection, fluence map, and
aperture optimization problems in IMRT.
The majority of future work on this project could be dedicated to the generic
algorithm, with the hope of decreasing the optimality gap between our work and the
exact solution. An investigation into some of the methodologies introduced in subsection
3.2.5 could be very fruitful; there we discussed different trains of thought on how to
initialize the algorithm and how to prioritize which variables are allowed to enter the
set of visible variables. Other avenues of research include:
1. A generalized integer simplex formulation, as described in section 3.3, to reduce the
need to solve the WDP as a MILP
2. An investigation into the efficacy of simultaneous cutting planes to introduce more
dual variables with the aim of generating better gradient information
3. A more elaborate mechanism to provide lower bounds to the algorithm as it pro-
gresses
4. A more theoretical and in-depth analysis of the convergence and optimality criteria
of the algorithm
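Item 3 admits a simple starting point (our own sketch, not a mechanism from the thesis): any relaxation of a minimization MILP yields a valid lower bound, so tracking the best bound against the incumbent gives the algorithm a provable optimality gap and a principled stopping criterion.

```python
# Minimal bound bookkeeping for a minimization problem (hypothetical values):
# the incumbent is the best integer-feasible objective found so far, and each
# relaxation solved along the way contributes a valid lower bound.

def optimality_gap(incumbent, lower_bound):
    """Relative gap between incumbent and best proven lower bound."""
    if incumbent == 0:
        return 0.0 if lower_bound == 0 else float("inf")
    return (incumbent - lower_bound) / abs(incumbent)

lower_bounds = [80.0, 88.0, 92.0]  # bounds from successive relaxations
incumbent = 100.0                  # best integer-feasible objective so far
best_lb = max(lower_bounds)        # lower bounds only ever tighten upward
print(round(optimality_gap(incumbent, best_lb), 3))  # 0.08
```

The algorithm could terminate once this gap falls below a tolerance, replacing convergence-by-stagnation with a certificate of solution quality.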
Bibliography
Jawad Abrache, Teodor Gabriel Crainic, Michel Gendreau, and Monia Rekik. Combi-
natorial auctions. Annals of Operations Research, 153(1):131–164, May 2007. ISSN
0254-5330. doi: 10.1007/s10479-007-0179-z. URL http://link.springer.com/10.
1007/s10479-007-0179-z.
Shabbir Ahmed, Ozan Gozbasi, Martin Savelsbergh, Ian Crocker, Tim Fox, and Eduard
Schreibmann. An Automated Intensity-Modulated Radiation Therapy Planning Sys-
tem. INFORMS Journal on Computing, 22(4):568–583, April 2010. ISSN 1091-9856.
doi: 10.1287/ijoc.1090.0374. URL http://joc.journal.informs.org/cgi/doi/10.
1287/ijoc.1090.0374.
Dionne M. Aleman and Michael B. Sharpe. Optimization Methods for Total Marrow
Irradiation using Intensity Modulated Radiation Therapy. INFOR: Information Sys-
tems and Operational Research, 49(4):234–240, November 2011. ISSN 0315-5986.
doi: 10.3138/infor.49.4.234. URL http://utpjournals.metapress.com/openurl.
asp?genre=article&id=doi:10.3138/infor.49.4.234.
Dionne M. Aleman, Arvind Kumar, Ravindra K. Ahuja, H. Edwin Romeijn, and James F.
Dempsey. Neighborhood search approaches to beam orientation optimization in inten-
sity modulated radiation therapy treatment planning. Journal of Global Optimization,
42(4):587–607, March 2008. ISSN 0925-5001. doi: 10.1007/s10898-008-9286-x. URL
http://link.springer.com/10.1007/s10898-008-9286-x.
Dionne M Aleman, Daniel Glaser, H Edwin Romeijn, and James F Dempsey. Inte-
rior point algorithms: guaranteed optimality for fluence map optimization in IMRT.
Physics in medicine and biology, 55(18):5467–82, September 2010. ISSN 1361-6560.
doi: 10.1088/0031-9155/55/18/013. URL http://www.ncbi.nlm.nih.gov/pubmed/
20798458.
D.M. Aleman, V.V. Misic, and M.B. Sharpe. Computational enhancements to flu-
ence map optimization for total marrow irradiation using IMRT. Computers &
Operations Research, 40(9):2167–2177, September 2013. ISSN 03050548. doi:
10.1016/j.cor.2011.05.028. URL http://linkinghub.elsevier.com/retrieve/pii/
S0305054811001626.
Fred Glover. Tabu Search: A Tutorial. Interfaces, 20(4):74–94, 1990.
E. Amaldi, Marc E. Pfetsch, and Leslie E. Trotter, Jr. On the maximum feasible subsys-
tem problem, IISs, and IIS-hypergraphs. Mathematical Programming, 95(3):533–554,
2003.
American Cancer Society. Cancer Facts & Figures 2013. American Can-
cer Society, Atlanta, 2013. URL http://www.cancer.org/acs/groups/content/
@epidemiologysurveilance/documents/document/acspc-036845.pdf.
Egon Balas, Stefan Schmieta, and Christopher Wallace. Pivot and shift – a mixed integer
programming heuristic. Discrete Optimization, 1(1):3–12, June 2004. ISSN
15725286. doi: 10.1016/j.disopt.2004.03.001. URL http://linkinghub.elsevier.
com/retrieve/pii/S1572528604000027.
Cynthia Barnhart, Ellis L. Johnson, George L. Nemhauser, Martin W. P. Savelsbergh,
and Pamela H. Vance. Branch-and-Price: Column Generation for Solving Huge Integer
Programs. Operations Research, 46(3):316–329, June 1998. ISSN 0030-364X. doi:
10.1287/opre.46.3.316. URL http://pubsonline.informs.org/doi/abs/10.1287/
opre.46.3.316.
J. Barutt and T. Hull. Airline crew scheduling: Supercomputers and algorithms. SIAM
News, 23(6), 1990.
Livio Bertacco, Matteo Fischetti, and Andrea Lodi. A feasibility pump heuristic for
general mixed-integer problems. Discrete Optimization, 4(1):63–76, March 2007. ISSN
15725286. doi: 10.1016/j.disopt.2006.10.001. URL http://linkinghub.elsevier.
com/retrieve/pii/S1572528606000855.
A. Bley, N. Boland, C. Fricke, and G. Froyland. A strengthened formulation and cutting
planes for the open pit mine production scheduling problem. Computers and Operations
Research, 37:1641–1647, 2010.
Ralf Borndörfer and Christian Liebchen. When Periodic Timetables are Suboptimal. In
Jörg Kalcsics and Stefan Nickel, editors, Operations Research Proceedings 2007, pages
449–454. Springer, 2008.
D. Bredström, K. Jörnsten, M. Rönnqvist, and M. Bouchard. Searching for optimal
integer solutions to set partitioning problems using column generation. International
Transactions in Operational Research, 21(2):177–197, March 2014. ISSN 09696016.
doi: 10.1111/itor.12050. URL http://doi.wiley.com/10.1111/itor.12050.
Gianni Codato and Matteo Fischetti. Combinatorial Benders’ Cuts for Mixed-Integer
Linear Programming. Operations Research, 54(4):756–766, August 2006. ISSN 0030-
364X. doi: 10.1287/opre.1060.0286. URL http://pubsonline.informs.org/doi/
abs/10.1287/opre.1060.0286.
Gérard Cornuéjols. The Ongoing Story of Gomory Cuts. Documenta Mathematica,
pages 221–226, 2012. URL http://www.math.uiuc.edu/documenta/vol-ismp/37_
cornuejols-gerard.pdf.
Eamonn Coughlan, Marco Lübbecke, and Jens Schulz. A branch-and-price algorithm
for multi-mode resource leveling. In Paola Festa, editor, Experimental Algorithms,
volume 6049 of Lecture Notes in Computer Science, pages 226–238. Springer Berlin /
Heidelberg, 2010.
Peter Cramton, Yoav Shoham, and Richard Steinberg. Introduction to Combinatorial
Auctions. In Combinatorial Auctions. 2006.
Sven de Vries and Rakesh V. Vohra. Combinatorial Auctions: A Survey. IN-
FORMS Journal on Computing, 15(3):284–309, August 2003. ISSN 1091-9856. doi:
10.1287/ijoc.15.3.284.16077. URL http://pubsonline.informs.org/doi/abs/10.
1287/ijoc.15.3.284.16077.
Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A Fast and Elitist
Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary
Computation, 6(2):182–197, 2002.
Peng Dong, Percy Lee, Dan Ruan, Troy Long, Edwin Romeijn, Yingli Yang, Daniel
Low, Patrick Kupelian, and Ke Sheng. 4π non-coplanar liver SBRT: a novel delivery
technique. International journal of radiation oncology, biology, physics, 85(5):1360–6,
April 2013. ISSN 1879-355X. doi: 10.1016/j.ijrobp.2012.09.028. URL http://www.
ncbi.nlm.nih.gov/pubmed/23154076.
Marco Dorigo and Luca Maria Gambardella. Ant Colony System: A Cooperative Learning
Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary
Computation, 1(1):53–66, 1997.
Melissa Dunford and Karla Hoffman. Testing linear pricing algorithms for use
in ascending combinatorial auctions. George Mason . . . , pages 1–48, 2007.
URL http://www.researchgate.net/publication/228405934_Testing_linear_
pricing_algorithms_for_use_in_ascending_combinatorial_auctions/file/
72e7e5232129a8d35f.pdf.
Dominique Feillet, Michel Gendreau, Andres L. Medaglia, and Jose L. Walteros. A note
on branch-and-cut-and-price. Operations Research Letters, 38(5):346–353, September
2010. ISSN 01676377. doi: 10.1016/j.orl.2010.06.002. URL http://linkinghub.
elsevier.com/retrieve/pii/S0167637710000775.
Matteo Fischetti and Andrea Lodi. Local branching. Mathematical Programming, 98:
23–47, 2003. ISSN 0025-5610.
Marshall L. Fisher. The Lagrangian Relaxation Method for Solving Integer Programming
Problems. Management Science, 50(12 supplement):1861–1871, December 2004. ISSN
0025-1909. doi: 10.1287/mnsc.1040.0263. URL http://pubsonline.informs.org/
doi/abs/10.1287/mnsc.1040.0263.
Dimitris Fotakis, Piotr Krysta, and C Ventre. Combinatorial Auctions without Money.
arXiv preprint arXiv:1310.0177, pages 1–25, 2013. URL http://arxiv.org/abs/
1310.0177.
D. Gade and S. Kucukyavuz. Deterministic lot sizing with service levels. Technical
report, Optimization Online, 2010.
RE Gomory. Outline of an algorithm for integer solutions to linear programs. Bulletin
of the American Mathematical Society, pages 275–278, 1958. URL http://pages.
stern.nyu.edu/~rgomory/academic_papers/07_outline.pdf.
J.-W. Goossens, S. van Hoesel, and L. G. Kroon. A branch-and-cut approach for solving
railway line-planning problems. Transportation Science, 38(3):379–393, 2004.
H.W. Hamacher and K.-H. Küfer. Inverse radiation therapy planning – a multiple objective
optimization approach. Discrete Applied Mathematics, 118(1-2):145–161, April 2002.
ISSN 0166218X. doi: 10.1016/S0166-218X(01)00261-X. URL http://linkinghub.
elsevier.com/retrieve/pii/S0166218X0100261X.
Kuo-Ling Huang and Sanjay Mehrotra. An empirical evaluation of walk-and-round
heuristics for mixed integer linear programs. Computational Optimization and Applica-
tions, 55(3):545–570, February 2013. ISSN 0926-6003. doi: 10.1007/s10589-013-9540-0.
URL http://link.springer.com/10.1007/s10589-013-9540-0.
J. E. Kelley, Jr. The cutting-plane method for solving convex programs. Journal of the
Society for Industrial and Applied Mathematics, 8(4):703–713, 1960. URL http://epubs.
siam.org/doi/pdf/10.1137/0108053.
Thorsten Koch, Tobias Achterberg, Erling Andersen, Oliver Bastert, Timo Berthold,
Robert E. Bixby, Emilie Danna, Gerald Gamrath, Ambros M. Gleixner, Stefan Heinz,
Andrea Lodi, Hans Mittelmann, Ted Ralphs, Domenico Salvagnin, Daniel E. Steffy,
and Kati Wolter. MIPLIB 2010. Mathematical Programming Computation, 3(2):103–
163, 2011. doi: 10.1007/s12532-011-0025-9. URL http://mpc.zib.de/index.php/
MPC/article/view/56/28.
RH Kwon. Iterative combinatorial auctions with bidder-determined combinations. Man-
agement Science, 51(3):407–418, March 2005. ISSN 0025-1909. doi: 10.1287/mnsc.
1040.0335. URL http://pubsonline.informs.org/doi/abs/10.1287/mnsc.1040.
0335.
Roy H Kwon, Chi-guhn Lee, and Zhong Ma. An Integrated Combinatorial Auction
Mechanism for Truckload Transportation Procurement.
EL Lawler and DE Wood. Branch-and-bound methods: A survey. Operations research, 14
(4):699–719, 1966. URL http://pubsonline.informs.org/doi/abs/10.1287/opre.
14.4.699.
Chieh-Hsiu Jason Lee, Dionne M Aleman, and Michael B Sharpe. A set cover approach
to fast beam orientation optimization in intensity modulated radiation therapy for
total marrow irradiation. Physics in medicine and biology, 56(17):5679–95, September
2011. ISSN 1361-6560. doi: 10.1088/0031-9155/56/17/014. URL http://www.ncbi.
nlm.nih.gov/pubmed/21828910.
EK Lee, T Fox, and I Crocker. Integer programming applied to intensity-modulated
radiation therapy treatment planning. Annals of Operations Research, 119(June 2001):
165–181, 2003. doi: 10.1023/A:1022938707934. URL http://www.springerlink.
com/index/KP7008614G8N8235.pdf.
G. J. Lim, M. C. Ferris, S. J. Wright, D. M. Shepard, and M. a. Earl. An Optimization
Framework for Conformal Radiation Treatment Planning. INFORMS Journal on Com-
puting, 19(3):366–380, January 2007a. ISSN 1091-9856. doi: 10.1287/ijoc.1060.0179.
URL http://joc.journal.informs.org/cgi/doi/10.1287/ijoc.1060.0179.
Gino J. Lim, Jaewon Choi, and Radhe Mohan. Iterative solution methods for
beam angle and fluence map optimization in intensity modulated radiation ther-
apy planning. OR Spectrum, 30(2):289–309, July 2007b. ISSN 0171-6468. doi:
10.1007/s00291-007-0096-1. URL http://www.springerlink.com/index/10.1007/
s00291-007-0096-1.
Gino J Lim, Allen Holder, and Josh Reese. A clustering approach for optimizing beam
angles in IMRT planning. In IIE Annual Conference, number 812, pages 663–668,
2009.
Chris Loken, Daniel Gruner, Leslie Groer, Richard Peltier, Neil Bunn, Michael Craig,
Teresa Henriques, Jillian Dempsey, Ching-Hsing Yu, Joseph Chen, L Jonathan Dursi,
Jason Chong, Scott Northrup, Jaime Pinto, Neil Knecht, and Ramses Van Zon. SciNet:
Lessons Learned from Building a Power-efficient Top-20 System and Data Centre.
Journal of Physics: Conference Series, 256:012026, November 2010. ISSN 1742-6596.
doi: 10.1088/1742-6596/256/1/012026. URL http://stacks.iop.org/1742-6596/
256/i=1/a=012026?key=crossref.460dd0a7bf20e10e76a0c5799b82a3d5.
JK MacKie-Mason and HR Varian. Generalized Vickrey auctions. Working paper,
University of Michigan, 1994. URL http://141.213.232.243/handle/2027.42/41250.
V.V. Misic, D.M. Aleman, and M.B. Sharpe. Neighborhood search approaches to non-
coplanar beam orientation optimization for total marrow irradiation using IMRT.
European Journal of Operational Research, 205(3):522–527, September 2010. ISSN
03772217. doi: 10.1016/j.ejor.2010.02.019. URL http://linkinghub.elsevier.com/
retrieve/pii/S0377221710001244.
F. Ortega and L. A. Wolsey. A branch-and-cut algorithm for the single-commodity,
uncapacitated, fixed-charge network flow problem. Networks, 41(3):143–158, 2003.
Manfred Padberg. Classical cuts for mixed-integer programming and branch-and-cut.
Mathematical Methods of Operations Research, (1974):321–352, 2001. URL http://
link.springer.com/article/10.1007/s001860100120.
David C Parkes and Lyle H Ungar. Iterative Combinatorial Auctions : Theory and
Practice. In 17th National Conference on Artifical Intelligence, pages 74–81, 2000.
DC Parkes. An iterative generalized Vickrey auction: Strategy-proofness with-
out complete revelation. Proceedings of AAAI Spring Symposium on Game
. . . , 2001. URL http://www.aaai.org/Papers/Symposia/Spring/2001/SS-01-03/
SS01-03-010.pdf.
J Patel and JW Chinneck. Active-constraint variable ordering for faster feasibility of
mixed integer linear programs. Mathematical Programming, pages 1–29, 2007. URL
http://link.springer.com/article/10.1007/s10107-006-0009-0.
A Pekec and MH Rothkopf. Combinatorial auction design. Management Science, 49
(11):1485–1503, 2003. URL http://mansci.journal.informs.org/content/49/11/
1485.short.
Marc E. Pfetsch. Branch-and-cut for the maximum feasible subsystem problem. SIAM
Journal on Optimization, 19:21–38, 2008.
Y. Pochet and M. Van Vyve. A general heuristic for production planning problems.
INFORMS Journal on Computing, 16(3), 2004.
David Porter, Stephen Rassenti, Anil Roopnarine, and Vernon Smith. Combinato-
rial auction design. Proceedings of the National Academy of Sciences of the United
States of America, 100(19):11153–7, September 2003. ISSN 0027-8424. doi: 10.
1073/pnas.1633736100. URL http://www.pubmedcentral.nih.gov/articlerender.
fcgi?artid=196943&tool=pmcentrez&rendertype=abstract.
Jean-Yves Potvin, Guy Lapalme, and Jean-Marc Rousseau. A generalized k-opt exchange
procedure for the MTSP. INFOR, 27(4), 1989.
H. Edwin Romeijn and James F. Dempsey. Intensity modulated radiation therapy treat-
ment plan optimization. Top, 16(2):215–243, November 2008. ISSN 1134-5764. doi:
10.1007/s11750-008-0064-1. URL http://www.springerlink.com/index/10.1007/
s11750-008-0064-1.
HE Romeijn, RK Ahuja, JF Dempsey, and Arvind Kumar. A column generation approach
to radiation therapy treatment planning using aperture modulation. SIAM Journal
on Optimization, 15(3):838–862, 2005. URL http://epubs.siam.org/doi/abs/10.
1137/040606612.
Ehsan Salari and H Edwin Romeijn. Quantifying the Trade-off Between IMRT Treat-
ment Plan Quality and Delivery Efficiency Using Direct Aperture Optimization.
INFORMS Journal on Computing, 24(4):518–533, August 2011. ISSN 1091-9856.
doi: 10.1287/ijoc.1110.0474. URL http://joc.journal.informs.org/cgi/doi/10.
1287/ijoc.1110.0474.
Harald Schilly. Modellierung und Implementation eines Vorlesungsplaners [Modelling and
implementation of a lecture planner]. Diploma thesis, Universität Wien, 2007.
David M Shepard, Michael C Ferris, Gustavo H Olivera, and T Rockwell Mackie. Op-
timizing the Delivery of Radiation Therapy to Cancer Patients. SIAM Review, 41
(4):721–744, January 1999. ISSN 0036-1445. doi: 10.1137/S0036144598342032. URL
http://epubs.siam.org/doi/abs/10.1137/S0036144598342032.
C Tsallis and DA Stariolo. Generalized simulated annealing. Physica A: Statistical Me-
chanics and its Applications, 1996. URL http://www.sciencedirect.com/science/
article/pii/S0378437196002713.
F Vanderbeck and LA Wolsey. An exact algorithm for IP column generation. Operations
Research Letters, 1996. URL http://www.sciencedirect.com/science/article/
pii/0167637796000338.
Mu Xia, Gary J. Koehler, and Andrew B. Whinston. Pricing combinatorial auctions. Eu-
ropean Journal of Operational Research, 154(1):251–270, April 2004. ISSN 03772217.
doi: 10.1016/S0377-2217(02)00678-1. URL http://linkinghub.elsevier.com/
retrieve/pii/S0377221702006781.
H. H. Zhang, L. Shi, R. Meyer, D. Nazareth, and W. D’Souza. Solving Beam-Angle
Selection and Dose Optimization Simultaneously via High-Throughput Computing.
INFORMS Journal on Computing, 21(3):427–444, November 2008. ISSN 1091-9856.
doi: 10.1287/ijoc.1080.0297. URL http://joc.journal.informs.org/cgi/doi/10.
1287/ijoc.1080.0297.