Restricted robust optimization for maximization over uniform matroid with interval data uncertainty


Restricted Robust Optimization for Maximization over Uniform Matroid with Interval Data Uncertainty

H. Yaman∗ O.E. Karasan∗ M.C. Pınar∗

June 16, 2004

Abstract

For the problem of selecting p items with uncertain (interval) objective function coefficients so as to maximize total profit (maximization over uniform matroid), we introduce the r-restricted robust deviation criterion, or the r-robust solution for short. We seek solutions that minimize the r-restricted robust deviation. This new criterion increases the modeling power of the robust deviation (minmax regret) criterion by reducing the level of conservativeness of the robust solution. It is shown that r-restricted robust deviation solutions can be computed efficiently. Results of extensive computational experiments and comparisons with the absolute robustness, robust deviation and restricted absolute robustness criteria are reported and discussed.

Key words. Maximization over uniform matroid, robust optimization with interval objective function coefficients, r-restricted robust deviation.

1 Introduction

The purpose of this paper is to introduce a new measure of robustness called the r-restricted robust deviation, and to investigate its applicability within the context of a well-known problem from combinatorial optimization, namely the problem of selecting p items so as to maximize total profit. It is assumed that the profit coefficients in the objective function of the problem are uncertain in the sense that each of them can assume any value within a finite interval. This type of problem has been introduced and investigated in a series of papers and a monograph by Kouvelis and Yu (and co-authors) [12, 16, 24, 25]. Subsequent contributions include [1, 2, 3, 4, 11, 15, 17, 18, 19, 20, 21, 23], although the list is by no means exhaustive. The unifying theme in all these references is that they treat one or several well-known combinatorial optimization problems, e.g., minimum spanning tree, shortest path, etc., where the data are not known with precision. It is usually assumed that the data behave according to some finite or infinite set of scenarios, or that the data elements can assume any value in some interval. As new concepts of optimality were needed for such situations, the contributors to this area proposed to seek a solution that minimizes (resp. maximizes) some measure of worst performance, i.e., a solution that makes the maximum (resp. the minimum) of a performance measure as small (resp. as large) as possible. This paradigm gave rise to the concepts of absolute robustness (also known as the minmax criterion) and robust deviation (or minmax regret) minimization. There are different definitions of robust optimization problems in the literature [5, 6, 7, 8, 9, 10, 13, 14], although one way or another these approaches also boil down to a minmax or maxmin optimization context. In particular, the approach advocated by Ben-Tal and Nemirovski [6, 7, 8] extensively studied convex optimization problems under ellipsoidal data uncertainty, although it is not immediately applicable to discrete optimization. Another model, by Bertsimas and Sim [9, 10], adopts the interval model of uncertainty and proposes a restricted version of the absolute robustness criterion, although this connection is overlooked by the authors. Their point of departure is to limit the conservativeness of the robust solution by arguing that it is quite unlikely that all data elements will assume their worst possible values simultaneously, whereas both the absolute robustness and robust deviation criteria seek solutions for such a contingency.

∗Bilkent University, Department of Industrial Engineering, Bilkent, 06800, Ankara, Turkey

The picture emerging from these references can be summarized as follows. In the scenario model of uncertainty, and already with the absolute robustness criterion, even the simplest combinatorial optimization problems become intractable. This observation motivated Bertsimas and Sim to develop their restricted absolute robustness paradigm. Another distinguishing feature of the Bertsimas-Sim contribution is the ability to control the conservativeness of the robust solutions by restricting their protective power to only a subset of all possible contingencies and establishing a probabilistic guarantee on the remaining events for which no protection is offered.

Using the robust deviation criterion under the scenario model of uncertainty, which is a conservative criterion, also leads to intractable problems in general. Under the interval model of uncertainty, the picture is somewhat blurred, with a mix of positive and negative results. One positive result about tractability in this direction was obtained by Averbakh [1], and further improved by Conde [11]; these constitute the starting point of the present paper. Inspired by the work of Bertsimas and Sim, we develop a restricted version of the robust deviation criterion for the problem of Averbakh and Conde, namely, the problem of selecting p elements out of n elements so as to maximize total profit. This problem is also known as the problem of maximization over a uniform matroid and is solvable by a simple procedure in O(n) time (see [11]) if the data are known with certainty. Under the interval model of uncertainty of the objective function coefficients, and using the robust deviation criterion, Averbakh gave a polynomial time algorithm, which was recently improved by Conde. In the present paper, we derive the r-restricted robust deviation version of the problem and show that it is solvable in polynomial time as well. The main inspiration of this work is the contribution of Bertsimas and Sim, whereby we develop a restricted version of the robust deviation criterion with the aim of limiting its conservativeness. Although we do not have probabilistic guarantees, unlike previous papers in the literature we compare the performances of the absolute robust, restricted absolute robust (Bertsimas-Sim) and robust deviation solutions with the r-restricted robust deviation solution on a large number of randomly generated problem instances.

The rest of the paper is organized as follows. In Section 2 we give background on robustness criteria, illustrated on the maximization problem over a uniform matroid. In Section 3 we formulate the r-restricted robust problem. In Section 4 we establish its polynomial solvability. Finally, Section 5 is devoted to a description of our numerical tests, results and their discussion.

2 Background

Let a discrete ground set N of n items be given. Furthermore, let p ≤ n. We consider the following problem

(P)   max  ∑i∈N ci xi
      s.t. ∑i∈N xi = p                               (1)
           xi ∈ {0, 1}   ∀i ∈ N.                     (2)

Let F denote the set of feasible solutions. It is assumed that the objective function coefficient of i ∈ N, denoted by ci, is not known with certainty, but it is known that it takes a value in the interval [li, ui]. For i ∈ N, we define wi = ui − li and assume that wi ≥ 0.

Let S denote the set of scenarios. The set S is the Cartesian product of all intervals. For s ∈ S, c(s) denotes the vector of objective function coefficients in scenario s.

Following Kouvelis and Yu [16], we have the following definitions.

Definition 1 The worst performance for x ∈ F is a_x = min_{s∈S} c(s)T x. A solution x∗ is called an absolute robust solution if x∗ ∈ argmax_{x∈F} a_x. The problem of finding an absolute robust solution is called the absolute robust problem (AR).

The AR can be solved easily by solving problem P for the scenario s such that c(s)i = li for all i ∈ N.
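For concreteness, the short Python sketch below carries out exactly this: it fixes every coefficient at its lower bound and picks the p items with the largest li. The function name and the tiny data set are our own illustration, not taken from the paper.

```python
def absolute_robust_solution(l, p):
    """Solve AR: fix every coefficient at its lower bound l[i] and
    pick the p items with the largest lower bounds."""
    order = sorted(range(len(l)), key=lambda i: l[i], reverse=True)
    return set(order[:p])

# tiny made-up example
l = [4, 7, 1, 9, 5]
print(absolute_robust_solution(l, p=2))   # -> {1, 3}: the two largest lower bounds
```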

Definition 2 The robust deviation for x ∈ F is d_x = max_{s∈S}(max_{y∈F} c(s)T y − c(s)T x). A solution x∗ is called a robust deviation solution if x∗ ∈ argmin_{x∈F} d_x. The problem of finding a robust deviation solution is called the robust deviation problem (RD).

It was shown in [1, 11] that

d_x = max_{y∈F} ∑i∈N (ui − wi xi) yi − ∑i∈N li xi.

As conv(F) = {y ∈ R^n_+ : ∑i∈N yi = p, yi ≤ 1 ∀i ∈ N}, by the Strong Duality Theorem (SDT) of Linear Programming (LP), we have

d_x = min_{(λ,γ)∈Λ} (pλ + ∑i∈N γi) − ∑i∈N li xi

where Λ = {(λ, γ) ∈ R^{n+1} : λ + γi ≥ ui − wi xi and γi ≥ 0 ∀i ∈ N}. Therefore RD is formulated as:

(RD)   min  pλ + ∑i∈N γi − ∑i∈N li xi
       s.t. (1) and (2)
            λ + γi ≥ ui − wi xi   ∀i ∈ N
            γi ≥ 0                ∀i ∈ N.
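Note that for a fixed x ∈ F, d_x is easy to evaluate directly from the closed form recalled above: the inner maximization simply picks the p largest coefficients ui − wi xi. A minimal Python sketch of this evaluation (our own illustration, with a hypothetical helper name):

```python
def robust_deviation(x, l, u, p):
    """d_x = max_{y in F} sum_i (u_i - w_i x_i) y_i - sum_i l_i x_i for a fixed
    0-1 vector x: the inner maximum takes the p largest coefficients."""
    n = len(x)
    w = [u[i] - l[i] for i in range(n)]
    coef = [u[i] - w[i] * x[i] for i in range(n)]     # equals l_i whenever x_i = 1
    inner_max = sum(sorted(coef, reverse=True)[:p])
    return inner_max - sum(l[i] for i in range(n) if x[i] == 1)
```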

Problem RD was shown to be polynomially solvable in [1, 11]. Now, following Bertsimas and Sim [9, 10], we give our definition of restricted robust problems. For 1 ≤ r ≤ n, define S(r) = {s ∈ S : c(s)i < ui for at most r items} and Z(r) = {z ∈ {0, 1}^n : ∑i∈N zi ≤ r}.


Definition 3 The r-restricted worst performance for x ∈ F is a^r_x = min_{s∈S(r)} c(s)T x. A solution x∗ is called an r-restricted absolute robust solution if x∗ ∈ argmax_{x∈F} a^r_x. The problem of finding an r-restricted absolute robust solution is called the r-restricted absolute robust problem (r-RAR).

For x ∈ F, the r-restricted worst performance can be computed as

a^r_x = ∑i∈N ui xi − max_{z∈Z(r)} ∑i∈N wi zi xi.
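In words, the inner maximization over Z(r) drops the r largest interval widths among the selected items. For illustration, a small Python sketch of this evaluation (our own code, hypothetical helper name):

```python
def restricted_worst_performance(x, l, u, r):
    """a^r_x: the upper-bound profit of the selected items minus the r largest
    interval widths w_i among those items (the inner maximization over Z(r))."""
    w = [u[i] - l[i] for i in range(len(x))]
    selected = [i for i in range(len(x)) if x[i] == 1]
    drop = sum(sorted((w[i] for i in selected), reverse=True)[:r])
    return sum(u[i] for i in selected) - drop
```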

As conv(Z(r)) = {z ∈ R^n_+ : ∑i∈N zi ≤ r, zi ≤ 1 ∀i ∈ N}, by the SDT of LP, we have

a^r_x = ∑i∈N ui xi − min_{(µ,γ)∈M} (µr + ∑i∈N γi)

where M = {(µ, γ) ∈ R^{n+1}_+ : µ + γi ≥ wi xi ∀i ∈ N}. Therefore the r-RAR is formulated as follows:

(r-RAR)   max  ∑i∈N ui xi − µr − ∑i∈N γi
          s.t. (1) and (2)
               µ + γi ≥ wi xi   ∀i ∈ N
               µ ≥ 0
               γi ≥ 0           ∀i ∈ N.

This problem can also be shown to be polynomially solvable using ideas from [11]. However, we do not elaborate on the details here.

Now, we can define the problem of interest for the present paper.

Definition 4 The r-restricted robust deviation for x ∈ F is

d^r_x = max_{s∈S(r)} (max_{y∈F} c(s)T y − c(s)T x).

A solution x∗ is called an r-restricted robust deviation solution if x∗ ∈ argmin_{x∈F} d^r_x. The problem of finding an r-restricted robust deviation solution is called the r-restricted robust deviation problem (r-RRD).

These problems are related in the following way:

Proposition 1 For x ∈ F, a^r_x = a^n_x = a_x for all r ≥ p and d^r_x = d^n_x = d_x for all r ≥ min{p, n − p}.

Proof. As S(1) ⊆ S(2) ⊆ . . . ⊆ S(n) = S, for x ∈ F, we have a^1_x ≥ a^2_x ≥ . . . ≥ a^n_x = a_x and d^1_x ≤ d^2_x ≤ . . . ≤ d^n_x = d_x.

For r > p, let a^r_x = min_{s∈S(r)} c(s)T x = c(s∗)T x. Construct scenario s′ as follows: for i ∈ N, if c(s∗)i < ui and xi = 0, then c(s′)i = ui, and c(s′)i = c(s∗)i otherwise. Then s′ ∈ S(p) and c(s∗)T x = c(s′)T x. As a^p_x = min_{s∈S(p)} c(s)T x ≤ c(s′)T x, we have a^p_x ≤ a^r_x. We also know that a^p_x ≥ a^r_x since r > p. So a^p_x = a^r_x for all r > p.

Let p̄ = min{p, n − p}. For r > p̄, let d^r_x = max_{s∈S(r)}(max_{y∈F} c(s)T y − c(s)T x) = c(s∗)T y∗ − c(s∗)T x. Construct scenario s′ as follows: for i ∈ N, if c(s∗)i < ui and (xi = 0 or y∗i = 1), then c(s′)i = ui, and c(s′)i = c(s∗)i otherwise. Then s′ ∈ S(p̄) and c(s∗)T y∗ − c(s∗)T x ≤ c(s′)T y∗ − c(s′)T x. So d^r_x ≤ c(s′)T y∗ − c(s′)T x. As d^p̄_x ≥ c(s′)T y∗ − c(s′)T x, we have d^r_x ≤ d^p̄_x. Together with d^r_x ≥ d^p̄_x, this implies that d^r_x = d^p̄_x for r > p̄. □


3 Structural Results and Formulation

In this section we present a MIP formulation of the r-restricted robust problem after some intermediate results.

Proposition 2 For x ∈ F,

d^r_x = max_{z∈Z(r): yi+zi≤1 ∀i∈N} (max_{y∈F} ∑i∈N ui yi − ∑i∈N (ui − wi zi) xi).

Proof. Let s∗ and y∗ be such that d^r_x = c(s∗)T y∗ − c(s∗)T x and define z∗ and v∗ as follows: for i ∈ N, if c(s∗)i < ui then z∗i = 1 and v∗i = ui − c(s∗)i, and if c(s∗)i = ui then z∗i = 0 and v∗i = 0.

Consider the scenario s′ such that c(s′)i = li if z∗i = 1 and y∗i = 0, and c(s′)i = ui otherwise.

For a 0-1 vector x, define its support I(x) = {i ∈ N : xi = 1}. Then

c(s∗)T y∗ − c(s∗)T x = ∑_{i∈I(y∗)\I(x)} c(s∗)i − ∑_{i∈I(x)\I(y∗)} c(s∗)i
                     = ∑_{i∈I(y∗)\I(x)} (ui − v∗i z∗i) − ∑_{i∈I(x)\I(y∗)} (ui − v∗i z∗i)
                     ≤ ∑_{i∈I(y∗)\I(x)} ui − ∑_{i∈I(x)\I(y∗)} (ui − wi z∗i)
                     = c(s′)T y∗ − c(s′)T x
                     ≤ max_{y∈F} c(s′)T y − c(s′)T x
                     ≤ max_{s∈S(r)} (max_{y∈F} c(s)T y − c(s)T x).

As d^r_x = max_{s∈S(r)}(max_{y∈F} c(s)T y − c(s)T x), all the above inequalities are satisfied at equality. Hence, there exists an optimal solution where yi + zi ≤ 1 for all i ∈ N and the objective function coefficient of item i is at its lower bound li whenever zi = 1. □

Therefore, for a given x ∈ F, its r-restricted robust deviation can be computed as d^r_x = f^r_x − ∑i∈N ui xi, where

f^r_x = max  ∑i∈N (ui yi + wi xi zi)
        s.t. ∑i∈N yi = p                             (3)
             ∑i∈N zi ≤ r                             (4)
             yi + zi ≤ 1   ∀i ∈ N                    (5)
             yi ∈ {0, 1}   ∀i ∈ N                    (6)
             zi ∈ {0, 1}   ∀i ∈ N.                   (7)
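For very small instances, this definition of f^r_x can be checked by brute force. The sketch below is our own illustrative code (exponential in n, intended only as a sanity check); it enumerates every feasible (y, z) pair satisfying (3)–(7).

```python
from itertools import combinations

def f_r_x_bruteforce(x, u, w, p, r):
    """Brute-force evaluation of f^r_x for tiny n: enumerate every y with
    sum(y) = p and every z with sum(z) <= r and y_i + z_i <= 1."""
    n = len(x)
    best = float("-inf")
    for y_set in combinations(range(n), p):              # constraints (3), (6)
        rest = [i for i in range(n) if i not in y_set]
        for k in range(r + 1):                           # constraints (4), (7)
            for z_set in combinations(rest, k):          # constraint (5)
                val = sum(u[i] for i in y_set) + sum(w[i] * x[i] for i in z_set)
                best = max(best, val)
    return best

# d^r_x is then f_r_x_bruteforce(x, u, w, p, r) minus the total upper-bound
# profit of the items selected by x, as stated above.
```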


Theorem 1 Problem r-RRD can be formulated as follows:

(r-RRD)   min  pλ + rµ + ∑i∈N γi − ∑i∈N ui xi
          s.t. (1) and (2)
               λ + γi ≥ ui       ∀i ∈ N              (8)
               µ + γi ≥ wi xi    ∀i ∈ N              (9)
               µ ≥ 0                                 (10)
               γi ≥ 0            ∀i ∈ N.             (11)

Proof. Let F^r_x = {(y, z) ∈ R^{2n}_+ : (3)–(7)}. We first show that conv(F^r_x) = {(y, z) ∈ R^{2n}_+ : (3)–(5)}. Let H be the matrix of left hand side coefficients of the constraints

∑i∈N yi ≤ p
−∑i∈N yi ≤ −p

and constraints (4) and (5). Let e be the n-vector of 1's, 0 be the n-vector of 0's and I be the n × n identity matrix. Then

H = [  e^T   0^T
      −e^T   0^T
       0^T   e^T
        I     I  ].

Next we show that H is totally unimodular (TU). Matrix H is TU if each collection of columns of H can be split into two parts so that the sum of the columns in one part minus the sum of the columns in the other part is a vector with entries only 0, +1 and −1 (see Schrijver [22], p. 269, Theorem 19.3). Let H1 = {h^1_1, h^1_2, . . . , h^1_n} be the set of the first n columns of H and let H2 = {h^2_1, h^2_2, . . . , h^2_n} be the set of the last n columns of H. Given a set C of columns of H (we can consider sets instead of collections, since a repeated column is put in two different parts), we can partition C into two parts C1 and C2 such that the difference of the sum of the columns in C1 and the sum of the columns in C2 has components 0, +1 and −1, as follows. Without loss of generality, suppose that {1, 2, . . . , k} is the set of indices i such that h^1_i ∈ C and h^2_i ∈ C, {k + 1, k + 2, . . . , l} is the set of indices i such that h^1_i ∈ C and h^2_i ∉ C, and {l + 1, l + 2, . . . , m} is the set of indices i such that h^1_i ∉ C and h^2_i ∈ C. For j = 1, 2, . . . , k, if j is odd, put h^1_j in C1 and h^2_j in C2, and if j is even, put h^1_j in C2 and h^2_j in C1. For j = 1, 2, . . . , l − k, if j is odd, put h^1_{j+k} in C2, and if j is even, put h^1_{j+k} in C1. For j = 1, 2, . . . , m − l, if j is odd, put h^2_{j+l} in C1, and if j is even, put h^2_{j+l} in C2.

As H is TU and the right hand side vector is integral, {(y, z) ∈ R^{2n}_+ : (3)–(5)} is an integral polytope (see Schrijver [22], p. 268, Corollary 19.2a).

As a consequence of this observation, for a given x, the r-restricted robust deviation can be computed by solving a linear program. Associate dual variables λ with constraint (3), µ with constraint (4) and γi with constraint (5) for all i ∈ N. Then the SDT of LP implies that

f^r_x = min  pλ + rµ + ∑i∈N γi
        s.t. (8) − (11).                             (12)

This concludes the proof. □

It is easy to verify that when r = p the formulation of Theorem 1 reduces to the RD formulation of [11].
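As an illustration of how the Theorem 1 formulation could be put to work directly, here is a sketch using PuLP with its bundled CBC solver. PuLP is our choice, not the paper's; the function name and solver settings are assumptions, and any MIP solver would do.

```python
import pulp

def solve_r_rrd_mip(l, u, p, r):
    """Sketch of the Theorem 1 MIP for r-RRD: binary x, free lambda,
    nonnegative mu and gamma, constraints (1), (8)-(11)."""
    n = len(l)
    w = [u[i] - l[i] for i in range(n)]
    prob = pulp.LpProblem("r_RRD", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    gamma = [pulp.LpVariable(f"gamma{i}", lowBound=0) for i in range(n)]
    lam = pulp.LpVariable("lambda")              # free, dual of (3)
    mu = pulp.LpVariable("mu", lowBound=0)       # (10), dual of (4)
    # objective: p*lambda + r*mu + sum gamma_i - sum u_i x_i
    prob += p * lam + r * mu + pulp.lpSum(gamma) - pulp.lpSum(u[i] * x[i] for i in range(n))
    prob += pulp.lpSum(x) == p                   # (1)
    for i in range(n):
        prob += lam + gamma[i] >= u[i]           # (8)
        prob += mu + gamma[i] >= w[i] * x[i]     # (9)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    chosen = [i for i in range(n) if x[i].value() > 0.5]
    return chosen, pulp.value(prob.objective)
```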

4 Solvability Status of the r-Restricted Robust Problem

In this section we establish a key property of the optimal solution set of the problem r-RRD. More precisely, we show below that there exists an optimal solution where the search space for λ and µ is significantly reduced.

Let L = {l1, l2, . . . , ln}, U = {u1, u2, . . . , un} and W = {w1, w2, . . . , wn}.

Theorem 2 There exists (x∗, λ∗, µ∗, γ∗) optimal for r-RRD such that the following statements are true:

a. Either λ∗ − µ∗ ∈ L, or both µ∗ ∈ W and λ∗ ∈ U.

b. If µ∗ > 0 then either λ∗ ∈ U or µ∗ ∈ W .

Proof. Let (x∗, λ∗, µ∗, γ∗) be an extreme point optimal solution. Without loss of generality, assume x∗i = 1 for i ∈ {1, . . . , p} and x∗i = 0 otherwise. Then

γ∗i = max{wi − µ∗, ui − λ∗, 0} for i ∈ {1, . . . , p},

and

γ∗i = max{ui − λ∗, 0} for i ∈ {p + 1, . . . , n}.

Define A = {i ∈ {1, . . . , p} : wi − µ∗ > ui − λ∗}. We subdivide A into A1, A2 and A3 such that A1 = {i ∈ A : wi − µ∗ > 0}, A2 = {i ∈ A : wi − µ∗ = 0} and A3 = {i ∈ A : wi − µ∗ < 0}. Let B = {i ∈ {1, . . . , p} : wi − µ∗ = ui − λ∗} be subdivided into B1 = {i ∈ B : wi − µ∗ = ui − λ∗ > 0}, B2 = {i ∈ B : wi − µ∗ = ui − λ∗ = 0} and B3 = {i ∈ B : wi − µ∗ = ui − λ∗ < 0}. We also have C = {i ∈ {1, . . . , p} : wi − µ∗ < ui − λ∗} partitioned into C1 = {i ∈ C : ui − λ∗ > 0}, C2 = {i ∈ C : ui − λ∗ = 0} and C3 = {i ∈ C : ui − λ∗ < 0}. We make the analogous definitions for the indices in {p + 1, . . . , n}. Namely, let D = {i ∈ {p + 1, . . . , n} : ui − λ∗ > 0}, E = {i ∈ {p + 1, . . . , n} : ui − λ∗ = 0} and F = {i ∈ {p + 1, . . . , n} : ui − λ∗ < 0}.

First, for the proof of part a, assume that neither λ∗ − µ∗ ∈ L nor µ∗ ∈ W. This implies A2 = B = ∅. Now, we have γ∗i = wi − µ∗ for i ∈ A1, γ∗i = 0 for i ∈ A3, γ∗i = ui − λ∗ for i ∈ C1, γ∗i = ui − λ∗ = 0 for i ∈ C2, γ∗i = 0 for i ∈ C3, γ∗i = ui − λ∗ for i ∈ D, γ∗i = ui − λ∗ = 0 for i ∈ E and finally γ∗i = 0 for i ∈ F. Let (x∗, λ∗, µ∗, γ∗_A1, γ∗_A3, γ∗_C1, γ∗_C2, γ∗_C3, γ∗_D, γ∗_E, γ∗_F) be the vectorial description of the current solution. For a given set S and some ε, let ε_S be a vector of dimension |S| with all entries equal to ε. Then both

(x∗, λ∗, µ∗ + ε, γ∗_A1 − ε_A1, γ∗_A3, γ∗_C1, γ∗_C2, γ∗_C3, γ∗_D, γ∗_E, γ∗_F)

and

(x∗, λ∗, µ∗ − ε, γ∗_A1 + ε_A1, γ∗_A3, γ∗_C1, γ∗_C2, γ∗_C3, γ∗_D, γ∗_E, γ∗_F)

are feasible solutions of r-RRD for small enough ε > 0, which contradicts the extremality of the starting optimal solution.

Now, assume that neither λ∗ − µ∗ ∈ L nor λ∗ ∈ U. This implies that B = C2 = E = ∅. If we proceed as above, we can conclude easily that both

(x∗, λ∗ + ε, µ∗, γ∗_A1, γ∗_A2, γ∗_A3, γ∗_C1 − ε_C1, γ∗_C3, γ∗_D − ε_D, γ∗_F)

and

(x∗, λ∗ − ε, µ∗, γ∗_A1, γ∗_A2, γ∗_A3, γ∗_C1 + ε_C1, γ∗_C3, γ∗_D + ε_D, γ∗_F)

are again feasible solutions to r-RRD for small enough ε > 0.

For part b, assume that µ∗ > 0 but neither λ∗ ∈ U nor µ∗ ∈ W. This implies that A2 = B2 = C2 = E = ∅. For small enough ε > 0, both

(x∗, λ∗ + ε, µ∗ + ε, γ∗_A1 − ε_A1, γ∗_A3, γ∗_B1 − ε_B1, γ∗_B3, γ∗_C1 − ε_C1, γ∗_C3, γ∗_D − ε_D, γ∗_F)

and

(x∗, λ∗ − ε, µ∗ − ε, γ∗_A1 + ε_A1, γ∗_A3, γ∗_B1 + ε_B1, γ∗_B3, γ∗_C1 + ε_C1, γ∗_C3, γ∗_D + ε_D, γ∗_F)

are feasible, and this is again in contradiction with the extremality of our optimal solution. □

An important consequence of this theorem is that the solution of the problem r-RRD for fixed λ and µ boils down to the minimization of a piecewise linear function, very much like problem (3) of [11], the evaluation of which can be accomplished in O(n) time. More precisely, for fixed λ and µ, we solve the following problem

f(λ, µ) = pλ + rµ + min ∑i∈N max{ui − λ, wi xi − µ, 0} − ∑i∈N ui xi
          s.t. (1) and (2).                          (13)

Therefore, we have transformed r-RRD into

min_{λ,µ} f(λ, µ)

with f(λ, µ) defined above. We solve the latter problem by testing the critical values of λ and µ given by Theorem 2, as follows. If λ ∈ U and µ ∈ W, then we need to test n² values. Otherwise, λ − µ ∈ L. If µ = 0, then λ ∈ L and can thus take n different values. Finally, if µ > 0, then either λ ∈ U or µ ∈ W, resulting in an additional 2n² values to test. We simply pick the λ and µ values yielding the smallest objective function value. Therefore, we have proved the following result.

Theorem 3 Problem r-RRD can be solved in O(n³) time.
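The following Python sketch spells out one way this procedure could be organized. It is our own reading of the algorithm above, not pseudocode from the paper; it uses sorting instead of linear-time selection, so as written it runs in O(n³ log n) rather than O(n³).

```python
def solve_r_rrd_enumeration(l, u, p, r):
    """Enumerate the candidate (lambda, mu) pairs suggested by Theorem 2 and
    evaluate f(lambda, mu) for each; the inner minimization over x in F is a
    selection of the p items with the smallest marginal cost."""
    n = len(l)
    w = [u[i] - l[i] for i in range(n)]

    def f(lam, mu):
        cost0 = [max(u[i] - lam, 0.0) for i in range(n)]                    # item left out (x_i = 0)
        cost1 = [max(u[i] - lam, w[i] - mu, 0.0) - u[i] for i in range(n)]  # item selected (x_i = 1)
        chosen = sorted(range(n), key=lambda i: cost1[i] - cost0[i])[:p]
        value = p * lam + r * mu + sum(cost0) + sum(cost1[i] - cost0[i] for i in chosen)
        return value, set(chosen)

    candidates = [(lam, mu) for lam in u for mu in w]              # lambda in U, mu in W
    candidates += [(lam, 0.0) for lam in l]                        # mu = 0, lambda - mu in L
    candidates += [(li + wi, wi) for li in l for wi in w]          # mu in W, lambda - mu in L
    candidates += [(ui, ui - li) for ui in u for li in l]          # lambda in U, lambda - mu in L

    best_val, best_x = None, None
    for lam, mu in candidates:
        if mu < 0:                                                 # mu must be nonnegative, (10)
            continue
        val, x_set = f(lam, mu)
        if best_val is None or val < best_val:
            best_val, best_x = val, x_set
    return best_val, best_x
```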

5 Experiments

In this section we provide experimental evidence of the effectiveness of the r-restricted robust deviation criterion on the problem of robust maximization over a uniform matroid.

In the paragraphs below, when we speak of an uncertain problem we mean that we fix n and p and generate a random interval for each objective function coefficient, i.e., li and ui values for each i. Then by an instance of an uncertain problem we mean a random scenario of objective function coefficients within the prespecified intervals while everything else is kept fixed.


[Figure omitted in this text version: plot of robust deviation versus r, panel title p = 100.]

Figure 1: Robust deviation values obtained from the r-restricted robust solutions as a function of r: p = 100

We report the outcome of two experiments. In the first one, we aim to demonstrate that the choice of r is not critical to the performance of the criterion. To measure performance, we compute the robust deviation value using the optimal solution obtained from solving the problem r-RRD, i.e., the r-robust solution. To conduct this experiment we randomly generate an uncertain problem with n = 500, where we take the li values uniformly distributed in the interval (−10000, 10000), the wi values uniformly distributed in the interval (0, 2000), and we obtain the ui values by simply adding wi to li for each i from 1 to n. Keeping all data fixed, we vary r from 1 to min{p, n − p} and plot the robust deviation values d_x of Definition 2 as a function of r. This test was repeated several times with different uncertain problems. A typical result is shown in Figure 1, where we observe that the robust deviation value obtained from solving r-RRD quickly converges to the minmax-regret value already for small values of r. We can reach two conclusions from this experiment. The first one is that the choice of r is immaterial from a certain value on, and this threshold value is typically reached very quickly. The second conclusion is that once this threshold value of r is reached, the minmax regret and r-restricted robust solutions are indistinguishable as far as the smallest value of maximum regret is concerned. This is an interesting experimental finding in the sense that by introducing the r-restricted deviation we hedge ourselves only against damage caused by the bad behavior of at most r coefficients; yet full protection is already achieved much earlier, before reaching r = min{p, n − p}.
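For reference, a minimal sketch of the data generation just described (our own code; the paper does not specify an implementation or a random seed):

```python
import random

def generate_uncertain_problem(n=500, seed=None):
    """Draw l_i uniformly from (-10000, 10000), w_i from (0, 2000), u_i = l_i + w_i."""
    rng = random.Random(seed)
    l = [rng.uniform(-10000, 10000) for _ in range(n)]
    w = [rng.uniform(0, 2000) for _ in range(n)]
    u = [l[i] + w[i] for i in range(n)]
    return l, u
```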

Our second experiment consists in comparing the relative performance of the four robustness measures discussed in the present paper. We report the results of the second experiment in Figures 2, 3, 4, 5 and Table 1 below. The reason for separating the results into four figures and one table is to avoid encumbering the plots unnecessarily with four different solutions and to see clearly the relative behavior of the r-robust solution vis-a-vis the different criteria. For Figures 2–5, we randomly generate what we call "extreme" problem instances in order to be able to observe a distinct behavior of the r-robust and the robust deviation solutions, which is usually impossible to obtain without such extreme instances. These instances are generated as follows. For a fixed r, we take the objective function coefficients at their lower bounds with probability r/n and at their upper bounds with probability 1 − r/n.


[Figure omitted in this text version: average percentage deviation versus r-value; legend: r-RRD, RD.]

Figure 2: Average percentage deviations of robust deviation and r-robust solutions from the optimal value for extreme problem instances: p = 50

[Figure omitted in this text version: average percentage deviation versus r-value; legend: r-RRD, RD.]

Figure 3: Average percentage deviations of robust deviation and r-robust solutions from the optimal value for extreme problem instances: p = 100


[Figure omitted in this text version: average percentage deviation versus r-value; legend: r-RRD, RD.]

Figure 4: Average percentage deviations of robust deviation and r-robust solutions from the optimal value for extreme problem instances: p = 200

[Figure omitted in this text version: average percentage deviation versus r-value; legend: r-RRD, RD.]

Figure 5: Average percentage deviations of robust deviation and r-robust solutions from the optimal value for extreme problem instances: p = 250


                       ℓ1 norm                        ℓ∞ norm
p, r        r-RRD     r-AR      AR          r-RRD     r-AR      AR
50, 10      1.21725   1.45410   1.78714     0.00628   0.00619   0.00734
50, 25      1.21725   1.45410   1.78714     0.00628   0.00619   0.00734
50, 40      1.21725   1.42469   1.78714     0.00628   0.00599   0.00734
100, 20     1.02834   1.10572   1.50877     0.00548   0.00494   0.00612
100, 50     1.02834   1.11981   1.50877     0.00548   0.00485   0.00612
100, 80     1.02834   1.26698   1.50877     0.00548   0.00551   0.00612
200, 40     0.66495   1.06573   0.83099     0.00301   0.00416   0.00339
200, 100    0.66495   0.72955   0.83099     0.00301   0.00325   0.00339
200, 160    0.66495   0.83099   0.83099     0.00301   0.00339   0.00339
250, 50     0.23078   0.46192   0.63424     0.00134   0.00220   0.00265
250, 125    0.23078   0.23078   0.63424     0.00134   0.00134   0.00265
250, 200    0.23078   0.63476   0.63424     0.00134   0.00257   0.00265

Table 1: ℓ1 and ℓ∞ norms of error vectors for three different robustness criteria, averaged over 500 instances for each uncertain problem

We repeat this 50 times, thus obtaining 50 problem instances for fixed p and r. Then we compute the r-restricted robust solution and the robust deviation solution. We calculate the deviation of these solutions from the optimal values of each of the 50 instances. We take the average of these 50 observations for the r-robust solution and the robust deviation solution, respectively. Therefore we obtain two points for our plots for a fixed value of r. Repeating this for values of r ranging from 1 to p, we obtain an entire plot. We observe from our four plots that for small values of r, the r-robust solution is somewhat more robust to such perturbations compared to the robust deviation solution, whereas after a certain r value the two solutions behave identically.
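A short sketch of the extreme-instance generator described above (again our own illustrative code; the paper gives no implementation):

```python
import random

def extreme_instance(l, u, r, rng=random):
    """Set each coefficient to its lower bound with probability r/n and to
    its upper bound with probability 1 - r/n."""
    n = len(l)
    return [l[i] if rng.random() < r / n else u[i] for i in range(n)]
```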

As the above results show that the r-restricted robust solution behaves like the minmax regret solution already for small values of r, in the rest of the second experiment we compare the performances of the r-robust deviation solution (obtained by solving r-RRD) for a fixed value of r, the r-restricted absolute robust solution using the same value of r, and the absolute robust solution of Definition 1. With these three solutions at hand, we generate 500 random instances from the uncertain profit intervals, on which we assume a uniform probability distribution, find the optimal value for each problem instance and compute the difference between the objective function value yielded by each of the three solutions and the optimal value.
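A sketch of this evaluation loop is given below. This is our own illustration: the solver calls producing the three fixed solutions are abstracted away as a dictionary argument, and the exact normalization of the reported percent deviations is our assumption of what the table records.

```python
import random

def evaluate_solutions(solutions, l, u, p, n_instances=500, seed=0):
    """For each sampled instance, compare the profit of each fixed solution with
    the instance optimum; return (l1, l_inf) norms of the relative deviations."""
    rng = random.Random(seed)
    errors = {name: [] for name in solutions}
    for _ in range(n_instances):
        c = [rng.uniform(l[i], u[i]) for i in range(len(l))]        # uniform scenario
        opt = sum(sorted(c, reverse=True)[:p])                      # optimum of P for this instance
        for name, items in solutions.items():
            val = sum(c[i] for i in items)
            errors[name].append((opt - val) / abs(opt))             # relative (fractional) deviation
    return {name: (sum(e), max(e)) for name, e in errors.items()}   # (l1, l_inf) per criterion
```

Here `solutions` would map criterion names to the index sets produced by the corresponding procedures (e.g., AR, r-RAR, r-RRD).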

In Table 1 we compute and report the ℓ1 and ℓ∞ norms of the error vectors of percent deviations from the optimal value of the corresponding instance. Here the values of the objective function coefficients are randomly generated from a uniform distribution over the intervals. The results clearly show the superiority of the r-restricted robust deviation solution over the absolute robust and restricted absolute robust criteria.

References

[1] I. Averbakh. On the complexity of a class of combinatorial optimization problems with uncertainty. Math. Prog., 90:263–272, 2001.

[2] I. Averbakh. Minmax regret linear resource allocation problems. Operations Research Letters, 32:174–180, 2004.

[3] I. Averbakh and V. Lebedev. Interval data minmax regret network optimization problems. Discrete Applied Mathematics, 138(3):289–301, 2003.

[4] I. Averbakh and V. Lebedev. On the complexity of minmax regret linear programming. European Journal of Operational Research, in press.

[5] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski. Adjustable robust solutions to uncertain linear programs. Math. Prog., 99:351–376, 2004.

[6] A. Ben-Tal and A. Nemirovski. Robust solutions to uncertain linear programs. Operations Research Letters, 25:1–13, 1999.

[7] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Math. OR, 23:769–805, 1998.

[8] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications. SIAM-MPS, Philadelphia, 2000.

[9] D. Bertsimas and M. Sim. Robust discrete optimization and network flows. Math. Prog., Ser. B, 98:49–71, 2003.

[10] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52:35–53, 2004.

[11] E. Conde. An improved algorithm for selecting p items with uncertain returns according to the minmax-regret criterion. Math. Prog., in press.

[12] R.L. Daniels and P. Kouvelis. Robust scheduling to hedge against processing time uncertainty in single-stage production. Management Science, 41(2):363–376, 1995.

[13] L. El-Ghaoui and H. Lebret. Robust solutions to least squares problems under uncertain data matrices. SIAM J. Matrix Anal. Appl., 18:1035–1064, 1997.

[14] L. El-Ghaoui, F. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM J. Optim., 9:33–52, 1998.

[15] M. Inuiguchi and M. Sakawa. Minimax regret solution to linear programming problems with an interval objective function. European Journal of Operational Research, 86:526–536, 1995.

[16] P. Kouvelis and G. Yu. Robust Discrete Optimization and Applications. Kluwer Academic Publishers, Boston, 1997.

[17] H.E. Mausser and M. Laguna. A new mixed integer formulation for the maximum regret problem. International Transactions in Operational Research, 5(5):389–403, 1998.

[18] H.E. Mausser and M. Laguna. A heuristic to minimax absolute regret for linear programs with interval objective function coefficients. European Journal of Operational Research, 117:157–174, 1999.

[19] R. Montemanni and L.M. Gambardella. A branch and bound algorithm for the robust spanning tree problem with interval data. European Journal of Operational Research, in press.

[20] R. Montemanni and L.M. Gambardella. An exact algorithm for the robust shortest path problem with interval data. Computers & Operations Research, 31(10):1667–1680, 2004.

[21] R. Montemanni, L.M. Gambardella, and A.V. Donati. A branch and bound algorithm for the robust shortest path problem with interval data. Operations Research Letters, 32(3):225–232, 2004.

[22] A. Schrijver. Theory of Linear and Integer Programming. Wiley, New York, 1987.

[23] H. Yaman, O.E. Karasan, and M.C. Pınar. The robust spanning tree problem with interval data. Operations Research Letters, 29:31–40, 2001.

[24] G. Yu. Min-max optimization of several classical discrete optimization problems. Journal of Optimization Theory and Applications, 98(1):221–242, 1998.

[25] G. Yu and P. Kouvelis. Robust decision problems are hard problems to solve. Technical report, University of Texas, Austin, 1993.
