
On the Nature of Online Computation

By Christian Kudahl

Supervised by

Joan Boyar
Lene Monrad Favrholdt

Department of Mathematics and Computer Science
University of Southern Denmark

[Cover illustration: the construction sketch from Figure 3.1 in Chapter 3.]

30. November, 2016

Acknowledgements

The last three years have been a lot of fun. I would like to thank everyone who helped make this an extremely interesting experience. A special thanks goes to my supervisors, Joan and Lene, for giving me a lot of freedom to pursue my own research ideas. This has been a huge motivation and made the process highly enjoyable. I would like to thank everyone in office 42 for lots of fun, discussions, and funny discussions. Thanks to Juraj Hromkovič for allowing me to stay with the group 'Informationstechnologie und Ausbildung' at ETH, where I spent a semester and met a lot of great people. I am also very grateful to my friends, my family, and my wife Laura for helping me have even more fun when I was not working.

Resumé

An online problem is a problem where an algorithm has to make irrevocable choices without knowing the whole input instance. In the advice complexity model, the algorithm is allowed to learn the value of an arbitrary function of the input; this value is called 'advice'. In most of this thesis, we study the relationship between the length of this advice and the quality of the solution the algorithm produces.

A large part of the thesis concerns the class AOC, which contains maximization (minimization) problems where each request must be accepted or rejected and where the following holds:

• The profit (cost) of a feasible solution is the number of accepted requests.

• A subset (superset) of an optimal solution is still feasible.

Let B_c = log(1 + (c − 1)^(c−1)/c^c).

We give a c-competitive algorithm which works for every problem in this class and reads B_c n + O(log n) advice bits, and for some problems in the class we give a lower bound of B_c n − O(log n) advice bits for being c-competitive (these problems are called AOC-complete). We show that Online Independent Set, Online Dominating Set, Online Vertex Cover, Online Set Cover, Online Disjoint Path Allocation, and Online Cycle Finding are all AOC-complete. We show that the 'Maximum Induced Subgraph With Hereditary Property' problem is almost AOC-complete for every choice of property: a c-competitive algorithm must read at least B_c n − O(log² n) advice bits. For the dual minimization problem, the number of advice bits varies greatly with the choice of property: for some properties a c-competitive algorithm needs B_c n + O(log n) advice bits, while for others an algorithm can be 1-competitive with O(log n) advice bits. Continuing in this direction, we investigate what happens when the problems in AOC are weighted. Again, maximization and minimization problems behave differently. For maximization problems, roughly as many advice bits are needed to be c-competitive as in the unweighted case. Minimization problems, however, need many more advice bits: n − O(log n) advice bits are required for an algorithm to be f(n)-competitive for any function, f.

The most important results in this thesis not related to AOC are:

• For the Online Search Problem, b advice bits are necessary and sufficient for a (M/m)^(1/(2^b+1))-competitive algorithm.

• It is PSPACE-complete to decide the Online Chromatic Number of a graph that is pre-colored.

• The greedy algorithm is online optimal for Online Independent Set if the graph has sufficiently many isolated vertices.

Abstract

An online problem is a problem where an algorithm has to make irrevocable decisions without knowing the whole input instance. In the advice complexity model, the algorithm is allowed to learn the value of any function of the whole input. This value is called 'advice'. In most of this thesis, we study the trade-off between the length of the advice an algorithm receives and the quality of the solution it can output.

A large part of this thesis concerns the class AOC, which contains maximization (minimization) accept/reject problems where the following holds:

• The profit (cost) of a feasible solution is the number of accepted requests.

• A subset (superset) of an optimal solution is still feasible.

Let B_c = log(1 + (c − 1)^(c−1)/c^c).

We show a c-competitive algorithm which works for every problem in this class and reads B_c n + O(log n) advice bits. For some problems in AOC, we give a lower bound of B_c n − O(log n) advice bits for being c-competitive (we call these AOC-complete problems). We show that Online Independent Set, Online Dominating Set, Online Vertex Cover, Online Set Cover, Online Disjoint Path Allocation, and Online Cycle Finding are all AOC-complete. We show that the 'Maximum Induced Subgraph With Hereditary Property' problem is almost complete for AOC, independent of the property: a c-competitive algorithm needs at least B_c n − O(log² n) advice bits. For the dual minimization problem, the number of advice bits varies a lot depending on the property. For some, a c-competitive algorithm needs B_c n + O(log n) advice bits. For others, an algorithm can be 1-competitive with O(log n) advice bits. Continuing in this direction, we investigate what happens when the problems in AOC are weighted. Again, maximization and minimization problems behave quite differently. For the maximization problems, roughly the same number of advice bits is required to be c-competitive as in the unweighted case. The minimization problems, however, require many more advice bits: here, n − O(log n) bits of advice are required to be f(n)-competitive for any function, f.

The main contributions in the thesis, which are not related to AOC, are:

• For the Online Search Problem, b bits of advice are necessary and sufficient for a (M/m)^(1/(2^b+1))-competitive algorithm.

• Deciding the Online Chromatic Number of a graph with a pre-coloring isPSPACE-complete.

• The greedy algorithm is online optimal for Online Independent Set whenthe graph has a sufficient number of isolated vertices.


Contents

1 Preface

2 Introduction
  2.1 Measures
    2.1.1 Competitive Analysis
    2.1.2 On-line Competitive Analysis
    2.1.3 Relative Worst Order Ratio
    2.1.4 Bijective Analysis and Average Analysis
  2.2 Computational Complexity

I Online Algorithms

3 Deciding the On-line Chromatic Number of a Graph with Pre-Coloring is PSPACE-Complete
  3.1 Introduction
  3.2 Related work
  3.3 Preliminaries
  3.4 PSPACE Completeness
  3.5 Closing remarks

4 Adding Isolated Vertices Makes some Greedy Online Algorithms Optimal
  4.1 Introduction
  4.2 Algorithms and Preliminaries
  4.3 Non-optimality of Greedy Algorithms
  4.4 Optimality of Greedy Algorithms on Freckle Graphs
  4.5 Adding Isolated Elements in Other Problems
  4.6 Implications for Worst Case Performance Measures
  4.7 A Subclass of Freckle Graphs Where Greedy Is Not Optimal (Under Some Non-Worst Case Measures)
  4.8 Complexity of Determining the Online Independence Number, Vertex Cover Number, and Domination Number
  4.9 Concluding Remarks

II Online Algorithms with Advice

5 Online Algorithms with Advice: A Survey
  5.1 Introduction
  5.2 Advice Models
  5.3 Relationship to Semi-Online Algorithms
    5.3.1 Assuming Advance Knowledge
    5.3.2 Parallel Solutions
  5.4 Advice vs. Randomization
  5.5 Algorithmic Techniques
  5.6 Lower Bound Techniques
  5.7 String Guessing and Complexity Classes
    5.7.1 String Guessing
    5.7.2 Asymmetric String Guessing
    5.7.3 Complexity Classes
  5.8 K-Server, Paging, and Friends
  5.9 Bin Packing, Machine Scheduling, and Knapsack
  5.10 Graph Coloring
    5.10.1 Vertex Coloring
    5.10.2 Edge Coloring and Variants of Vertex Coloring
  5.11 Graph Exploration
  5.12 Open Problems
  5.13 Appendix: Problems Studied in Advice Complexity Models

6 Advice Complexity of the Online Search Problem
  6.1 Introduction
    6.1.1 Competitive Analysis and Advice Complexity
  6.2 Related Work
  6.3 Advice for the Online Search Problem
    6.3.1 Advice for Optimality
    6.3.2 Advice for c-Competitiveness
  6.4 Advice and Randomization
  6.5 Conclusion and Future Work

7 The Advice Complexity of a Class of Hard Online Problems
  7.1 Introduction
    7.1.1 Advice Complexity
    7.1.2 String guessing
    7.1.3 Problems
    7.1.4 Preliminaries
    7.1.5 Our contribution
    7.1.6 Related work
  7.2 Asymmetric String Guessing
    7.2.1 The Minimization Version
    7.2.2 The Maximization Version
  7.3 Advice Complexity of ASG
    7.3.1 Using Covering Designs
    7.3.2 Advice Complexity of minASG
    7.3.3 Advice Complexity of maxASG
    7.3.4 Advice Complexity of ASG when c = Ω(n/ log n)
  7.4 The Complexity Class AOC
    7.4.1 AOC-complete Minimization Problems
    7.4.2 AOC-complete Maximization Problems
    7.4.3 AOC Problems which are not AOC-complete
  7.5 Conclusion and Open Problems
  7.6 Appendix: Approximation of the Advice Complexity Bounds
    7.6.1 Approximating the Function B(n, c)
    7.6.2 The Binary Entropy Function
    7.6.3 Binomial Coefficients
    7.6.4 Approximating the Advice Complexity Bounds for minASG
    7.6.5 Approximating the Advice Complexity Bounds for maxASG

8 Advice Complexity of the Online Induced Subgraph Problem
  8.1 Introduction
  8.2 Preliminaries
  8.3 MaxPi and MinPi without Preemption
  8.4 MaxPi with Preemption – Large Competitive Ratios
  8.5 MaxPi with Preemption – Small Competitive Ratios
  8.6 Closing Remarks

9 Weighted Online Problems with Advice
  9.1 Introduction
  9.2 Preliminaries
  9.3 Weighted Versions of AOC-Complete Minimization Problems
  9.4 Exponential Sparsification
  9.5 Matching and Other Non-Complete AOC Problems
    9.5.1 Lower bounds
  9.6 Scheduling with Sublinear Advice

CHAPTER 1

Preface

In this chapter, I give a brief overview of each chapter of this thesis. For the chapters containing papers, I summarize their results and in some places give more informal comments. The thesis is split into two parts: Part I contains results on online algorithms, and Part II contains results on online algorithms with advice.

The problems considered in this thesis all lie within the scope of online algorithms, but none of the results are upper or lower bounds on the competitive ratio of classic online problems in the standard model. The title 'On the Nature of Online Computation' refers to the fact that the papers explore different computational models and quality measures, in the hope of learning more about the nature of online computation. Part I contains two papers: one about the computational complexity of applying a certain measure, and one with a measure-independent result about what happens when isolated vertices are added in online graph problems. The papers in Part II all concern advice complexity and competitive analysis. One is a survey, one is a 'typical' advice paper about the trade-off between advice bits and competitive ratio for a specific problem, and the last three concern classes of problems in advice complexity. Figure 1.1 gives an overview of where each paper lies within the general areas of Advice Complexity, Computational Complexity, Competitive Analysis, and measures other than the competitive ratio. Each number refers to the paper contained in the corresponding chapter of this thesis.

Chapter 2 is an introduction to online algorithms and the areas shown in Figure 1.1. Its purpose is to give some background information on the fields the papers concern.

[Figure 1.1: Relationship between papers and areas (Complexity Theory, Advice Complexity, Competitive Analysis, and Other Measures than Competitive Ratio). A number shows the chapter where the corresponding paper is found.]

Chapter 3 contains the paper 'Deciding the On-line Chromatic Number of a Graph with Pre-Coloring is PSPACE-Complete' [116]. I am the sole author, though my supervisors Joan Boyar and Lene M. Favrholdt gave me feedback and suggestions during the process. It is based on ideas from my Master's thesis [113], but additional results are obtained. It was published at CIAC 2015 in Paris. In this paper, Online Graph Coloring is studied from a computational complexity point of view. In Online Graph Coloring, a graph is revealed vertex by vertex. When a vertex is revealed, its edges to previously revealed vertices are revealed along with it. At this point, an algorithm has to give the vertex a color different from the colors of its neighbors. The goal is to use as few colors as possible. The Online Chromatic Number of a given graph, χo(G), is the smallest number of colors needed to color G online when the vertices are presented in an adversarial order. In the paper, I show that it is PSPACE-complete to decide whether χo(G) ≤ k for a given pre-colored graph, G, and integer, k. In my Master's thesis, NP-completeness was shown for the version without pre-coloring. In the paper, I conjecture that the problem remains PSPACE-complete even without the pre-coloring; this was later proven correct by Martin Böhm and Pavel Veselý [33]. The area of this paper lies somewhere between online algorithms and computational complexity, which I consider somewhat unusual.
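To make the online coloring model concrete, here is a minimal sketch of one natural strategy, First-Fit, which gives each arriving vertex the smallest color not present on its already-revealed neighbors. This is an illustration of the model only, not the algorithm analyzed in the paper.

```python
def first_fit_coloring(arrivals):
    """Online First-Fit coloring sketch.

    arrivals: one list per revealed vertex; the i-th list holds the
    indices (all < i) of previously revealed neighbors of vertex i.
    Returns the color (a positive integer) assigned to each vertex.
    """
    colors = []
    for neighbors in arrivals:
        used = {colors[j] for j in neighbors}
        c = 1
        while c in used:  # smallest color not on a revealed neighbor
            c += 1
        colors.append(c)
    return colors

# The path P4 = a-b-c-d revealed in the order a, d, b, c: the two
# endpoints arrive first as isolated vertices and get the same color,
# which forces a third color later.
print(first_fit_coloring([[], [], [0], [1, 2]]))  # [1, 1, 2, 3]
```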

Chapter 4 contains the paper 'Adding Isolated Vertices Makes some Greedy Online Algorithms Optimal' [46]. It was published at IWOCA 2015 in Verona and written with Joan Boyar. The paper mainly concerns results on Online Independent Set. In Online Independent Set, the vertices of a graph are revealed online, similarly to Online Graph Coloring. After each vertex is revealed, an algorithm must decide whether to accept or reject it. It may only accept a vertex if none of its neighbors has been accepted. The goal is to accept as many vertices as possible; a minimal sketch of the greedy strategy follows this paragraph. In my Master's thesis, I showed that the greedy algorithm (the algorithm which always accepts a vertex if possible) is online optimal when at least half the vertices of the graph have no neighbors. Online optimal means that it performs as well as any other online algorithm against a worst-case ordering of the vertices. In this paper, the result is strengthened: the class of Freckle Graphs is defined, and it is shown that the greedy algorithm is online optimal on any Freckle Graph. The class of Freckle Graphs contains the graphs where at least half the vertices are isolated, but it contains many other graphs as well. It is left as an open question whether the following holds: if a graph is not a Freckle Graph, there is a better algorithm than the greedy one. I do believe this to be the case, but I have been unable to prove it. We also discuss what happens when other quality measures are considered, ones which do not only consider the worst case. For some measures and some Freckle Graphs, it turns out that there exists a better algorithm than the greedy one. The Online Independence Number can be defined analogously to the Online Chromatic Number. We show that it is NP-hard and in PSPACE to decide whether Io(G) ≥ k (an equivalent result is shown in my Master's thesis). Most of the results in the paper are shown to also hold for Online Vertex Cover and Online Dominating Set (for the latter, sometimes in slightly modified form).
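As referenced above, here is a minimal sketch of the greedy strategy (my own illustration, with vertices encoded by the indices of their previously revealed neighbors):

```python
def greedy_online_independent_set(arrivals):
    """Greedy online independent set sketch.

    arrivals: one list per revealed vertex; the i-th list holds the
    indices (all < i) of previously revealed neighbors of vertex i.
    A vertex is accepted iff none of its revealed neighbors was
    accepted; every decision is irrevocable.
    """
    accepted = set()
    for i, neighbors in enumerate(arrivals):
        if not any(j in accepted for j in neighbors):
            accepted.add(i)
    return accepted

# The star K_{1,3}: center first, greedy is stuck with a single vertex;
# leaves first, greedy collects the whole optimal solution.
print(greedy_online_independent_set([[], [0], [0], [0]]))      # {0}
print(greedy_online_independent_set([[], [], [], [0, 1, 2]]))  # {0, 1, 2}
```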

The remaining chapters concern online algorithms with advice. In this model, the algorithm is allowed to receive any information about the entire input before making decisions. The quality of the algorithm is measured both by how well it performs and by how many advice bits it receives about the input (fewer is considered better). Advice can generally be applied to any online problem.

Chapter 5 contains the paper 'Online Algorithms with Advice: A Survey' [39]. It was written with Joan Boyar, Lene M. Favrholdt, Kim S. Larsen, and Jesper W. Mikkelsen and published in SIGACT News in 2016. As the title suggests, it is a survey of online algorithms with advice. It describes the historical development of the area and the different models used, and presents a wide range of advice complexity results at various levels of detail. The problems covered include paging, knapsack, k-server, list update, bin packing, machine scheduling, graph coloring, and graph exploration. Furthermore, the relationship between advice and randomization is discussed, and some techniques for proving upper and lower bounds are presented. Since this paper is a survey, it does not contain any original research, but I do think it serves as a strong introduction to the area of online algorithms with advice.

Chapter 6 contains the paper 'Advice Complexity of the Online Search Problem' [52]. It was published at IWOCA 2016 in Helsinki and written with Jhoirene Clemente, Dennis Komm, and Juraj Hromkovič. The work was started during my stay at ETH in the spring of 2015, where I shared an office with Jhoirene Clemente. The paper concerns the online search problem, where a seller wants to sell an item. Each day, the seller is offered a price between a known fixed lower bound, m, and upper bound, M. The seller can choose to accept this price or wait in the hope that a better one is offered later. The number of days, n, may be known or unknown. We show that with b < log n bits of advice, it is possible for an algorithm to be (M/m)^(1/(2^b+1))-competitive. This holds even if n is unknown. The algorithm works by partitioning the interval [m, M] in a balanced way and uses the advice to identify the part of the partition in which the best price resides. The result is complemented with a matching lower bound: an algorithm which reads b < log n bits of advice cannot be better than (M/m)^(1/(2^b+1))-competitive. This holds even if n is known.
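To illustrate the flavor of the upper bound, here is a minimal sketch of a threshold strategy built on a geometric partition of [m, M]. The helper names and the advice encoding below are my own illustration and may differ from the paper's exact construction; ρ = (M/m)^(1/(2^b+1)) is the target ratio, and the advice is assumed to index a threshold lying below the best price.

```python
import math

def search_with_advice(prices, m, M, b, advice):
    """Online search with b advice bits: a threshold sketch.

    advice in {0, ..., 2**b - 1} selects a reservation price
    m * rho**(advice + 1); the first offer reaching it is accepted,
    and the last offer must be accepted if none did.
    """
    rho = (M / m) ** (1.0 / (2**b + 1))
    threshold = m * rho ** (advice + 1)
    for day, p in enumerate(prices):
        if p >= threshold or day == len(prices) - 1:
            return p

prices, m, M, b = [120, 300, 180, 90], 80, 1280, 2
rho = (M / m) ** (1 / (2**b + 1))
best = max(prices)
# Hypothetical oracle: point at the geometric interval holding the best price.
advice = min(max(int(math.log(best / m, rho)) - 1, 0), 2**b - 1)
got = search_with_advice(prices, m, M, b, advice)
print(got, best / got <= rho + 1e-9)  # 300 True
```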

Chapter 7 contains the paper 'The Advice Complexity of a Class of Hard Online Problems' [40]. It was published at STACS 2015 in Munich and written with Joan Boyar, Lene M. Favrholdt, and Jesper W. Mikkelsen. In the paper, we consider a quite natural class of problems, AOC, which contains all online problems of the following type:

• Each request can be accepted or rejected.

• The goal is to maximize (minimize) the number of requests accepted, subject to some constraint.

• A subset (superset) of an optimal solution is still a feasible solution.

The definition fits many natural online problems, including independent set, vertex cover, cycle finding, set cover, and dominating set. We describe a c-competitive algorithm which works for every problem in the class and uses B_c n + O(log n) bits of advice, where

B_c = log(1 + (c − 1)^(c−1)/c^c).

The algorithm is based on covering designs, and it is ensured that the solution produced by the algorithm 'covers' the optimal solution (in the minimization version). Furthermore, we show that for the problems mentioned above, no algorithm can be c-competitive if it reads fewer than B_c n − O(log n) bits of advice. Problems in AOC where that much advice is needed are called AOC-complete. This paper is probably the one in this list which has had (and will have) the largest impact in the scientific community. It is also the one I have spent the most time working on.
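For concreteness, B_c is easy to evaluate numerically (log denotes the binary logarithm here, the usual convention when counting advice bits):

```python
import math

def B(c):
    """B_c = log2(1 + (c - 1)**(c - 1) / c**c)."""
    return math.log2(1 + (c - 1) ** (c - 1) / c ** c)

# c = 1 gives B_1 = 1: optimality costs about one advice bit per request.
# Larger c shrinks the per-request cost quickly.
print(round(B(1.0), 4))  # 1.0
print(round(B(2.0), 4))  # 0.3219, so ~0.32n bits suffice to be 2-competitive
print(round(B(4.0), 4))  # 0.1446
```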

Chapter 8 contains the paper 'Advice Complexity of the Online Induced Subgraph Problem' [103]. It was published at MFCS 2016 in Kraków and written with Dennis Komm, Rastislav Královič, and Richard Královič. The work was started during my stay at ETH in the spring of 2015. The paper concerns the online induced subgraph problem. We let π be a graph property which, if it holds for a graph, G, also holds for any induced subgraph of G. This could, for example, be that the graph is planar or does not contain any cycles. We consider the problem of accepting as large a graph as possible with property π when the graph is presented online. We show that if π is non-trivial, any c-competitive algorithm needs at least B_c n − O(log² n) advice bits, independent of the choice of π (B_c is defined as in the previous paragraph). Thus, these problems are 'almost AOC-complete'. It would be interesting to find out whether these problems are AOC-complete or whether there is a small gap. This can be seen as a further step in the direction taken in [40], and it is shown using results from that paper combined with results from Ramsey theory. The paper also contains some lower bound results for the variant where the algorithm is allowed to preempt vertices it has previously accepted. Interestingly, we did not find any better upper bounds than in the variant without preemption, so we do not know whether allowing preemption gives the algorithm any power (meaning that it could use fewer advice bits or perform better than an algorithm without preemption).

Chapter 9 contains the paper 'Weighted Online Problems with Advice' [41]. It was published at IWOCA 2016 in Helsinki and written with Joan Boyar, Lene M. Favrholdt, and Jesper W. Mikkelsen. In the paper, we consider weighted versions of the problems in AOC. The objective is to accept as much total weight as possible in the maximization version and as little total weight as possible in the minimization version. Surprisingly, these two subclasses of problems, which behaved similarly in the unweighted case, behave drastically differently in the weighted case. For maximization problems, there exists a (1 + ε)c-competitive algorithm which reads B_c n + O((log² n)/ε) bits of advice. For minimization problems, the story is different: for all known AOC-complete minimization problems, n − O(log n) bits of advice are required to be f(n)-competitive for any function, f. The results for maximization are obtained by sorting the different weights into families of weights which are not too different; within each family, the covering design scheme is applied. A similar technique is applied to scheduling to obtain good algorithms with sublinear advice. Like the previous two papers, this paper concerns a class of problems rather than a single problem. I think the three papers show that interesting problem-independent advice complexity results can be obtained. To me, this direction seems very promising in an area where much effort is spent on individual problems.
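The weight-grouping idea can be sketched as follows; the bucketing rule is my own illustration of 'families of weights which are not too different', not necessarily the paper's exact construction:

```python
import math
from collections import defaultdict

def weight_families(weights, eps):
    """Group weights into families differing by at most a factor (1 + eps).

    Family index: floor(log_{1+eps} w). Treating all weights in a family
    as equal costs at most a (1 + eps) factor, after which the unweighted
    covering design scheme can be applied within each family.
    """
    families = defaultdict(list)
    for w in weights:
        families[math.floor(math.log(w, 1 + eps))].append(w)
    return dict(families)

print(weight_families([1.0, 1.05, 2.0, 2.1, 40.0], eps=0.1))
# {0: [1.0, 1.05], 7: [2.0, 2.1], 38: [40.0]}
```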


CHAPTER 2

Introduction

In online problems, an algorithm is faced with making decisions without knowing the future. In contrast to traditional algorithms, the algorithm is not allowed to inspect the whole input before making decisions. Instead, it is allowed to view only a tiny part of the input and is then required to make some irrevocable decision before seeing more. As an example, consider bin packing. In this problem, an algorithm has a number of items which it tries to distribute into the smallest possible number of unit-sized bins. In the traditional (offline) version of the problem, the algorithm is allowed to inspect all the items carefully before deciding where to place the first one. In online bin packing, the algorithm is only allowed to see one item at a time: before learning the size of the next item, the current item has to be placed in a bin.

Many online problems can be formulated in the following way: for a given input X = x1, x2, . . . , xn, an online algorithm, A, produces an output, Y = y1, y2, . . . , yn, where yi is allowed to depend only on x1, . . . , xi. Finally, there is a scoring function which maps Y to a score, and the goal of the algorithm is to minimize or maximize the value of this function. To find out whether a given algorithm is good, a measure is needed; several measures are presented in Section 2.1.
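In this request/answer view, the bin packing example reads as follows: x_i is an item size, y_i is the index of the bin it goes into, and the score is the number of bins used. A minimal First-Fit sketch:

```python
def first_fit_bin_packing(items):
    """Online bin packing sketch: y_i depends only on x_1, ..., x_i.

    Each item (a size in (0, 1]) goes irrevocably into the first open
    bin with room, or into a fresh unit-sized bin if none fits.
    """
    capacities = []  # remaining room in each open bin
    decisions = []   # y_i: the bin index chosen for item i
    for x in items:
        for j, cap in enumerate(capacities):
            if x <= cap:
                capacities[j] = cap - x
                decisions.append(j)
                break
        else:  # no open bin fits: open a new one
            capacities.append(1.0 - x)
            decisions.append(len(capacities) - 1)
    return decisions, len(capacities)

print(first_fit_bin_packing([0.6, 0.5, 0.4, 0.5]))  # ([0, 1, 0, 1], 2)
```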

Online problems have been analyzed in several areas, including packing problems, paging, scheduling, and various graph problems. Many real-life problems are online, which has likely served as inspiration for some of the problems in the area.

Online algorithms are by nature greedy algorithms, which make decisions based on some local property. Such algorithms can be desirable because they are easy to implement and often have good running times, and it is interesting to consider their limitations: how can one prove that a problem cannot be solved by a greedy algorithm? Greedy algorithms can be formalized as priority algorithms: algorithms which first order the input requests based on some property and then process each request without considering which requests will arrive later. Online algorithms are priority algorithms which are not allowed to order their input requests. Instead, an adversarial order is usually considered (that is, the worst possible order for the algorithm). In this case, it is preferable that the algorithm makes choices which turn out not to be too bad in every possible future, in contrast to choices which are great in most future scenarios but terrible in a few; such choices are generally bad in this setting.

2.1 Measures

There are many measures for the quality of an online algorithm. In this section, we present the measures used in this thesis; one or more of them appear in each paper. For a survey on measures, see [63]. For a comparison, see [43].

2.1.1 Competitive Analysis

The most widely used measure for online algorithms is Competitive Analysis [139]. In competitive analysis, the performance of an algorithm is compared to that of an optimal offline algorithm, OPT, which is allowed to read the entire input before making any decisions. More specifically, for maximization problems, an algorithm is said to be c-competitive if, for every input sequence, I, the following holds:

c · ALG(I) ≥ OPT(I),

where ALG(I) is the profit of the algorithm on sequence I and OPT(I) is the profit of the optimal offline algorithm on sequence I. Smaller competitive ratios are considered better, and a 1-competitive algorithm is said to be optimal. Sometimes, an additive constant, b, appears on the right-hand side of this inequality (and when it is omitted, this type of analysis is sometimes called 'strict competitive analysis'). This is usually done to allow an algorithm to be, for example, 2-competitive if it is 2-competitive on all but a constant number of sequences. In advice complexity, this is generally not necessary, since a small number of advice bits can be used to warn the algorithm of a small number of hard inputs.

For minimization problems, an algorithm is said to be c-competitive if, for every input sequence, I, the following holds:

ALG(I) ≤ c · OPT(I).

Again, smaller competitive ratios are considered better, and 1-competitive means optimal.
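On any finite set of test sequences, these inequalities are easy to check mechanically; such a check can refute, but never prove, c-competitiveness:

```python
def is_c_competitive(alg_values, opt_values, c, maximization=True):
    """Check the strict c-competitiveness inequalities pointwise.

    alg_values[i] and opt_values[i] are ALG(I) and OPT(I) on sequence i.
    Maximization: c * ALG(I) >= OPT(I); minimization: ALG(I) <= c * OPT(I).
    """
    if maximization:
        return all(c * a >= o for a, o in zip(alg_values, opt_values))
    return all(a <= c * o for a, o in zip(alg_values, opt_values))

# Two orderings of the star K_{1,3} for online independent set:
# center-first gives ALG = 1, leaves-first gives ALG = 3; OPT = 3 on both.
print(is_c_competitive([1, 3], [3, 3], c=3))  # True
print(is_c_competitive([1, 3], [3, 3], c=2))  # False
```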

2.1.2 On-line Competitive Analysis

A measure related to competitive analysis is On-line Competitive Analysis [81]. The definition is the same, except that OPT(I) is replaced by OPTon(Iw). OPTon is the optimal algorithm which is allowed to know all the requests it will receive, but not their ordering. For graph problems, this corresponds to knowing what the graph will end up looking like, but not the order in which it is revealed. Iw denotes the worst ordering of I for this algorithm. Note that this concept was designed for graph problems and may not be well defined for all online problems.


2.1.3 Relative Worst Order Ratio

Another measure for comparing the quality of online algorithms is the Relative Worst Order Ratio [38]. In this measure, two algorithms are compared directly, rather than each being compared first to an optimal offline (or online) algorithm. For a minimization problem, the definition is the following: for a given algorithm, A, and input, I, we let

Aw(I) = max_σ A(σ(I)),

where σ ranges over all permutations of the requests in I. For algorithms A and B, we let

c_l(A, B) = sup{ c | ∃b ∀I : Aw(I) + b ≥ c · Bw(I) },
c_u(A, B) = inf{ c | ∃b ∀I : Aw(I) ≤ c · Bw(I) + b },

and define

WR_{A,B} = c_u(A, B) if c_l(A, B) ≥ 1,
WR_{A,B} = c_l(A, B) if c_u(A, B) ≤ 1.

Here, WR_{A,B} is the Worst Order Ratio of A and B.

• If WR_{A,B} < 1, then the algorithms are comparable in A's favor.

• If WR_{A,B} > 1, then the algorithms are comparable in B's favor.

• If WR_{A,B} = 1, we say that A and B are equivalent.

For maximization problems, the definitions are the same, except that the algorithms are comparable in A's favor if WR_{A,B} > 1 and in B's favor if WR_{A,B} < 1.
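On a fixed finite input, the worst-order value Aw(I) can at least be computed by brute force over all orderings (exponential in the input length, so only feasible for tiny instances):

```python
from itertools import permutations

def worst_order_value(alg, requests, maximization=False):
    """Aw(I): the value of alg on its worst permutation of the input.

    alg maps a request sequence to a cost (minimization) or profit
    (maximization); the worst order maximizes cost or minimizes profit.
    """
    values = (alg(list(p)) for p in permutations(requests))
    return min(values) if maximization else max(values)

# Toy minimization 'algorithm' whose cost is the number of ascents in
# the order it receives the requests (purely illustrative).
cost = lambda seq: sum(a < b for a, b in zip(seq, seq[1:]))
print(worst_order_value(cost, [3, 1, 2]))  # the sorted order costs 2
```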

2.1.4 Bijective Analysis and Average Analysis

In Bijective Analysis and Average Analysis [7], two algorithms are also compared directly. Let In be the set of all inputs of length n. For a minimization problem, algorithm A is said to be no worse than algorithm B on inputs of length n according to Bijective Analysis if the following holds: there exists a bijection, f : In → In, such that A(I) ≤ B(f(I)) for all I ∈ In. Algorithm A is said to be no worse than algorithm B if A is no worse on inputs of length n for all n ≥ n0, for some n0. If A is no worse than B, and it is not the case that B is no worse than A, we say that A is better than B.

The definition of Average Analysis is the same, with the following difference: A is said to be no worse than B on inputs of length n if

Σ_{I∈In} A(I) ≤ Σ_{I∈In} B(I).

For maximization problems, the inequalities are flipped in these definitions.
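For a finite input class, whether a suitable bijection exists reduces to comparing sorted value lists: a bijection f with A(I) ≤ B(f(I)) for all I exists exactly when, after sorting both multisets of values, each A-value is at most the corresponding B-value (match smallest to smallest; a standard matching argument). A minimal sketch for the minimization case:

```python
def bijectively_no_worse(a_values, b_values):
    """Bijective Analysis check (minimization): A no worse than B on I_n.

    a_values and b_values list the costs of A and B over all inputs of
    length n; sorting both and comparing pointwise decides whether the
    required bijection exists.
    """
    return all(a <= b for a, b in zip(sorted(a_values), sorted(b_values)))

print(bijectively_no_worse([2, 3, 3], [3, 2, 4]))  # True
print(bijectively_no_worse([2, 5, 3], [3, 2, 4]))  # False: the 5 beats the 4
```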


2.2 Computational Complexity

In computational complexity, one goal is to classify problems. A problem is considered more difficult if any algorithm solving it needs a certain amount of some resource, typically time or space. Based on these needs, the problems are grouped into classes, and the classes are related to each other. For an introduction to computational complexity, see [9]. We sometimes say that a problem belongs to a complexity class, but more formally, the language consisting of all yes-instances of that problem is in the complexity class.

The class P consists of every language which has a Turing machine deciding it in time polynomial in the length of the input. In NP, the Turing machine is allowed to be non-deterministic. In PSPACE, the Turing machine is allowed to use any amount of time, as long as it uses space polynomial in the length of the input.

Sometimes, other resources are measured. In communication complexity, the input is split between two (or more) parties, and the goal is to find out how much communication between these parties is needed to compute some function. For a survey of communication complexity, see [117].

In advice complexity, one considers how much better an online algorithm can perform when more information about the input is available. The main part of this thesis concerns advice complexity; it is introduced in Chapter 5.


Part I

Online Algorithms


CHAPTER 3

Deciding the On-line Chromatic Number of a Graph with Pre-Coloring is PSPACE-Complete

Christian Kudahl
[email protected]

Department of Mathematics and Computer Science
University of Southern Denmark

Abstract

In an on-line coloring, the vertices of a graph are revealed one by one. An algorithm assigns a color to each vertex after it is revealed. When a vertex is revealed, it is also revealed which of the previous vertices it is adjacent to. The on-line chromatic number of a graph, G, is the smallest number of colors an algorithm will need when on-line-coloring G. The algorithm may know G, but not the order in which the vertices are revealed. The problem of determining if the on-line chromatic number of a graph is less than or equal to k, given a pre-coloring, is shown to be PSPACE-complete.

3.1 Introduction

In the on-line graph coloring problem, the vertices of a graph are revealed one by one to an algorithm. When a vertex is revealed, the adversary reveals which of the already revealed vertices it is adjacent to. The algorithm then gives the vertex a color, which has to be different from all colors found on neighboring vertices. The goal is to use as few colors as possible.

We let χ(G) denote the chromatic number of G. This is the number of colors that an optimal off-line algorithm needs to color G. Similarly, we let χO(G) denote the on-line chromatic number of G. This is the smallest number of colors for which the best on-line algorithm can guarantee that, for any ordering of the vertices, it colors G using at most χO(G) colors. This algorithm may know the graph in advance, but not the vertex ordering. As an example, χO(P4) = 3, since if two isolated vertices are presented first, the algorithm is unable to decide whether it is optimal to give them the same or different colors. Clearly, χ(P4) = 2.

¹Supported in part by the Villum Foundation and the Danish Council for Independent Research, Natural Sciences.

The traditional measure of performance of an on-line algorithm is competitive analysis [139]. Here, the performance of an algorithm is compared to the performance of an optimal off-line algorithm. In the on-line graph coloring problem, an algorithm A is said to be c-competitive if, for any graph G and any ordering of the vertices of G, the number of colors used by A is at most c times the chromatic number of G. For the on-line graph coloring problem, no c-competitive algorithm exists for any c, even if the class of graphs is restricted to trees [83]. This makes the measure less desirable to use in this context.

As an alternative, on-line competitive analysis was introduced for on-line graph coloring [81]. The definition is similar to competitive analysis, but instead of comparing with the best off-line algorithm, one compares with the best on-line algorithm. In the case of on-line graph coloring, an algorithm is on-line c-competitive if, for any graph G and any ordering of the vertices, the number of colors it uses is at most c times the on-line chromatic number.

With the definition of on-line competitive analysis, a natural problem arose: how computationally hard is it, given a graph G and a k ∈ N, to decide whether χO(G) ≤ k? In [118], it was shown that one can decide in polynomial time whether χO(G) ≤ 3 when G is triangle-free or connected. The authors conjectured that it is NP-complete to decide whether χO(G) ≤ 4. In this paper, we consider the generalization of the problem where a part of the graph has already been presented and colored (we refer to this as the pre-coloring). We show that it is PSPACE-complete, given a pre-colored graph and a k, to decide whether the uncolored part can be colored such that at most k colors are used in total.

3.2 Related work

Studying pre-coloring extensions is not new. In [110], the author studies the pre-coloring extension problem in an offline setting and shows it to be NP-complete even on bipartite graphs and with three colors. In [124], it is shown to be NP-hard on unit interval graphs. For a survey on offline pre-coloring extensions, see [145]. It is an interesting open question how pre-coloring affects the on-line graph coloring problem treated here (see the closing remarks).

In [31], the author shows another coloring game to be PSPACE-complete. In that version, a graph is known, and two players take turns coloring the vertices in a fixed order with a fixed set of colors; the player who is first unable to color a vertex loses the game. In some sense, both players take the role of the painter, and the drawer's strategy is given beforehand.

It was recently shown that the type of online coloring analyzed in this paper can be useful when offline-coloring certain classes of geometrical graphs [111]. This gives another motivation for studying the complexity of finding the online chromatic number.

3.3 Preliminaries

On-line graph coloring can be seen as a game. The two players are known as the drawer and the painter. The two players agree on a graph G = (V(G), E(G)) and a k ∈ N. A move for the drawer is presenting a vertex (we sometimes say it requests a vertex). The drawer does not specify which vertex of G the presented vertex corresponds to, but it does specify which of the already presented vertices the new vertex is adjacent to. The presented graph must always be an induced subgraph of G.

A move for the painter is assigning a color from {1, . . . , k} to the newly presented vertex. The color has to be different from the colors that he previously assigned to its neighbors. If the painter manages to color the entire graph, he wins. If he is ever unable to color a vertex (because all k colors are already found on neighbors of this vertex), he loses.

When analyzing games, one is often interested in finding out which player has a winning strategy. A game is said to be weakly solved if it is known which player has a winning strategy from the initial position, and strongly solved if it is known which player has a winning strategy from any given position. This definition is the motivation behind the assumption of a pre-coloring: we prove that to strongly solve the game for a given graph, one must, in some cases, solve positions from which it is PSPACE-hard to determine whether the drawer or the painter has a win. Note that it may not be PSPACE-hard to weakly solve the game; see the closing remarks.

We consider the state of the game after an even number of moves. This means that the game has not started yet or the painter has just assigned a color to a vertex. Such a state can be denoted by (G, k, G′, f). Here, G is the graph they are playing on, and k ∈ N is the number of colors the painter is allowed to use. Furthermore, G′ is the induced subgraph that has already been presented and colored, and f : V(G′) → {1, . . . , k} is a function that describes what colors have been assigned to the vertices of G′. Note that the painter does not get information on how the vertices of G′ map into G (in fact, the drawer does not have to decide this yet).

We treat the following problem: given a game state (G, k, G′, f), does the painter have a winning strategy from this state? We show that this problem is PSPACE-complete. The problem is equivalent to deciding whether the on-line chromatic number of G is at most k, given that the vertices of an induced subgraph isomorphic to G′ have already been given the colors dictated by f. This is also known as a pre-colored graph.

Note that the proof here also works in the model where the painter gets information on how the vertices in G′ are mapped to those in G; in fact, a slightly simpler construction would suffice in that case. The model where this information is not available seems more reasonable, though, since the pre-coloring is used to represent a state of the game in which this information is indeed not available.

We show a reduction from the totally quantified boolean formula (TQBF) problem. In this problem, we are given a boolean formula

φ = ∀x1∃x2 · · · ∃xn F(x1, x2, . . . , xn),

and we want to decide whether φ is true or false. This problem is known to be PSPACE-complete even if F is assumed to be in conjunctive normal form with 3 literals in each clause [143]. Since the complement of any language in PSPACE is also in PSPACE, the problem is also PSPACE-complete if F is in disjunctive normal form with 3 literals in each term (3DNF). This is the form we use here. We let ti denote the i'th term. For convenience, we assume that the number of variables is even and that the first quantifier is ∀, followed by alternating quantifiers. This is possible since any TQBF in 3DNF can be transformed into such a formula by adding new variables.

One such formula could for example be:

∀x1∃x2∀x3∃x4 (x1 ∧ x2 ∧ x4) ∨ (x1 ∧ x2 ∧ x3) ∨ (x1 ∧ x2 ∧ x3)

This formula has four variables, x1, x2, x3, and x4, and three terms, t1, t2, and t3. The term t1 contains x1, x2, and x4 (we also say that they are in the first term).

3.4 PSPACE Completeness

In this section, we show that it is PSPACE-complete to decide whether the painter has a winning strategy from a game state (G, k, G′, f). First, we note that the problem is in PSPACE.

Observation 1. The problem of deciding whether the drawer has a winning strategy from state (G, k, G′, f) is in PSPACE.

To see this, note that the game always ends within at most 2|V(G)| moves. We need to show that, from each state, the possible next moves can be enumerated in polynomial space. If the painter is about to move, his possible moves are the colors 1, . . . , k; these can be enumerated in polynomial space, ordered by the value of the color. If the drawer is about to move, his move consists of presenting a vertex that is adjacent to some subset of the already presented vertices. If v vertices have been presented, there are up to 2^v possible moves for him. He can enumerate these, keeping only those where the resulting graph is an induced subgraph of G; checking this is an NP-complete problem, but it can be solved in polynomial space.

Using this, we do a post-order search of the game tree. At each node, we record who has a winning strategy from that state. For the leaves, we record who has won the game (determined by checking whether all vertices have been colored). After traversing the tree, we can read off at the root whether the painter or the drawer has a winning strategy. This shows that the problem is in PSPACE.
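The post-order search can be phrased as a generic two-player game-tree search; a schematic sketch in which the problem-specific parts (move generation and the win condition) are passed in as functions, since spelling them out for the coloring game would take considerably more code:

```python
def painter_wins(state, painter_to_move, moves, is_final, painter_won):
    """Post-order search of the game tree, as in Observation 1.

    moves(state, painter_to_move) yields successor states, is_final(state)
    detects finished games, and painter_won(state) checks whether every
    vertex got colored. The space used is proportional to the recursion
    depth, i.e. at most 2|V(G)| moves, so the search runs in polynomial
    space whenever the supplied callbacks do.
    """
    if is_final(state):
        return painter_won(state)
    outcomes = (painter_wins(s, not painter_to_move, moves, is_final, painter_won)
                for s in moves(state, painter_to_move))
    # The painter needs some winning move; the drawer wins unless all of
    # his moves leave the painter winning.
    return any(outcomes) if painter_to_move else all(outcomes)
```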


To prove that the problem is PSPACE-hard, we show how to transform a totally quantified boolean formula φ = ∀x1∃x2 · · · ∃xn F(x1, . . . , xn), with F in 3DNF, into a game state (G, k, G′, f) such that φ is true if and only if the painter has a winning strategy from (G, k, G′, f).

• The number of variables in φ is n.

• The number of terms in F is t.

• We define k = t + 3n/2 + 2 to be the number of colors that the painter is allowed to use. (For the example formula from Section 3.3, with n = 4 and t = 3, this gives k = 3 + 6 + 2 = 11.)

We now describe G. It consists of the subgraphs X, T, and H, together with A, B, c, and m. The relationship between them is sketched in Figure 3.1.

[Figure 3.1: A sketch of the construction. The small circles represent vertices and the big circles represent parts of the graph containing multiple vertices. The blue circles are subgraphs that have been pre-colored, with their colors shown above them. Solid lines are complete connections; dashed lines are connections where not all vertices in both parts are connected. For details on how they are connected, see the description below.]

The pre-coloring consists of A, B, c, and m.

• A is a complete graph on k − 3 vertices, colored with the colors 4, . . . , k.

• B is an independent set with a large (but polynomial) number of vertices. They all have color 3.

• c is a single vertex with color 1.

• m is a single vertex with color 1.

The vertex c has an edge to each vertex in A and B.

The subgraph X consists of two vertices for each variable, xi and x̄i, with an edge between them. Each vertex in X is connected to each vertex in A and to at least one vertex in B. This means that the only possible colors for vertices in X are 1 and 2 (also called true and false, respectively).

The subgraph T corresponds to the terms. It is a complete graph with one vertex, tj, for each term, j. Furthermore, a vertex in T has an edge to xi if xi occurs in the corresponding term (and similarly an edge to x̄i if x̄i occurs in the term).

Since T is a complete graph, each of its vertices must be given a different color. However, if one vertex of T only has neighbors in X with color true, the painter can introduce only t − 1 new colors in T instead of t (by reusing the color false in T). This corresponds to one term being satisfied by a truth assignment, and it is the key to the construction. However, we do not want a color to be saved if a vertex in T only has neighbors in X with color false. To prevent this, we add an edge between m (which has color true) and each vertex in T.

The last subgraph is H. Its purpose is to ensure that the drawer requests the existentially quantified vertices in X in ascending order (as they appear in φ). It consists of n/2 copies of P4, named H1, . . . , Hn/2. The vertices of Hi are called h1_i, . . . , h4_i, such that h1_i and h4_i are the endpoints. There are edges between each vertex of Hi and x2i−1. Furthermore, h2_i and h4_i have edges to x2l and x̄2l for l ≥ i. Also, each vertex in H has an edge to each vertex in T.

The purpose of B is to give the painter some information about which vertex is being requested, based on the number of edges the requested vertex has to B. All vertices in the non-pre-colored part (X, T, and H) have at least one edge to a vertex in B. These edges are constructed such that the following holds:

Lemma 1. When the drawer presents a vertex v, the painter is always able to identify an i and which one of the following statements about v holds:

• v is xi and i is even.

• v is x̄i and i is even.

• v is either xi or x̄i and i is odd (the two cases cannot be distinguished).

• v is in Hi (the four cases cannot be distinguished).

• v is ti.

Clearly, it is possible to use the number of edges to B to encode this information. For the specific construction and the proof of this lemma, we refer to the arXiv version [115].

Formally, we are only allowed to specify the pre-coloring as a graph (with a function, f, mapping the vertices to colors); we are not allowed to specify where it is induced in G, as it is up to the drawer to decide this. In our case, the pre-colored graph, G′, is isomorphic to the graph consisting of A, B, c, and m (with the specified edges between them and the specified colors). In G′, we call these parts A′, B′, c′, and m′.

When inducing G′ in G, it is only possible to map c′ to c (we can choose the number of vertices in B large enough that no other vertex in G has as high a degree as c). Similarly, the vertices of A′ can only be mapped to those of A, and those of B′ only to those of B. This is because these are the only neighbors of c, and they are easily distinguishable, since A is a complete graph and B is an independent set. Finally, m′ can only be mapped to m, since m is the only vertex outside A that has no neighbor in B.

We are now ready for the main proof. We begin with the easier implication.

Lemma 2. If φ is false, then the drawer has a winning strategy from the state (G, k, G′, f).

Proof. We will call color 1 true and color 2 false. Since φ = ∀x1∃x2 · · · ∃xn F(x1, . . . , xn) is false, it holds that

∃x1∀x2 · · · ∀xn ¬F(x1, . . . , xn).

This means that if two players alternately decide the truth values of x1, x2, . . . , xn, there is a strategy S for the player deciding the values of the odd variables which makes F false. The drawer is going to implement S.

The drawer will start by presenting the vertices in X and H. It will do this in rounds. In round i, 1 ≤ i ≤ n/2, it first presents x2i−1 and x̄2i−1 in some order. It then presents the vertices of Hi in some order. Finally, it presents x2i and x̄2i in some order. There are n/2 rounds. We want to show that the drawer can ensure that the following holds after round i:

• Each Hj with j ≤ i has vertices with at least 3 different colors.

• All xj and x̄j with j ≤ 2i have been colored with the colors true or false.

• Interpreting the coloring as an assignment of truth values to the variables x1, . . . , x2i, the drawer has a winning strategy in the game where the drawer and the painter alternately decide the truth values of the remaining variables.

• Either the colors true and false are not found in Hi, or the painter has lost the game.

For i = 0, they all hold. Assume that they hold after round i − 1. We show that the drawer can present the vertices in round i in an order that ensures that they hold after round i.

The drawer starts by presenting x2i−1 and x̄2i−1. Among the vertices that have already been presented (including those in the pre-coloring), each vertex is either adjacent to both x2i−1 and x̄2i−1 or to neither of them (note that Hi has not been presented yet). This means that the painter is unable to identify which is which. Since they are both adjacent to all vertices in A and to some in B, the only available colors for them are true and false. The painter has to assign true to one of them and false to the other. The drawer now decides which one received the color true, according to his winning strategy (we know that he has one from the induction hypothesis). This ensures that he will have a winning strategy regardless of whether variable x2i is set to true or false.

Now, the drawer presents two non-adjacent vertices from Hi. The painter cannot identify which ones they are, since the vertices in Hi are connected to the same vertices among those that have been presented. If the painter gives these two vertices the same color, the drawer decides that they were h1_i and h4_i; otherwise, the drawer decides that they were h1_i and h3_i. The drawer now presents the remaining two vertices of Hi, which results in Hi containing at least three different colors. Note that the color 3 cannot be used in Hi, since all four vertices are adjacent to some vertices in B.

The drawer now presents x2i and x̄2i. According to Lemma 1, the painter can identify which one is x2i and which one is x̄2i. Again, the painter must color one true and the other false. This can be interpreted as the painter assigning a truth value to variable x2i. As we argued earlier, the drawer must still have a winning strategy when he decides the truth values of the remaining odd variables.

We need to argue that if a vertex in Hi receives the color true or false, the painter will immediately lose. We first consider the case where the color true is found on h2_i or h4_i. The drawer then presents x2i and x̄2i, which the painter must color. These are adjacent to all vertices in A (and to some in B), meaning they can only get the colors true or false. Since they are adjacent to h2_i and h4_i, they cannot get the color true. Only the color false is now available, and after one of them gets it, the other cannot get any color. If the color true is found on h1_i or h3_i instead, the drawer swaps the positions of h1_i and h4_i, as well as those of h2_i and h3_i. This is possible since the painter cannot at this time distinguish between h1_i and h4_i, or between h2_i and h3_i. This ensures that the color true does end up on h2_i or h4_i, so we can argue in the same way. The argument is similar if it is the color false that is found in Hi.

This concludes the induction. We have now shown that after round n/2, all vertices in X have been colored with the colors true and false, and the resulting truth assignment to the variables in X makes F false. The drawer now presents all vertices in T in any order. They cannot get the color true, since they are adjacent to m, which has that color. Each vertex in T is adjacent to a vertex in X with color false, since the truth assignment made F false. Furthermore, there are 3n/2 colors that cannot be used on T, since they were used in H. Also, the color 3 cannot be used, since all vertices in T are adjacent to some vertices in B. This leaves k − 2 − 3n/2 − 1 = t − 1 colors, which is not enough to color the t vertices of T, since they form a clique.

We now show the other implication, which completes the proof.

Lemma 3. If φ is true, then the painter has a winning strategy from the state (G, k, G′, f).

Proof. The painter has to color the part of G that is not in G′ such that the resulting colored graph has at most k different colors. We notice that all remaining vertices have at least one neighbor in B, which means that the color 3 is not available for any of them. This means that there are k − 1 = 3n/2 + t + 1 colors left. Moreover, only the colors true and false are available for vertices in X. We have already defined colors 1 and 2 to be called true and false. We call the next 3n/2 colors the H-colors, and the t − 1 remaining colors the T-colors. The idea is that the H-colors will be used in H, true and false will be used in X, and the T-colors will be used in T. Since there are only t − 1 T-colors, the painter will need to use true, false, or an H-color on one vertex in T. This is only possible because φ is true.

Before defining the painter's strategy, we need a few preliminaries. We start by defining normal play. In normal play, when a vertex xi or x̄i with i even is requested, the following must hold: in each Hj with j ≤ i/2, both h1_j and h4_j have been requested. We also define a good request: a request to a vertex ti ∈ T where, for each neighbor v ∈ X of ti, v does not have the color false and v's neighbor in X does not have the color true.

Table 3.1: The painter's strategy in Phase 1.

• v ∈ V(Hi): color greedily with H-colors.

• v ∈ V(X), i even, normal play: use color p′(xi).

• v ∈ V(X), i even, not normal play: color greedily with {true, false} and go to Phase 2.

• v ∈ V(X), i odd, no vertex in H_{(i+1)/2} has been requested: color greedily with {true, false}.

• v ∈ V(X), i odd, at least one vertex in H_{(i+1)/2} has been requested: the painter can identify whether the request is to xi or x̄i; use the color true for xi and false for x̄i.

• v ∈ V(T), good request: use the color false and go to Phase 3.

• v ∈ V(T), not a good request: color greedily with T-colors.

For example, if t1 is requested and x1 is a neighbor, it is a good request only if x1 does not have the color false (possibly because it has not been presented yet) and x̄1 does not have the color true (it might also not have been presented yet). Note that when a vertex in T is requested, the painter can identify whether it is a good request, using Lemma 1 and the fact that he knows, for each vertex in T, which neighbors in X it has. We call it a good request because it results in the painter being able to use the color false on that vertex, which means that he will have enough colors and win the game.

Since φ is true, there must exist a function p which, based on the truth assignment to x1, . . . , xi−1, computes whether variable xi (for even i) should be true or false if the painter wants to make F true. We define the function p′, which computes whether variable xi should be given the color true or false even if not all variables x1, . . . , xi−1 have had their truth values decided yet. For even i, we let p′(xi) = p(p′(x1), . . . , p′(xi−1)). For odd i, we define p′(xi) = true if xi has the color true, if x̄i has the color false, or if neither of them has been presented yet; we define p′(xi) = false otherwise. It is useful to think of it in the following way: if xi is requested before xj and x̄j, j < i, the painter will be able to distinguish between xj and x̄j when they are requested. Because of this, the painter simply decides that xj is true and colors xj and x̄j accordingly when they are requested.

We now define a strategy for the painter. There are three phases. The painter starts in Phase 1. Certain events will cause the painter to enter Phase 2 or 3, which in both cases means that the painter from that point on can follow a simple strategy to win. Table 3.1 defines how the painter handles a request to a vertex v when in Phase 1. Phases 2 and 3 will be defined subsequently.

We show that, under normal play, the drawer will eventually have to make a good request, which makes the painter enter Phase 3. First, we show that the truth assignment that x1, . . . , xn gets will make F true. When an xi or x̄i with even i is requested, the painter will color it based on the colors of x1, . . . , xi−1. However, since the drawer decides the order, it may happen that the truth values of these have not already been decided. For the variables with an even index, this is not a problem for the painter, since it can just compute recursively which color it will apply. For a variable with an odd index xj, we defined that the painter should consider it true (we set p′(xj) = true for odd j). This is possible since we are under normal play, which means that h^1_{(j+1)/2} and h^4_{(j+1)/2} have already been requested. When xj and x̄j are requested, the painter is able to use this to see which one it is. According to Table 3.1, the painter will give xj color true and x̄j color false, which is exactly why it is possible for the painter to already consider xj as true before it has been requested, when xi is requested under normal play. Note that φ is true. Since the painter colors according to p, the resulting truth assignment makes at least one term true. This also gives that at least one request to a vertex in T will be good (a request to ti ∈ T is not good if and only if term ti cannot be satisfied by the current truth assignment, no matter what truth value the undecided variables are given). We have now shown that the drawer must eventually make a good request under normal play. This shows that the game will either deviate from normal play at some point (making the painter enter Phase 2) or contain a good request such that the painter enters Phase 3. We now define how the painter behaves in Phase 2 and Phase 3 and show why he wins in both cases.

At the beginning of Phase 2, the drawer has just deviated from normal play. He has presented xi (or x̄i) with i even, even though there exists an Hj with j ≤ i/2 where h^1_j and h^4_j have not both been requested. Note that Hj is bipartite (it is a P4). Since H was colored greedily, and h^1_j and h^4_j have not both been presented, we know that at most one color is already used in each partition and no color is already used in both partitions. For future requests in Hj, the painter will know which partition the requested vertex is in, since h^2_j and h^4_j are connected to the vertex in X that was just requested. Thus, the painter will only have to use 2 colors for Hj. For the remaining requests, the painter colors greedily with H-colors in H. He colors greedily with true, false in X, and he colors greedily with T-colors in T. When the final vertex in T is requested, there will not be a T-color available (since there are only t − 1). However, the painter will have one H-color that is not needed (the color saved in Hj). He uses that as a T-color, which ensures that he wins.

At the beginning of Phase 3, the painter has just assigned the color false to a vertex in T after a good request. Since the request was good, we know that all adjacent vertices in X have been or can be colored true. Their neighbors in X have been or can be colored false. The remaining vertices in X get colored greedily with true, false. The vertices in H will be colored greedily using H-colors. The remaining vertices in T will be colored greedily using T-colors, which suffices. This ensures that the painter wins.

We have now presented a strategy for the painter. We have shown that either Phase 2 or Phase 3 will always be entered, and we have shown how the painter wins once such a phase has been entered.

We can now combine Lemmas 2 and 3 and Observation 1 to get the desired theorem.

Theorem 1. Given a state (G, k, G′, f) in the on-line graph coloring game, it is PSPACE-complete to decide if the painter has a winning strategy.

3.5 Closing remarks

The complexity of the problem of deciding if χO(G) ≤ k is still open. It was shown to be coNP-hard in [114] (unpublished work), and it is certainly in PSPACE using the argument presented here. Adding a pre-coloring ensures that the problem is PSPACE-complete. That result suggests that it may be harder to do on-line competitive analysis than it is to do competitive analysis, since deciding if χ(G) ≤ k is "only" NP-complete. Note, though, that this is only an indication, since it might be possible to do the analysis without computing χO(G), and furthermore, it is not clear whether the complexity is changed by the pre-coloring (as is the case for some offline coloring problems, see [110] and [124]).

Our work with the problem has led to the following conjecture:

Conjecture 1. Let a graph G and a k ∈ N be given. The problem of deciding if χO(G) ≤ k is PSPACE-complete.

It seems likely to us that a reduction from totally quantified Boolean formulas in 3DNF is possible. It may be possible to use a construction similar to the one used here, but special attention has to be given to the case where φ is true. It is challenging to allow the painter to implement the winning strategy from the satisfiability game when the drawer is able to request any vertex in the graph without the painter knowing which vertex is being requested.


CHAPTER 4

Adding Isolated Vertices Makes some Greedy Online Algorithms Optimal

Joan Boyar and Christian Kudahl
Department of Mathematics and Computer Science
University of Southern Denmark
{joan,kudahl}@imada.sdu.dk

Abstract

An unexpected difference between online and offline algorithms is observed. The natural greedy algorithms are shown to be worst case online optimal for Online Independent Set and Online Vertex Cover on graphs with "enough" isolated vertices, Freckle Graphs. For Online Dominating Set, the greedy algorithm is shown to be worst case online optimal on graphs with at least one isolated vertex. These algorithms are not online optimal in general. The online optimality results for these greedy algorithms imply optimality according to various worst case performance measures, such as the competitive ratio. It is also shown that, despite this worst case optimality, there are Freckle Graphs where the greedy independent set algorithm is objectively less good than another algorithm.

It is shown that it is NP-hard to determine any of the following for a given graph: the online independence number, the online vertex cover number, and the online domination number.

4.1 Introduction

This paper contributes to the larger goal of better understanding the nature of online optimality, greedy algorithms, and different performance measures for online algorithms. The graph problems Online Independent Set, Online Vertex Cover, and Online Dominating Set, which are defined below, are considered in the vertex-arrival model, where the vertices of a graph, G, are revealed one by one. When a vertex is revealed (we also say that it is "requested"), its edges to previously revealed vertices are revealed. At this point, an algorithm irrevocably either accepts the vertex or rejects it. This model is well-studied (see, for example, [81, 85, 86, 118, 121, 148]).

We show that, for some graphs, an obvious greedy algorithm for each of these problems performs less well than another online algorithm and thus is not online optimal. However, this greedy algorithm performs (at least in some sense) at least as well as any other online algorithm for these problems, as long as the graph has enough isolated vertices. Thus, in contrast to the case with offline algorithms, adding isolated vertices to a graph can improve an algorithm's performance, even making it "optimal".

For an online algorithm for these problems and a particular sequence of requests, let S denote the set of accepted vertices, which we call a solution. When all vertices have been revealed (requested and either accepted or rejected by the algorithm), S must fulfill certain conditions:

• In the Online Independent Set problem [54, 86], S must form an independent set. That is, no two vertices in S may have an edge between them. The goal is to maximize |S|.

• In the Online Vertex Cover problem [55], S must form a vertex cover. That is, each edge in G must have at least one endpoint in S. The goal is to minimize |S|.

• In the Online Dominating Set problem [146], S must form a dominating set. That is, each vertex in G must be in S or have a neighbor in S. The goal is to minimize |S|.

If a solution does not live up to the specified requirement, it is said to be infeasible. The score of a feasible solution is |S|. The score of an infeasible solution is ∞ for minimization problems and −∞ for maximization problems. Note that for Online Dominating Set, it is not required that S form a dominating set at all times. It just needs to be a dominating set when the whole graph has been revealed. If, for example, it is known that the graph is connected, the algorithm might reject the first vertex, since it is known that it will be possible to dominate this vertex later.

In Section 4.2, we define the greedy algorithms for the above problems, along with concepts analogous to the online chromatic number of Gyárfás et al. [82] for the above problems, giving a natural definition of optimality for online algorithms. In Section 4.3, we show that the greedy algorithms are not in general online optimal for these problems. In Section 4.4, we define Freckle Graphs, which are graphs that have "enough" isolated vertices to make the greedy algorithms online optimal. In proving that the greedy algorithms are optimal on Freckle Graphs, we also show that, for Online Independent Set, one can, without loss of generality, consider only adversaries which never request a vertex adjacent to an already accepted vertex while there are alternatives. In Section 4.5, we investigate which other online problems have the property that adding isolated requests makes greedy algorithms optimal. In Section 4.6, it is shown that the online optimality results for these greedy algorithms imply optimality according to various worst case performance measures, such as the competitive ratio. In Section 4.7, it is shown that, despite this worst case optimality, there is a family of Freckle Graphs where the greedy independent set algorithm is objectively less good than another algorithm. Various NP-hardness results concerning optimality are proven in Section 4.8. There are some concluding remarks and open questions in the last section. Note that Theorem 10 and Theorem 12 appeared in the second author's Master's thesis [113], which served as inspiration for this paper.

4.2 Algorithms and Preliminaries

For each of the three problems, we define a greedy algorithm.

• In Online Independent Set, GIS accepts a revealed vertex, v, iff no neighbor of v has been accepted.

• In Online Vertex Cover, GVC accepts a revealed vertex, v, iff a neighbor of v has previously been revealed but not accepted.

• In Online Dominating Set, GDS accepts a revealed vertex, v, iff no neighbor of v has been accepted.

Note that the algorithms GIS and GDS are the same (they have different names to emphasize that they solve different problems). For an algorithm ALG, we define the complement algorithm, ALG̅, to be the algorithm that simulates ALG and accepts exactly those vertices that ALG rejects. This defines a bijection between Online Independent Set and Online Vertex Cover algorithms. Note that GVC = GIS̅.
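As an illustration (ours, not the paper's), the greedy independent set rule and the complement construction can be sketched in Python, where each request is a pair of a vertex and its previously revealed neighbors:

# A minimal sketch (assumptions: requests is an iterable of (v, neighbors)
# pairs in the vertex-arrival model, where neighbors lists the previously
# revealed neighbors of v; the names are illustrative, not from the paper).

def gis(requests):
    """GIS (= GDS): accept v iff no neighbor of v has been accepted."""
    accepted = set()
    for v, neighbors in requests:
        if not any(u in accepted for u in neighbors):
            accepted.add(v)
    return accepted

def complement(alg):
    """Turn ALG into the algorithm that accepts exactly what ALG rejects."""
    def comp(requests):
        requests = list(requests)
        acc = alg(requests)
        return {v for v, _ in requests} - acc
    return comp

gvc = complement(gis)  # GVC accepts exactly the vertices that GIS rejects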

For a graph, G, an ordering of the vertices, φ, and an algorithm, ALG, we let ALG(φ(G)) denote the score of ALG on G when the vertices are requested in the order φ. We let |G| denote the number of vertices in G.

For minimization problems, we define:

    ALG(G) = max_φ ALG(φ(G))

That is, ALG(G) is the highest score ALG can achieve over all orderings of the vertices in G.

For maximization problems, we define:

    ALG(G) = min_φ ALG(φ(G))

That is, ALG(G) is the lowest score ALG can achieve over all orderings of the vertices in G.

Since we consider a worst possible ordering, we sometimes think of an adversary as ordering the vertices.

Observation 2. Let ALG be an algorithm for Online Independent Set. Let a graph, G, with n vertices be given. Then ALG̅ is an Online Vertex Cover algorithm, and ALG(G) + ALG̅(G) = n.

The equality ALG(G) + ALG̅(G) = n holds since a worst ordering of G for ALG is also a worst ordering for ALG̅.


In considering online algorithms for coloring, [82] defines the online chromatic number, which intuitively is the best result (minimum number of colors) any online algorithm can be guaranteed to obtain for a particular graph (even when the graph, but not the ordering, is known in advance). We define analogous concepts for the problems we consider, defining for every graph a number representing the best value any online algorithm can achieve. Note that in considering all algorithms, we include those which know the graph in advance. Of course, when the graph is known, the order in which the vertices are requested is not known to an online algorithm, and the label given with a requested vertex does not necessarily correspond to its label in the known graph: the subgraph revealed up to this point might be isomorphic to more than one subgraph of the known graph, and it could correspond to any of these subgraphs.

Let IO(G) denote the online independence number of G. This is the largest number such that there exists an algorithm, ALG, for Online Independent Set with ALG(G) = IO(G). Similarly, let VO(G), the online vertex cover number, be the smallest number such that there exists an algorithm, ALG, for Online Vertex Cover with ALG(G) = VO(G). Also, let DO(G), the online domination number, be the smallest number such that there exists an algorithm, ALG, for Online Dominating Set with ALG(G) = DO(G).

The same relation holds between the online independence number and the online vertex cover number as between the independence number and the vertex cover number.

Observation 3. For a graph, G, with n vertices, we have IO(G) + VO(G) = n.

Proof. Let a graph, G, with n vertices be given. Let ALG be an algorithm for Online Independent Set such that ALG(G) = IO(G). From Observation 2, we have that ALG̅ is an algorithm for Online Vertex Cover such that ALG̅(G) = n − IO(G). It must hold that ALG̅(G) = VO(G), since the existence of an algorithm with a lower vertex cover number would imply the existence of a corresponding algorithm for Online Independent Set with an independence number greater than ALG(G) = IO(G).

4.3 Non-optimality of Greedy Algorithms

We start by motivating the other results in this paper by showing that the greedy algorithms are not optimal in general. In particular, they are not optimal on the star graphs, Sn, n ≥ 3, which have a center vertex, s, and n other vertices, adjacent to s but not to each other.

The algorithm IS-STAR (see Algorithm 1) does much better than GIS for the independent set problem on star graphs.

Algorithm 1 IS-STAR, an online optimal algorithm for independent set for Sn
1: for request to vertex v do
2:     if v is the first vertex then
3:         reject v
4:     else if v is the second vertex and it has an edge to the first then
5:         reject v
6:     else if v has more than one neighbor already then
7:         reject v
8:     else
9:         accept v

Theorem 2. For a star graph, Sn, IS-STAR(Sn) = n − 1 and GIS(Sn) = 1.

Proof. We first show that IS-STAR never accepts the center vertex, s. If s is presented first, it will be rejected. If it is presented second, it will have an edge to the first vertex and be rejected. If it is presented later, it will have more than one neighbor and be rejected. Since IS-STAR never accepts s, it produces an independent set. For every ordering of the vertices, IS-STAR will reject the first vertex. If the first vertex is s, it will reject the second vertex. Otherwise, it will reject s when it comes. Thus, IS-STAR(Sn) = n − 1. On the other hand, GIS(Sn) = 1, since it will accept s if it is requested first.

Since n − 1 > 1 for n ≥ 3, we can conclude that GIS is not an optimal online algorithm for all graph classes.

Corollary 1. For Online Independent Set, there exists an infinite family of graphs, Sn for n ≥ 3, and an online algorithm, IS-STAR, such that GIS(Sn) < IS-STAR(Sn).

Note that if some algorithm, ALG, rejects the first vertex requested, ALG(Sn) ≤ n − 1, and if it accepts the first vertex, ALG(Sn) = 1. Thus, IS-STAR is optimal.
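Theorem 2 can be checked mechanically; the following small Python experiment (illustrative only, with vertex 0 playing the role of the center s) simulates IS-STAR on every ordering of Sn:

# A small illustrative check of Theorem 2 (names are ours, not the paper's).
# Vertex 0 is the center s of S_n; vertices 1..n are the other vertices.
from itertools import permutations

def is_star(order):
    """Simulate IS-STAR on S_n under the given ordering; return accepted set."""
    accepted, seen = set(), []
    for v in order:
        # All edges of S_n involve the center, so the previously revealed
        # neighbors of v are exactly the revealed vertices on an edge with v.
        nbrs = [u for u in seen if u == 0 or v == 0]
        reject = (len(seen) == 0                      # first vertex
                  or (len(seen) == 1 and nbrs)        # second, adjacent to first
                  or len(nbrs) > 1)                   # more than one neighbor
        if not reject:
            accepted.add(v)
        seen.append(v)
    return accepted

n = 4
assert min(len(is_star(o)) for o in permutations(range(n + 1))) == n - 1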

To show that GVC is not an optimal algorithm for Online Vertex Cover, we consider IS-STAR̅.

Corollary 2. For Online Vertex Cover, there exists an infinite family of graphs, Sn for n ≥ 3, and an online algorithm, IS-STAR̅, such that IS-STAR̅(Sn) < GVC(Sn).

Proof. Using Observation 2 and Theorem 2, we have that IS-STAR̅(Sn) = n + 1 − IS-STAR(Sn) = 2 and GVC(Sn) = n + 1 − GIS(Sn) = n.

Finally, for Online Dominating Set, we have a similar result.

Corollary 3. For Online Dominating Set, there exists an infinite family of graphs, Sn for n ≥ 3, and an online algorithm, IS-STAR̅, such that IS-STAR̅(Sn) < GDS(Sn).

Proof. Requesting s last ensures that GDS accepts n vertices. It can never accept all n + 1 vertices, so GDS(Sn) = n. On the other hand, IS-STAR̅(Sn) = 2 (as in the proof of Corollary 2). We note that a vertex cover is also a dominating set in connected graphs. This means that IS-STAR̅ always produces a dominating set in Sn.


4.4 Optimality of Greedy Algorithms on Freckle Graphs

For a graph, G, we let

• k denote the number of isolated vertices,

• G′ denote the graph induced by the non-isolated vertices,

• b(G′) be a maximum independent set in G′, and

• s(G′) be a minimum inclusion-maximal independent set in G′ (that is, a smallest independent set such that including any additional vertex in the set would cause it to no longer be independent).

Note that |s(G′)| is also known as the independent domination number of G′ (see [5] for more information).

Using this notation, we define the following class of graphs.

Definition 1. A graph, G, is a Freckle Graph if k + |s(G′)| ≥ IO(G′).

Note that all graphs where at least half the vertices are isolated are Freckle Graphs. If the definition were changed to this (which might be less artificial), the results presented here would still hold, but our definition gives stronger results. The name comes from the idea that such a graph in many cases has a lot of isolated vertices (freckles). Furthermore, any graph can be turned into a Freckle Graph by adding enough isolated vertices. Note that a complete graph is a Freckle Graph. To make the star graph, Sn, a Freckle Graph, we need to add n − 2 isolated vertices. We show that GIS and GVC are online optimal on all Freckle Graphs. For the proof, we need a little more terminology and a helpful lemma.

Definition 2. A request is pointless if it is to a vertex which has a neighbor which was already accepted.

Definition 3. For a graph, G, an adversary is said to be conservative if it does not make pointless requests unless only such requests remain.

Lemma 4. For Online Independent Set, for every graph, G, there exists a conservative adversary, ADV, which ensures that every algorithm accepts an independent set in G of size at most IO(G).

Proof. Assume, for the sake of contradiction, that there exists an algorithm, ALG, which accepts an independent set of size at least IO(G) + 1 against every conservative adversary. We now describe an algorithm, ALG′, which accepts an independent set of size at least IO(G) + 1 against any adversary. This contradicts the definition of IO(G).

Intuitively, since pointless requests must be rejected by any algorithm, ALG′ can reject pointless requests and otherwise ignore them, reacting as ALG would against a conservative adversary on the other requests. ALG′ works as follows: It maintains a virtual graph, G′, which, inductively, is a copy of the part of G revealed so far, but without the pointless requests. When a new non-pointless vertex is requested, the same vertex is added to G′, including only the edges to previous vertices which are not pointless (the pointless requests are not in G′).

ALG′ now accepts this request if ALG accepts the corresponding request in G′. When a pointless request is made, ALG′ rejects it and does not add it to G′.

Note that every time ALG accepts a vertex in G′, ALG′ accepts the corresponding vertex in G. Thus, ALG′(G) ≥ ALG(G′) ≥ IO(G) + 1, which is a contradiction.

Theorem 3. For any algorithm, ALG, for Online Independent Set, and for any Freckle Graph, G, GIS(G) ≥ ALG(G).

Proof. First, we note that GIS will accept the k isolated vertices. In G′, it will accept an inclusion-maximal independent set. Since we take the worst ordering, it accepts |s(G′)| vertices. We get GIS(G) = k + |s(G′)|. Now we describe an adversary strategy which ensures that an arbitrary algorithm, ALG, accepts at most k + |s(G′)| vertices.

The adversary starts by presenting isolated vertices until ALG either accepts |s(G′)| vertices or rejects k vertices.

If ALG accepts |s(G′)| vertices, the adversary decides that they are exactly those in s(G′). This means that ALG will accept no other vertices in G′. Thus, it accepts at most k + |s(G′)| vertices.

If ALG rejects k vertices, the adversary decides that they are the k isolated vertices. We now consider G′. At this point, up to |s(G′)| − 1 isolated vertices may have been requested and accepted. Using Lemma 4, we see that requesting independent vertices up to this point is optimal play from an adversary playing against an algorithm which has accepted all of these isolated requests. Following this optimal conservative adversary strategy ensures that the algorithm accepts an independent set of size at most IO(G′) ≤ k + |s(G′)| = GIS(G).

Corollary 4. For any Freckle Graph, G, GIS(G) = IO(G).

Intuitively, GIS becomes optimal on Freckle Graphs because the isolated vertices allow it to accept a larger independent set, even though it still does poorly on the connected part of the graph. Any algorithm which outperforms GIS on the connected part of the graph must reject a large number of the isolated vertices in order to keep this advantage.

In contrast, for vertex cover, adding isolated vertices to a graph does not make GVC accept fewer vertices. GVC becomes optimal on Freckle Graphs because the isolated vertices force any other online algorithm to accept some of those isolated vertices.

Corollary 5. For any algorithm, ALG, for Online Vertex Cover, and for any Freckle Graph, G, GVC(G) ≤ ALG(G).

Proof. This follows from Theorem 3, Observation 2, and the fact that GVC = GIS̅.

Corollary 6. For any Freckle Graph, G, GVC(G) = V O(G).


For Online Dominating Set something similar holds, but only one isolated vertex is needed. GDS becomes optimal because any dominating set has to include that isolated vertex.

Theorem 4. For any algorithm, ALG, for Online Dominating Set, and for any graph, G, with at least one isolated vertex, GDS(G) ≤ ALG(G).

Proof. Recall that k denotes the number of isolated vertices in G, and G′ denotes the subgraph of G induced by the non-isolated vertices. Note that GDS always produces an independent set. Thus, GDS accepts at most k + |b(G′)| vertices; it accepts exactly the k isolated vertices and the vertices in b(G′) if these are presented first.

Let an algorithm, ALG, be given. The adversary can start by presenting k + |b(G′)| isolated vertices. If at least one of these vertices is not accepted by ALG, the adversary can decide that this was in fact an isolated vertex, which can now no longer be dominated. Thus, ALG(G) = ∞. If ALG accepts all the presented vertices, it gets a score of at least k + |b(G′)|.

Corollary 7. For any graph, G, with an isolated vertex, GDS(G) = DO(G).

4.5 Adding Isolated Elements in Other Problems

These results, showing that adding isolated vertices to a graph can make the greedy algorithms for Online Independent Set and Online Vertex Cover optimal, lead one to ask if similar results hold for other problems. The answer is clearly "yes": We give similar results for Online Matching, Online Disjoint Path Allocation, and Maximum Online Set (including Online Matroid Intersection as a special case).

We consider Online Matching in the edge-arrival model, so each request is an edge which must be accepted or rejected. If one or both of the vertices that are endpoints of the edge have not been revealed yet, they are revealed with the edge. The goal is to accept as large a set, S, as possible, under the restriction that S is a matching. Thus, no two edges in S can be incident to each other. One can define MO(G), the online matching number of G, analogously to the online independence number, to be the largest number such that there exists an algorithm, ALG, for Online Matching with ALG(G) = MO(G). Let GM be the natural greedy algorithm for Online Matching, which accepts any edge not incident to any edge already accepted. Instead of adding isolated vertices, we add isolated edges, edges which do not share any vertices with any other edges. The number of isolated edges to add would be k, where MO(G) ≤ GM(G) + k. Let G′ denote the subgraph of G induced by the non-isolated edges. We get the following theorem:

Theorem 5. Let G be a graph where MO(G′) ≤ GM(G′) + k. For Online Matching, we have that

    GM(G) = MO(G).

Proof. Note that a matching in a graph G = (V, E) corresponds to an independent set in the line graph L(G), where the vertices of L(G) correspond to the edges of G, and two vertices of L(G) are adjacent if and only if the corresponding edges are incident to each other in G. Thus, since GIS is optimal for the graph with IO(L(G)) − GIS(L(G)) isolated vertices (or more), GM is optimal for the graph with MO(G′) − GM(G′) isolated edges (or more).

The Online Disjoint Path Allocation problem is similar. It was studied with advice in [29]. A graph, G, is given (and fully visible to the algorithm). The requests are paths in G which must be accepted or rejected, and the goal is to accept as large a set, S, of paths as possible, under the restriction that no two paths in S may share an edge. The instances of the problem correspond to instances of Online Independent Set, by letting the vertices in the Online Independent Set instance be the paths in the Online Disjoint Path Allocation instance and letting two vertices be adjacent if the corresponding paths have an edge in common. Thus, adding isolated paths of length 1 makes the natural greedy algorithm optimal.

All of the above problems are in the class AOC [40], so one is tempted to ask if all problems in AOC have a similar property, or if all maximization problems in AOC do. This is not the case.

Definition 4. A problem is in AOC (Asymmetric Online Covering) if the following hold:

• Each request must be either accepted or rejected on arrival.

• The cost (profit) of a feasible solution is the number of accepted requests.

• The cost (profit) of an infeasible solution is ∞ (−∞).

• For any request sequence, there exists at least one feasible solution.

• A superset (subset) of a minimum cost (maximum profit) solution is feasible.

An upper bound on the advice complexity of all problems in AOC was proven in [40], along with a matching lower bound for a subset of these problems, the AOC-complete problems.

Theorem 6. There exists a maximization problem in the class AOC where adding isolated requests which are independent of all others, in the sense that these requests can be added to any feasible set while maintaining feasibility, does not make the natural greedy algorithm optimal.

Proof. Consider the problem Online Maximal Forest, in the vertex-arrival model, where the goal is to accept as large a set, S, of vertices as possible, under the restriction that S may not contain a cycle. Consider the following graph, G′n = (V, E), where

    V = {x, y, v1, v2, . . . , vn} and
    E = {(x, y)} ∪ {(x, vi), (y, vi) | 1 ≤ i ≤ n}.

Figure 4.1 shows G′5.

Figure 4.1: The graph G′5 (x and y are adjacent to each other and to every vi).

We let W = {v1, v2, . . . , vn}. Consider Gn, which is G′n with an arbitrary number k of isolated vertices added. Let GF be the natural greedy algorithm for Online Maximal Forest, which accepts any vertex which does not create a cycle. If the adversary requests x and y before any vertex in W, GF cannot accept any vertex in W, so GF(Gn) = k + 2. But there is another algorithm, ALG, which accepts more. The algorithm ALG accepts a vertex v if

• v has degree at most two and

• all neighbors of v have degree at least three (degree two before the current request).

We claim that ALG cannot accept both x and y. Assume x was requested before y and accepted. Now, y can only be accepted if x already has two other neighbors, vi and vj, when y is requested. However, this means that y will also have these two neighbors, and ALG will reject it because its degree is at least three. The argument is symmetric if y is requested before x.

We now show that ALG accepts at least k + n − 1 vertices. The k isolated vertices will be accepted by ALG regardless of when they are requested.

Now assume x is requested before all vertices in W and before y. In this case, x is accepted. We have shown that y will be rejected when it is requested. At most two vertices from W can be rejected. When at least two vertices from W have already been requested and a new vertex vi is requested, it holds that vi has degree at most two and that all neighbors of vi (x and possibly y) have degree at least three. Thus, vi is accepted. In total, at least k + 1 + n − 2 = k + n − 1 vertices are accepted. A symmetric situation holds if y is presented before x and all vertices in W.

We now consider the case where a vertex vi ∈ W is requested before x and y. In this case, vi is accepted. When x and y are requested, they will be rejected, since they have a neighbor (vi) whose degree is at most two. At most one vertex in W can be rejected, since after that, two vertices in W have already been requested (vi was requested first). When another vertex vj is requested, it holds that its possible neighbors (x and y) have degree at least three. In total, at least k + n − 1 vertices are accepted.

Hence, the greedy algorithm is not optimal for Gn with n ≥ 4.

We now consider another class of problems where the property does hold. This is a generalization of Online Independent Set. We consider the problem Maximum Online Set. In this problem, an instance consists of a base set E and a set of forbidden subsets F ⊆ P(E). The forbidden subsets have the property that any superset of a forbidden subset is also forbidden. We let x = 〈x1, . . . , x|E|〉 denote the request sequence. There is a bijective function, f, mapping the xi's to E. For a set S = {xi, xj, xh, . . .}, we let f(S) denote {f(xi), f(xj), f(xh), . . .}. This function f is not known to the algorithm. In request i, the algorithm receives request xi. The request contains a list of all minimal subsets A ⊆ {x1, . . . , xi−1} such that f(A) ∪ {f(xi)} ∈ F (note that this list may be empty). The algorithm must reject or accept xi. The produced solution is said to be feasible if it does not contain any subsets from F. The score of a feasible solution is the number of accepted elements. The score of an infeasible solution is −∞. Note that if all minimal sets in F have size two, this is equivalent to Online Independent Set.

In Maximum Online Set, an isolated element is an element from E which is not in any set of F. Note that such an element can be added to any solution. We let s(E, F) denote a smallest S ⊆ E such that adding any element to S results in a set which contains a forbidden subset.

The greedy algorithm, GMOS, is the algorithm which always accepts a request if the resulting solution is feasible.
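A minimal sketch of GMOS (ours; we assume each request arrives as a pair of an identifier and the list of minimal forbidden sets it would complete):

# A minimal sketch of GMOS (assumption: each request arrives as a pair
# (i, minimal_sets), where minimal_sets lists the minimal subsets A of
# earlier requests such that A together with request i maps to a forbidden
# set; names are illustrative, not from the paper).

def gmos(requests):
    accepted = set()
    for i, minimal_sets in requests:
        # Accepting i keeps the solution feasible iff no minimal forbidden
        # set would consist entirely of already-accepted requests plus i.
        if not any(set(A) <= accepted for A in minimal_sets):
            accepted.add(i)
    return accepted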

For an algorithm, ALG, we let ALG(E, F) be the smallest number such that there exists an ordering of E which causes ALG to accept at most ALG(E, F) elements (using F as forbidden subsets). We let MSO(E, F) be the largest number such that there exists an algorithm with ALG(E, F) = MSO(E, F).

Theorem 7. Let (E, F) be a Maximum Online Set instance, and let E′ be E with the isolated elements removed. Let k denote the number of isolated elements. If k + |s(E′, F)| ≥ MSO(E′, F), then

    GMOS(E, F) = MSO(E, F).

Proof. This proof is similar to that of Theorem 3. First note that GMOS accepts the k isolated elements and at least |s(E′, F)| elements from E′. For any algorithm, the adversary can start by requesting elements, each with an empty list of forbidden sets it is already contained in. It continues until the algorithm has either accepted |s(E′, F)| elements or rejected k. The key argument is that the algorithm cannot distinguish between these initial elements.

If the algorithm accepts at least |s(E′, F)| elements, the adversary can decide that they were exactly those in s(E′, F), which GMOS also accepts. In this case, the algorithm cannot accept more than k + |s(E′, F)| elements in total.

If the algorithm rejects k elements, we need a result similar to that of Lemma 4. For Maximum Online Set, a pointless request is one which reveals a forbidden set that contains only elements that have been accepted. Accepting a pointless request would result in an infeasible solution. The same argument as in the proof of Lemma 4 shows that an adversary loses no power by being conservative. Thus, when k elements have been rejected (and up to |s(E′, F)| − 1 have been accepted), the adversary has a strategy for the remaining elements which ensures that the algorithm accepts at most MSO(E′, F) ≤ k + |s(E′, F)| elements.


This problem is quite flexible. As we have mentioned, it can model independent set, but it could also model matroid intersection problems such as bipartite matching (even with more than two matroids). In this case, the forbidden sets, F, are the dependent sets in the union of the matroids.

4.6 Implications for Worst Case Performance Measures

Do the results from the previous section mean that GIS is a good algorithm for Online Independent Set if the input graph is known to be a Freckle Graph? The answer to this depends on how the performance of online algorithms is measured. In general, the answer is yes if a measure that only considers the worst case is used.

The most commonly used performance measure for online algorithms is competitive analysis [139]. For maximization problems, an algorithm, ALG, is said to be c-competitive if there exists a constant, b, such that for any input sequence, I, OPT(I) ≤ c · ALG(I) + b, where OPT(I) is the score of the optimal offline algorithm. For minimization problems, we require that ALG(I) ≤ c · OPT(I) + b. The competitive ratio of ALG is inf{c : ALG is c-competitive}. For strict competitive analysis, the definition is the same, except there is no additive constant.

Another measure is on-line competitive analysis [81], which was introduced for online graph coloring. The definition is the same as for competitive analysis except that OPT(I) is replaced by OPTON(I), which is the score of the best online algorithm that knows the requests in I but not their ordering. For graph problems, this means that the vertex-arrival model is used, as in this paper. The algorithm is allowed to know the final graph.

Corollary 8. For Online Independent Set on Freckle Graphs, no algorithm has a smaller competitive ratio, strict competitive ratio, or on-line competitive ratio than GIS.

Proof. Let ALG be a c-competitive algorithm for some c. Theorem 3 implies that GIS is also c-competitive. This argument also holds for the strict competitive ratio and the on-line competitive ratio.

Corollary 9. For Online Vertex Cover on Freckle Graphs, no algorithm has a smaller competitive ratio, strict competitive ratio, or on-line competitive ratio than GVC.

Corollary 10. For Online Dominating Set on the class of graphs with at least one isolated vertex, no algorithm has a smaller competitive ratio, strict competitive ratio, or on-line competitive ratio than GDS.

Similar results hold for relative worst order analysis [38]. According to relative worst order analysis, for minimization problems in this graph model, one algorithm, A, is at least as good as another algorithm, B, on a graph class if, for all graphs G in the class, A(G) ≤ B(G). The inequality is reversed for maximization problems. It follows from the definitions that if an algorithm is optimal with respect to on-line competitive analysis, it is also optimal with respect to relative worst order analysis. This was observed in [42]. Thus, the above results show that the three greedy algorithms in the corollaries above are also optimal on Freckle Graphs under relative worst order analysis.

4.7 A Subclass of Freckle Graphs Where Greedy Is Not Optimal (Under Some Non-Worst Case Measures)

Although these greedy algorithms are optimal with respect to some worst case measures, this does not mean that they are always the best choice for all Freckle Graphs. There is a subclass of Freckle Graphs where another algorithm is objectively better than GIS, and bijective analysis and average analysis [7] reflect this.

Theorem 8. There exists an infinite class of Freckle Graphs G = {Gn | n ≥ 2} and an algorithm Almost-GIS such that for all n ≥ 2 the following holds:

∀φ Almost-GIS(φ(Gn)) ≥ GIS(φ(Gn))

∃φ Almost-GIS(φ(Gn)) > GIS(φ(Gn))

Proof. Consider the graph Gn = (Vn, En), where

    Vn = {x1, x2, . . . , xn, y1, y2, . . . , yn, z, u1, u2, . . . , un}
    En = {(xi, yi), (yi, z), (z, ui) | 1 ≤ i ≤ n}.

Figure 4.2 shows the graph G4.

Figure 4.2: The graph G4 (each xi is adjacent to yi, and z is adjacent to every yi and every ui).

We start by showing that Gn is a Freckle Graph. The smallest maximal independent set has size n + 1. We want to show that IO(Gn) = n + 1, that is, no algorithm can get an independent set of size more than n + 1 in the worst case. We consider an arbitrary algorithm, ALG, and the situation where the adversary starts by presenting n isolated vertices. If ALG rejects all of these, the adversary can decide that they were u1, . . . , un. In the remaining graph, it is not possible to accept more than n + 1 vertices. Otherwise, ALG accepts i > 0 of the n isolated vertices. The adversary can decide that one was z and that the remaining were x1, . . . , xi−1. Since ALG accepted z, it can never accept any of the vertices y1, . . . , yn or u1, . . . , un. Thus, it can accept at most n + 1 vertices. This shows that Gn is a Freckle Graph.

The algorithm Almost-GIS is identical to GIS, except that it rejects a vertex if it already has two neighbors when it is presented. Consider any ordering of the vertices of Gn where GIS and Almost-GIS do not accept the same independent set. There must exist a first vertex, w, which is accepted by one of the algorithms and rejected by the other. By definition of the algorithms, it must be the case that w is rejected by Almost-GIS and accepted by GIS. It must hold that w has two neighbors which have not been accepted by either algorithm. This can only happen if w = z and the two neighbors are yi and yj, where xi and xj have already been presented and accepted by both algorithms and no uk has been presented yet. In this case, z is accepted by GIS and rejected by Almost-GIS. However, u1, . . . , un are accepted by Almost-GIS and rejected by GIS. Since n ≥ 2 and since both GIS and Almost-GIS accept exactly one of xi and yi, 1 ≤ i ≤ n, we get that on every ordering, φ, where GIS and Almost-GIS accept a different independent set, Almost-GIS(φ(Gn)) > GIS(φ(Gn)). Such an ordering always exists (the ordering x1, . . . , xn, y1, . . . , yn, z, u1, . . . , un achieves this).
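For concreteness, Almost-GIS can be sketched as follows (same request convention as the GIS sketch in Section 4.2; illustrative only, not the paper's code):

# A minimal sketch of Almost-GIS (requests is an iterable of (v, neighbors)
# pairs, where neighbors lists the previously revealed neighbors of v).

def almost_gis(requests):
    accepted = set()
    for v, neighbors in requests:
        if len(neighbors) >= 2:
            continue  # reject: v already has two neighbors when presented
        if not any(u in accepted for u in neighbors):
            accepted.add(v)
    return accepted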

Competitive analysis, on-line competitive analysis, and the relative worst order ratio do not identify Almost-GIS as a better algorithm than GIS on the class of graphs G defined in the proof of Theorem 8. There are, however, other measures which do this. Bijective analysis and average analysis [7] are such measures. Let In be the set of all input sequences of length 3n + 1. Since we are considering the rather restricted graph class G, In denotes all orderings of the vertices in Gn (since these are the only inputs of length 3n + 1). For an algorithm A to be considered better than another algorithm B for a maximization problem, it must hold for sufficiently large n that there exists a bijection f : In → In such that the following holds:

∀I ∈ In A(I) ≥ B(f(I))

∃I ∈ In A(I) > B(f(I))

Theorem 9. Almost-GIS is better than GIS on the class G according to bijective analysis.

Proof. We let the bijection f be the identity, and the result follows from Theorem 8.

Average analysis is defined such that if one algorithm is better than another according to bijective analysis, it is also better according to average analysis. Thus, Almost-GIS is better than GIS on the class G according to average analysis.


Note that Almost-GIS is not an optimal algorithm for all Freckle Graphs. The class of graphs Kn,n, for n ≥ 2, consisting of complete bipartite graphs with n vertices on each side of the partition, is a class where Almost-GIS can behave very poorly. Note that on these graphs, GIS is optimal and always finds an independent set of size n, which is optimal, so these graphs are Freckle Graphs, even though they have no isolated vertices. If the first request to Almost-GIS is a vertex from one side of the partition and the next two are from the other side of the partition, Almost-GIS only accepts one vertex, not n.

4.8 Complexity of Determining the Online Independence Number, Vertex Cover Number, and Domination Number

Given a graph, G, it is easy to check if it has an isolated vertex and apply Theorem 4. However, Theorem 3 and Corollary 5 might not be as easy to apply, because it is not obvious how one can check if a graph is a Freckle Graph (k + |s(G′)| ≥ IO(G′)). In some cases, this is easy. For example, any graph where at least half the vertices are isolated is a Freckle Graph. We leave the hardness of recognizing Freckle Graphs as an open problem, but we show a hardness result for deciding if IO(G) ≤ q.

Theorem 10. Given q ∈ N and a graph, G, deciding if IO(G) ≤ q is NP-hard.

Proof. Note that it is NP-complete to determine if the minimum maximal independent set of a graph, G = (V, E), has size at most L, for an integer L [76]. To reduce from this problem, we create Ḡ = (V̄, Ē), which is the same as G but has |V| extra isolated vertices, and a bound L̄ = L + |V|. Ḡ is a Freckle Graph, since |V| ≥ IO(G). By Corollary 4, GIS(Ḡ) = IO(Ḡ). Since GIS(Ḡ) = |s(G)| + |V|, the original graph, G, has a minimum maximal independent set of size at most L if and only if Ḡ has online independence number at most L̄.
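The reduction itself is easily made explicit (a sketch with illustrative names; a graph is a pair of vertex and edge lists):

# A minimal sketch of the reduction in Theorem 10 (names are ours).

def reduce_to_online_independence(G, L):
    vertices, edges = G
    extra = [f"iso_{i}" for i in range(len(vertices))]  # |V| isolated vertices
    G_bar = (list(vertices) + extra, list(edges))
    # Ask whether IO(G_bar) <= L + |V|.
    return G_bar, L + len(vertices)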

The hardness of computing the online independence number implies the hardness of computing the online vertex cover number.

Corollary 11. Given q ∈ N and a graph, G, deciding if VO(G) ≥ q is NP-hard.

Proof. This follows from Observation 3 and Theorem 10.

Theorem 11. Given q ∈ N and a graph, G, deciding if DO(G) ≥ q is NP-hard.

Proof. We make a reduction from Independent Set. In Independent Set, a graph, G, and an L ∈ N are given. It is a yes-instance if and only if there exists an independent set of size at least L. We reduce instances of Independent Set, (G, L), to instances of Online Dominating Set, (Ḡ, L̄), such that there exists an independent set in G of size at least L if and only if DO(Ḡ) ≥ L̄. The reduction is very simple. We let Ḡ be the graph which consists of G with one additional isolated vertex. We set L̄ = L + 1. Assume first that any independent set in G has size at most L − 1. This means that any independent set in Ḡ has size at most L. Since GDS produces an independent set, it will accept at most L < L̄ vertices. Assume now that there is an independent set of size at least L in G. Then, there exists an independent set of size at least L + 1 in Ḡ. If these vertices are presented first, GDS will accept them. From Theorem 4, we get that no algorithm for Online Dominating Set can do better (since Ḡ has an isolated vertex), which means that DO(Ḡ) ≥ L̄.

Theorem 12. Given q ∈ N and a graph, G, the problem of deciding if IO(G) ≤ q is in PSPACE.

Proof. Let q ∈ N and a graph, G = (V, E), be given. We sketch an algorithm using only polynomial space which decides if IO(G) ≤ q. We view the problem as a game between the adversary and the algorithm, where the algorithm wins if it gets an independent set of size at least q + 1. A move for the adversary is revealing a vertex along with edges to a subset of the previous vertices such that the resulting graph is an induced subgraph of G. These moves are possible to enumerate, since Induced Subgraph can be solved in polynomial space. A move by the algorithm is accepting or rejecting that vertex. We make two observations: the game has only polynomial length (each game has length 2|V|), and it is always possible in polynomial space to enumerate the possible moves from a game state. Thus, an algorithm can traverse the game tree using depth first search and recursively compute for each game state whether the adversary or the algorithm has a winning strategy.
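The search itself is a standard alternating depth-first traversal; a sketch with the move enumeration left as a black box (all names are ours, and the polynomial-space move enumerators are assumed, as in the proof):

# A minimal sketch of the depth-first game search from the proof
# (assumptions: moves(state, player) enumerates legal moves in polynomial
# space, is_over detects finished games, and alg_wins checks whether the
# algorithm ended with an independent set of size at least q + 1).

def algorithm_wins(state, player, moves, is_over, alg_wins):
    # Space usage is only the current root-to-leaf path of the game tree.
    if is_over(state):
        return alg_wins(state)
    nxt = "algorithm" if player == "adversary" else "adversary"
    if player == "adversary":
        # The adversary picks the reveal that is worst for the algorithm.
        return all(algorithm_wins(s, nxt, moves, is_over, alg_wins)
                   for s in moves(state, player))
    # The algorithm needs only one good accept/reject choice.
    return any(algorithm_wins(s, nxt, moves, is_over, alg_wins)
               for s in moves(state, player))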

Similar proofs can be used to show that the problems of deciding if VO(G) ≥ q and DO(G) ≥ q are in PSPACE as well. It remains open whether these problems are NP-complete, PSPACE-complete, or neither. In [116], it was shown that determining the online chromatic number is PSPACE-complete if the graph is pre-colored; this was extended in [33] to hold even if the graph is not pre-colored.

4.9 Concluding Remarks

A strange difference between online and offline algorithms is observed: adding isolated vertices to a graph can change an algorithm from not being optimal to being optimal (according to many measures). This holds for Online Independent Set, Online Vertex Cover, and Online Dominating Set. It is also shown that adding isolated elements can make the natural greedy algorithm optimal for Online Matching, Online Disjoint Path Allocation, and Maximum Online Set (which includes Online Matroid Intersection as a special case), but not for all problems in the class AOC.

It is even more surprising that this difference occurs for vertex cover than for independent set, since in the offline case, adding isolated vertices to a graph can improve the approximation ratio in the case of the independent set problem. It is hard to see how adding isolated vertices to a graph could in any way help an offline algorithm for vertex cover.

We have shown that for Freckle Graphs, the greedy algorithm is optimal for Online Independent Set, but what about the converse? If a graph is not Freckle, is it the case that the greedy algorithm is not optimal? Let G be a graph that is not a Freckle Graph. By definition, we have that IO(G′) > |s(G′)| + k = GIS(G). To show that the greedy algorithm is not optimal, we would have to show that IO(G) > GIS(G). To show this, it would suffice to show that IO(G) ≥ IO(G′). That is, the online independence number can never decrease when isolated vertices are added to a graph. We leave this as an open question.

Note that GIS = GDS. This means that for Freckle Graphs with at least one isolated vertex, GIS is an algorithm which solves both online independent set (a maximization problem) and online dominating set (a minimization problem) online optimally. This is quite unusual, since the independent sets and dominating sets it will find in the worst case can be quite different for the same graph.

As mentioned earlier, the NP-hardness results presented here do not answer the question as to how hard it is to recognize Freckle Graphs. This is left as an open problem.

We have shown it to be NP-hard to decide if IO(G) ≤ q, VO(G) ≥ q, and DO(G) ≥ q, but there is nothing to suggest that these problems are contained in NP. They are in PSPACE, but it is left as an open problem whether they are NP-complete, PSPACE-complete, or somewhere in between.

Acknowledgments

The authors would like to thank Lene Monrad Favrholdt for interesting and helpful discussions. This research was supported by grants from the Villum Foundation (VKR023219) and the Danish Council for Independent Research, Natural Sciences (DFF–1323-00247). The second author was also supported by a travel stipend from the Stibo-Foundation.


Part II

Online Algorithms with Advice


CHAPTER 5

Online Algorithms with Advice: A Survey

Joan Boyar and Lene M. Favrholdt and Christian Kudahl and Kim S. Larsen and Jesper W. Mikkelsen

University of Southern Denmark, Odense, Denmark
{joan,lenem,kudahl,kslarsen,jesperwm}@imada.sdu.dk

Abstract

Online algorithms with advice is an area of research where one attempts to measure how much knowledge of the future is necessary to achieve a given competitive ratio. The lower bound results give robust bounds on what is possible using semi-online algorithms. On the other hand, when the advice is of an obtainable form, algorithms using advice can lead to semi-online algorithms. There are strong relationships between advice complexity and randomization, and advice complexity has led to the introduction of the first complexity classes for online problems.

This survey concerning online algorithms with advice explains the models, motivates the study in general, presents some examples of the work that has been carried out, and includes a fairly complete set of references, organized by problem studied.

5.1 Introduction

Online algorithms solve optimization problems where the input is a finite request sequence, with one request arriving at a time. On receiving a request, the online algorithm must make some irrevocable decision, generally without any knowledge of future requests, attempting to minimize or maximize some objective function. There are various measures for the quality of online algorithms [43, 63], but the most standard is the competitive ratio [98, 139], which is essentially the approximation ratio. The performance of an online algorithm Alg is compared to the performance of an optimal offline algorithm Opt. Let Alg(I) denote the value of the objective function applied to the output computed by Alg when given the request sequence I as input. Define Opt(I) similarly. For minimization problems, Alg is c-competitive if there exists a constant b, such that for all finite request sequences, I,

    Alg(I) ≤ c · Opt(I) + b.

Similarly, for maximization problems,

    Opt(I) ≤ c · Alg(I) + b.

In both cases, if the inequality holds with b = 0, the algorithm is strictly c-competitive. The (strict) competitive ratio of an algorithm is the infimum over all values of c for which the algorithm is (strictly) c-competitive.

¹ Supported in part by the Danish Council for Independent Research, Natural Sciences, grant DFF-1323-00247 and the Villum Foundation.

Note that competitive analysis is a worst-case measure. Thus, it can be useful to think of the input as being generated by a malicious adversary who knows Alg. When studying an online problem, it is customary to consider both deterministic and randomized online algorithms. A randomized online algorithm Alg is c-competitive if it is c-competitive in expectation, i.e., if there exists a constant b such that E[Alg(I)] ≤ c · Opt(I) + b for all inputs I. This corresponds to the adversary being oblivious to the random choices made by the algorithm. See [20, 34] for further details.

We use n to denote the length of an input sequence. In some cases the competitiveness, c, is a function of n. When referring to optimal algorithms or solutions, we always refer to a solution which could have been produced by an optimal offline algorithm.

Advice complexity. Note that there are three basic assumptions underlying competitive analysis: the input is adversarial, decisions are irrevocable, and an online algorithm knows nothing about the requests before they arrive. Many possible ways of relaxing one or more of these assumptions have been studied. In the advice complexity model, the "no knowledge" assumption is relaxed in a problem-independent and quantitative way (while the first two assumptions remain unaltered). In this model, an online algorithm with advice is provided with some bits of advice about the request sequence I. These bits are provided by a trusted oracle that knows the entire request sequence and has no computational limitations (the formal definition of the advice complexity model(s) can be found in Section 5.2). Obviously, an online algorithm with advice may perform better than a traditional online algorithm, but if the amount of advice it receives from the oracle is bounded, it may perform less well than an optimal offline algorithm. The advice complexity of an algorithm is the maximum number of bits read by that algorithm on any request sequence of a given length.

Motivation. Given an online problem considered in the advice model, the major question asked is:

    How many bits of advice are necessary and sufficient to obtain a competitive ratio c?


This includes determining the number of bits needed to become optimal (strictly 1-competitive) or to beat the best deterministic or randomized algorithms. It also includes considerations in the other direction, such as determining what can be obtained using a constant number of bits, for instance.

In what follows, we have attempted to list the most important reasons the advice complexity model is interesting and relevant.

• Lower bounds give robust bounds on what is possible using semi-online algorithms. The value of certain upper bounds, on the other hand, may be questioned for the following reason. Since the advice is not restricted except by its size, it can happen that algorithms with advice are not of practical interest; they sometimes rely on advice that we do not expect to possess. However, lower bounds in the advice complexity model are very strong, exactly because we do not impose any restrictions on the type of advice: they apply to any possible information about the request sequence that can be encoded using a sufficiently small number of bits. Thus, they can be very relevant for the study of semi-online algorithms (see Section 5.3).

• There are strong connections between online algorithms with advice and randomized online algorithms. For example, important open problems regarding randomized online algorithms (such as the best possible competitive ratio of a randomized k-Server or List Update algorithm) can be stated equivalently as problems about online algorithms with advice. Some results on online algorithms with advice lead to new lower and/or upper bounds on randomized online algorithms (see Section 5.4).

• It may be possible to use online algorithms with advice in settings where it is feasible to run multiple algorithms and output the best solution. For example, Boyar et al. [44] gave an algorithm using two bits of advice to choose between three algorithms for List Update, obtaining a competitive ratio better than any deterministic online algorithm. In using a List Update algorithm as a post-processing step of the Burrows-Wheeler Transform, the algorithm performing the compression can compare the results obtained from more than one algorithm, choose the best, and include as part of the compressed data which algorithm was actually used; see Kamali and López-Ortiz [97].

• Suppose that an online algorithm with b bits of advice runs in time O(T(n)). Then one may convert the algorithm into an offline approximation algorithm with time complexity O(2^b · T(n)) by running the algorithm on all possible 2^b advice strings (see the sketch after this list). For Reordering Buffer Management, the currently fastest (1 + ε)-approximation algorithm is obtained in exactly this way, by using an online algorithm with advice due to Adamaszek et al. [2].

• Online algorithms with advice may be viewed as non-deterministic online algorithms, since one may think of the online algorithm as non-deterministically guessing the advice which it then uses to compute its output. Thus, the advice complexity of a problem measures the amount of non-determinism an online algorithm needs to achieve a given solution quality. Understanding the power of non-determinism (as compared to determinism and randomization) is one of the main challenges and most well-studied problems in theoretical computer science (P vs. NP, DFA vs. NFA, etc.). It seems natural to try to improve our understanding of how non-determinism may help when solving problems in an online environment.

• The first complexity classes for online algorithms have been based on advice complexity (see Section 5.7.3). The first class, Asymmetric Online Covering (AOC), contains many problems where the algorithm's irrevocable decisions are whether or not to accept or reject each request. All AOC-complete problems, such as Vertex Cover, Independent Set, Dominating Set, Cycle Finding, and Disjoint Path Allocation, have essentially the same advice complexity (linear in n/c, where c is the desired competitive ratio). Weighted versions of AOC-complete minimization problems are even harder. These complexity classes are not only interesting with respect to advice; in the online setting without advice, the complete problems are also exceptionally hard.
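The conversion mentioned in the fourth bullet above can be sketched as follows (all names are illustrative, not from a library; alg is any online algorithm taking an advice bit string):

# A minimal sketch of the advice-to-offline conversion: run the online
# algorithm on every advice string of length b and keep the best solution.
from itertools import product

def offline_from_advice(alg, instance, b, score):
    best = None
    for bits in product((0, 1), repeat=b):  # every advice string of length b
        solution = alg(instance, bits)
        if best is None or score(solution) > score(best):
            best = solution
    return best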

We now give two examples of simple advice complexity results. Note that the minimum number of bits required to encode the decisions Opt makes is an obvious upper bound for the amount of advice needed to be optimal. Sometimes, though, encoding Opt's decisions requires fewer bits than one first expects. Paging is an example of this.

Example 1: In Paging, there is a set of N pages. A request sequence arrives online; each request is a page. The algorithm has a cache which starts out empty and can contain up to k < N pages. When a page not in cache is requested, the page must be brought into cache, at a cost of 1. This is referred to as a page fault. If the cache is already full, the algorithm must select another page from its cache to evict to make room for the new page; this is the irrevocable online decision.

The optimal offline Paging algorithm is Longest Forward Distance (Lfd) [19], which always evicts the page that will not be requested for the longest time. For deterministic online Paging algorithms without advice, the best attainable competitive ratio is k [139], and for randomized algorithms it is H_k [1, 125], where H_k ≈ ln k is the kth harmonic number.

How many advice bits does an algorithm need to be optimal? Clearly, n⌈log k⌉ bits of advice are enough to simulate Lfd by specifying the index in cache of the page to evict (if any) at each request. However, using a more clever encoding, one can obtain the following result:

Theorem 13 (Dobrev et al. [60]). There is an optimal Paging algorithm, Alg, which reads n bits of advice.

Proof. Using a fixed optimal solution for the given input, the oracle provides one bit of advice per request. That bit indicates whether or not, in the optimal solution, the page requested is kept in cache until the next time it is requested. Alg only evicts pages which will cause faults on their next request in the optimal solution as well. Thus, Alg is optimal.
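A minimal Python sketch of this one-bit-per-request algorithm (the data representation and function name are ours; the advice bits are assumed to be consistent with a fixed optimal solution, as in the proof):

```python
def paging_with_advice(requests, advice, k):
    """Serve a Paging instance with one advice bit per request; returns the
    number of page faults. advice[i] == 1 means the optimal solution keeps
    the page requested in round i until its next request."""
    cache = set()   # pages currently in cache
    keep = set()    # pages Alg must not evict yet
    faults = 0
    for page, bit in zip(requests, advice):
        if page not in cache:
            faults += 1
            if len(cache) == k:
                # With advice consistent with an optimal solution, some
                # cached page is always unmarked and safe to evict.
                victim = next(p for p in cache if p not in keep)
                cache.remove(victim)
            cache.add(page)
        # The advice bit for this request (re)marks the page.
        keep.discard(page)
        if bit == 1:
            keep.add(page)
    return faults
```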


Figure 5.1 shows that for the question of the number of advice bits necessary and sufficient to achieve a certain competitive ratio, the “phase transitions” are essentially completely understood for Paging. Mikkelsen [128] proved the following thresholds: For any fixed cache size k, a large but constant total number of advice bits is sufficient to achieve a competitive ratio of H_k + ε (for any ε > 0), and a linear number of advice bits is necessary to be better than H_k-competitive.

The connection between advice complexity and randomization is key to proving both the upper and lower bounds of H_k. For more details on Paging, see Section 5.8.

[Figure: competitive ratio (values 1, H_k, O(k/2^b + b), k/2, and k) plotted against the number of advice bits (0, 1, 2, b, log k, f(k), Θ(n), n).]

Figure 5.1: The (asymptotic) trade-off between competitive ratio and advice for Paging. The function f(k) is a rapidly growing function of k (but does not depend on n). Consider a trade-off point (b, c) where b is a number of advice bits and c is a competitive ratio. The red area shows those trade-offs which provably cannot be achieved. The green area shows those trade-offs that we currently have algorithms achieving. It is an open problem if trade-offs in the white area are achievable or not. The horizontal dashed lines are the best possible competitive ratios of deterministic and randomized algorithms without advice.

Another problem with a sharp phase transition is Uniform Knapsack.

Example 2: In Uniform Knapsack, a sequence of requests arrives online. Each request is a value in the range (0, 1]. When a request arrives, the online algorithm decides irrevocably whether to pack it in the knapsack or reject it. The total size of accepted requests is not allowed to exceed 1, the size of the knapsack. The goal is to maximize the sum of the sizes of the accepted requests. Uniform Knapsack is the special case of the standard knapsack problem where the size and the value of requests are always equal to each other.


The problem is analyzed using strict competitive analysis, since setting the additive constant in the definition of competitiveness equal to 1 would make any algorithm 1-competitive.

A deterministic algorithm without advice for Uniform Knapsack has unbounded competitive ratio [123]. However, with just one bit of advice it is possible to be 2-competitive. The one bit of advice is used to indicate whether or not there is an item in the input sequence of size at least 1/2. That information might actually be available in some applications, so it can also be viewed as a semi-online algorithm; an online algorithm which knows something about the request sequence in advance.

Theorem 14 (Böckenhauer et al. [30]). There exists a 2-competitive Uniform Knapsack algorithm which reads one bit of advice.

Proof. The oracle writes a 0 on the advice tape if no request of size at least 1/2 will arrive and a 1 otherwise. The algorithm reads this one bit, b, of advice. If b = 0, it packs each request if it has enough space left for it. If b = 1, it rejects everything until it encounters an item of size at least 1/2, which it packs (it may pack additional items that fit after this point).

For b = 0, if the total size of all requests arriving is less than 1, the algorithm will be optimal; otherwise, its knapsack will be at least half full the first time it rejects a request. If b = 1, the knapsack will again be at least half full. Thus, the algorithm is 2-competitive.
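A Python sketch of this algorithm (the names are ours; the advice bit is assumed to be set as in the proof):

```python
def knapsack_with_one_bit(requests, b):
    """Serve Uniform Knapsack online with one advice bit b; returns the
    total packed size. b == 1 promises that some request has size >= 1/2."""
    total = 0.0
    waiting_for_big = (b == 1)    # reject until the promised big item arrives
    for size in requests:
        if waiting_for_big:
            if size >= 0.5:       # the promised item of size at least 1/2
                total += size
                waiting_for_big = False
        elif total + size <= 1:   # greedy packing whenever the item fits
            total += size
    return total
```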

Böckenhauer et al. [30] also prove that to obtain a competitive ratio better than 2, Ω(log n) advice bits are required. See Section 5.9 for more about the knapsack problem.

Organization of survey. First, we introduce the advice models in Section 5.2. Then, we discuss the relationship between advice and semi-online algorithms in Section 5.3.

The strong connections between advice complexity and randomization, showing that results in either area can often be carried over to the other, are discussed in Section 5.4. Some of the techniques which can be used in designing online algorithms with advice are discussed in Section 5.5, and lower bound techniques are discussed in Section 5.6.

A specific frequently used lower bound technique is based on String Guessing and its variants, which can also sometimes be used for proving upper bounds. These problems are discussed in Section 5.7, along with the first complexity classes for online algorithms, developed based on String Guessing results.

Note that all problems discussed in this survey are online unless explicitly stated otherwise. Metrical Task System problems, including k-Server and Paging, are discussed in Section 5.8. Bin Packing, Scheduling, and further results on Uniform Knapsack are discussed in Section 5.9. Graph Coloring is discussed in Section 5.10 and Graph Exploration in Section 5.11. Problems studied using advice complexity, along with references, are listed in the appendix.


5.2 Advice Models

In this section, we define advice models and describe the historical development. We compare the models and also discuss alternative views on what an advice model represents.

All models make use of a trusted oracle that knows the entire request sequence and has unlimited computational power. Bits that we refer to as advice bits are supplied to the algorithm by the oracle in some manner. These bits can be assumed to give a correct answer to any question the online algorithm poses. For example, an online algorithm could pose the question of how many future requests there are of a certain type and interpret the bits that are made available as the number of interest. Note that the oracle knows the online algorithm, so the questions are not explicitly asked; the oracle simply writes the answers and the algorithm reads them.

The term advice complexity for online algorithms was coined by Dobrev, Královič, and Pardubská [60]. They suggest two models, referred to as the helper mode and the answerer mode. In the helper mode, the online algorithm receives a number of advice bits, which could be zero, prior to processing each request. The advice complexity is defined to be the total number of bits received from a perfectly designed oracle for the online algorithm to be optimal. The answerer mode is similar, except that advice bits are only given when requested by the online algorithm, in which case at least one bit is given. Note that the length itself of the bit sequence given as response to a request for advice may transfer information in both the helper and the answerer mode.

Allowing the online algorithm to gain knowledge from not receiving any bits (or, in general, receiving a varying number of bits) may be reasonable in some applications (see [60] for a discussion), but it also introduces an additional complication which is not always desirable. Following the introduction of online algorithms with advice in [60], two other models were suggested, both avoiding this complication in different ways.

One model was introduced by Böckenhauer, Komm, Královič, Královič, and Mömke [29]. They suggest using an infinite advice tape, written by the oracle; we refer to this model as the Tape Model. The online algorithm may consult this advice tape at its discretion, and the advice complexity is simply the number of bits read. The term “tape” is likely suggested by tradition; the important properties are that the algorithm has an a priori unbounded supply of bits that it can receive one at a time on request, and that there is no indication of an “end”, i.e., it is the algorithm that stops asking for bits, not the supply that runs out. In other words, the Tape Model is similar to the answerer mode of [60], except that the algorithm must specify how many bits it wants to receive when asking for advice.

Another model was introduced by Emek, Fraigniaud, Korman, and Rosén [68]. They define a universe, U, of all possible answers, assume that ⌈log |U|⌉ advice bits are given to the online algorithm with every request, and define the advice complexity to be ⌈log |U|⌉. Phrased in terms of the first models from [60], this corresponds to an advice complexity of ⌈log |U|⌉ · n, where n is the length of the request sequence. Thus, to obtain a good advice complexity, the size of the universe must be minimized, which is equivalent to using as few advice bits per request as possible. We refer to this model as the Per Request Model. The optimal solution for Paging discussed in the introduction falls naturally into this model.

In the Per Request Model, any algorithm employing advice uses at least a linear number of bits, making it impossible to study situations where a lack of information can be overcome using a sublinear number of advice bits (which is possible in the previously discussed models). Algorithms with sublinear advice are of significant interest for Bin Packing (see Section 5.9) and several other online problems.

Earlier than [60], a similar notion of advice complexity was introduced by Fraigniaud, Ilcinkas, and Pelc [74] in the setting of graph exploration (see Section 5.11), rather than for traditional online algorithms. Here, all the advice is given in the beginning, and the algorithm learns the length of the advice.

Unless explicitly stated otherwise, the results in this survey are in the Tape Model.

Further Technical Details. Now, we discuss some technical details that, although they can be useful to know, are not essential to get an overview of the models.

Reading the advice from an infinite tape (as opposed to receiving a fixed number of bits) as in the Tape Model comes at a small price. If an online algorithm wants to read a number of bits encoding an integer X without a (good) known upper bound, the number of bits to be read must also be provided as information. The standard technique for this is to use a so-called self-delimiting encoding (also known as a prefix code), as in [28]. For example, one may write ⌈log(X + 1)⌉ in unary (using ones), then a zero as a delimiter, followed by X in binary, using 2⌈log(X + 1)⌉ + 1 bits in total (this is similar to Elias gamma coding [67]). Slightly more efficient encodings may be obtained by iterating this construction. The next iteration (similar to Elias delta coding) uses log X + 2 log log X + O(1) bits to encode an integer X. However, by Kraft's inequality [53], there does not exist a self-delimiting encoding of the integers using, for example, log X + log log X + O(1) bits, and so we cannot obtain significantly better encodings.
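A small Python sketch of the first encoding described above (the function names are ours):

```python
def encode_self_delimiting(x):
    """Encode a non-negative integer X as: ceil(log(X+1)) ones, a zero
    delimiter, then X in binary -- 2*ceil(log(X+1)) + 1 bits in total."""
    bits = bin(x)[2:] if x > 0 else ""   # len(bits) == ceil(log2(x + 1))
    return "1" * len(bits) + "0" + bits

def decode_self_delimiting(tape):
    """Read one integer from the start of a bit string; returns the integer
    and the number of tape bits consumed."""
    length = 0
    while tape[length] == "1":           # unary part gives the payload length
        length += 1
    payload = tape[length + 1 : length + 1 + length]
    value = int(payload, 2) if payload else 0
    return value, 2 * length + 1
```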

In the model of [74], using a self-delimiting encoding may be unnecessary, since all of the advice is given at the beginning and the algorithm learns its length. Furthermore, their oracle is able to send Σ_{i=0}^{b} 2^i = 2^{b+1} − 1 different advice strings using at most b bits of advice.

Most lower bounds stated in the Tape Model in the literature are in reality shown in a model similar to that of [74], i.e., they do not exploit the fact that the algorithm does not know the length of the advice. This means that upper bounds often contain a logarithmic lower-order term which is not present in the lower bounds.

Some bounds transfer between the Per Request Model and the Tape Model. An upper bound from the Per Request Model of b bits for each of n requests gives an upper bound of bn bits in the Tape Model (assuming that b is known to the algorithm; otherwise, bn + O(log b) bits may be required). Similarly, a lower bound stating that b bits are necessary in the Tape Model implies that at least ⌈b/n⌉ bits per request are required in the Per Request Model.

Allowing the algorithm random access to the tape in the Tape Model (as opposed to sequential access) does not make a difference: Since the oracle knows both the algorithm and the input when preparing the advice tape, it can predict which bits the algorithm would access in a random access model and simply place them first sequentially on the advice tape.

Comparison to other computational models. The traditional approach of providing an online algorithm with a specific type of knowledge is discussed in detail in Section 5.3 on semi-online algorithms.

Hromkovič et al. [92] proposed parameterizing the Tape Model with an upper bound on the running time of the algorithm to obtain an analogue of resource-bounded Kolmogorov complexity. These ideas do not appear to have been investigated much yet.

As mentioned in [60, 68], the advice complexity model for online problems is similar to an earlier advice complexity model for distributed computing [75]. There, the question was how much advice the nodes in a network need in order to complete some task using as little communication as possible (such as broadcasting, leader election, or coloring the nodes of the network).

Note that what is traditionally called a Turing machine with advice (see [9], for example) does not correspond well to an online algorithm with advice. A Turing machine with advice receives advice which may only depend on the length of the input, not the input itself.

5.3 Relationship to Semi-Online Algorithms

A major motivation for considering advice complexity is the relationship it has to semi-online algorithms. In the literature, the term “semi-online” is used for many quite different types of problems. For example, a semi-online algorithm may have a look-ahead, i.e., the ability to see some of the future requests; the algorithm may be allowed to postpone some decisions or modify some of them after arrival of more input; or the algorithm may be allowed to make assumptions about the request sequence, such as non-increasing sizes. These types of semi-online problems have little known relation to advice complexity. Those that do are the type that either assume some advance knowledge about the input or maintain more than one solution and choose the best solution at the end.

5.3.1 Assuming Advance Knowledge

Having advance knowledge available to a semi-online algorithm corresponds to advice from an oracle in the advice complexity setting. Thus, depending on the type of advice an oracle provides, an online algorithm with advice can be seen as a semi-online algorithm. Uniform Knapsack, mentioned in the introduction, is a good example of where the advice model can lead to potentially practical semi-online algorithms; it is only necessary to know if there exists an item of size at least 1/2. Similarly, there is an online algorithm with advice for Bin Packing, where only knowledge of the number of items with sizes in (1/2, 2/3] is necessary (see Section 5.9).

Lower bounds on advice complexity, on the other hand, can give proofs that no good semi-online algorithm (of a certain type) exists. For example, a linear (or even super-logarithmic) lower bound on the advice necessary to obtain a competitive ratio of c shows that knowing the number of requests, which would only require a logarithmic number of bits of advice, cannot be sufficient to obtain a competitive ratio of c. At the same time, it would also rule out many other semi-online algorithmic possibilities.

We present some examples to show the interest in semi-online algorithms assuming advance knowledge and show some of the types of advance knowledge that have been considered. Most of the work of this type focuses on scheduling problems (see Section 5.9 for definitions of scheduling and the makespan objective), and much of it has been for cases where the number of machines is a small constant. In the examples we give, the number of machines, m, is unbounded.

For Scheduling on identical machines for makespan, Fleischer and Wahl [71] present an upper bound of 1.9201 on the competitive ratio of deterministic algorithms, and Rudin reports a lower bound of 1.88 [136]. However, if a semi-online algorithm knows the total sum of processing times, algorithms can do better. A lower bound of 1.585 is proven in [3], and this lower bound is met by the algorithm in [100] when the number of machines tends to infinity. On the other hand, knowing the value of the optimal makespan, the problem becomes identical to Bin Stretching. This problem was introduced in [11], and the currently best lower bound, 1.3, was proven there. A 1.5-competitive algorithm for Bin Stretching was presented in [32].

For Scheduling preemptively on uniformly related machines for makespan, if the value of the optimal makespan is given in advance, an optimal schedule is possible [65]. If only an approximation to the optimal value is known, even for identical machines, the competitive ratio increases with both m and the uncertainty [94]. Note that in terms of advice complexity, more uncertainty would generally imply less advice.

Seiden et al. [138] present a best possible online algorithm for Scheduling preemptively on identical machines for makespan, assuming decreasing job sizes (a competitive ratio of about 1.36603), and remark that the assumption of decreasing job sizes can be replaced with knowledge of the size of the largest job.

As an example where quite a bit of advance knowledge (or advice) is used, fitting well into the Per Request Model, Scheduling parallel batches for makespan, with known arrival time of the first job among those remaining with the longest processing times, is considered in [155].

For Machine Covering (maximizing the minimum load), it was shown in [152] that the List Scheduling algorithm [78] is m-competitive; it is well known that this is best possible (see [10]). The ratio goes down to m − 1 (m ≥ 3) if either the total sum of processing times or the longest processing time is known [144]. Even if not all machines become available at the same time, the ratio goes down to m − 2 (m > 3) if both of these are known [93]. If the optimal value is known, the ratio is only 2 − 1/m [10].

5.3.2 Parallel Solutions

This model was considered by Albers and Hellwig [4] for Scheduling on identical machines for makespan. For example, one of their results is a (4/3 + ε)-competitive algorithm using (1/ε)^{O(log(1/ε))} parallel schedules. A corresponding (4/3 + ε)-competitive algorithm with advice would receive the index of the best of the (1/ε)^{O(log(1/ε))} parallel schedules from an oracle using O(log^2(1/ε)) bits of advice and perform the same computations as the algorithm with parallel schedules, but only using the schedule indexed by the advice. Similarly, any algorithm with b(n) bits of advice to achieve competitive ratio c can be converted into 2^{b(n)} algorithms, each giving a schedule, and choosing the best schedule will give a c-competitive result. Thus, advice complexity can conveniently be used to give lower bounds for parallel solutions approaches.

Maintaining parallel solutions was also considered for the independent set problem in [87] in a slightly different model. Their upper and lower bound results were asymptotically tight for this model. However, using advice complexity techniques, asymmetric string guessing, and the AOC-completeness (see Section 5.7) of the problem, both the upper and lower bounds were improved in [40], determining the exact constant for the high-order term in the number of parallel solutions.

5.4 Advice vs. Randomization

Before covering algorithmic techniques for advice more broadly, we discuss the strong connection to randomization as further motivation for studying advice complexity.

Derandomization using advice. It is trivial to see that if an online algorithm uses b random bits, then an at least as good deterministic algorithm using b advice bits also exists: The oracle chooses the random bits giving the best performance. However, it seems reasonable to ask for derandomization results not depending on the number of random bits used by the algorithm. Using derandomization techniques, Böckenhauer et al. [28] obtained the following result: Let I(n) denote the number of inputs of length n to some minimization online problem (later extended to maximization problems [26, 64, 128]). If there exists a randomized c-competitive algorithm without advice, then for every constant ε > 0, there exists a deterministic (c + ε)-competitive algorithm with advice complexity O(log n + log log I(n)). For a large number of online problems, the number of possible inputs of length n is at most 2^{n^{O(1)}}. Thus, for these problems, it is possible to convert any randomized algorithm into an (almost) equally good deterministic algorithm with advice complexity O(log n).


We remark that this result is essentially tight. It is shown by Mikkelsen [128] that for any increasing function I(n), there exist (pathological) online problems where Ω(log log I(n)) bits of advice are indeed needed for such a conversion. Thus, for online problems with large input spaces, it is possible that a lot of advice is required to simulate randomization. However, so far no one has stumbled upon a “natural” online problem (that is, a problem not specifically constructed for this purpose) where more than O(log n) bits of advice are needed to simulate randomization.

Finally, we note that this derandomization result can, of course, also be used to convert an algorithm which uses both advice and randomization into a deterministic algorithm with advice (see [128]). Therefore, randomized algorithms with advice are rarely studied explicitly.

Replacing advice bits with random bits. Intuitively, it might appear that having access to even a rather small number of advice bits provided by an omniscient oracle knowing the entire input should often be more powerful than simply having access to (any number of) random bits. Perhaps surprisingly, it turns out that for many important online problems, this is not the case.

Let us first consider the naive idea of simply running an algorithm with advice, Alg, with a tape full of random bits (instead of bits provided by an oracle). Call the resulting randomized algorithm Rand. It is easy to construct a pathological minimization problem where a single bit of advice yields an optimal algorithm while no randomized algorithm can achieve any meaningful competitive ratio (consider a problem where one of the first two requests should be chosen over the other, and either can have arbitrarily larger weight than the other). On the other hand, for a maximization problem with non-negative weights, the naive conversion will turn a c-competitive algorithm reading b bits of advice into a (c · 2^b)-competitive randomized algorithm. Indeed, for every input I, we have Rand(I) = Alg(I) with probability at least 1/2^b. Since scores cannot be negative, this implies that E[Rand(I)] ≥ Alg(I)/2^b.

It is possible to do significantly better than the naive conversion for a large class of important online minimization problems. In particular, it is possible to do better for any problem which can be modeled as a Metrical Task System (see Section 5.8). Before the introduction of advice models, this was studied as the problem of “combining online algorithms online”. In [24], Blum and Burch showed how to use the celebrated machine learning algorithm Randomized Weighted Majority to obtain the following result: For every ε > 0, it is possible to combine m algorithms for a Metrical Task System, Alg_1, . . . , Alg_m, into a single randomized algorithm, Rand, such that for every input I,

E[Rand(I)] ≤ (1 + ε) · min_{1≤i≤m} Alg_i(I) + O(∆ log m).

Here, ∆ is the normalized diameter of the underlying metric space. Note that if m ∈ O(1), then O(∆ log m) is just an additive constant. Thus, using our terminology, Blum and Burch show that for any Metrical Task System, a c-competitive algorithm with advice complexity O(1) can be converted into a (c + ε)-competitive randomized algorithm without advice! The result of Blum and Burch was later extended in [128] by showing that such a conversion is also possible if the algorithm uses o(n) bits of advice instead of constant advice. Together with the derandomization result, this gives a striking equivalence between advice and randomization for many online problems, including those mentioned in the following theorem:

Theorem 15 (Mikkelsen [128]). Let P be Metrical Task System, k-Server, List Update, Paging, or Dynamic Binary Search Tree and assume that the underlying metric/node set is finite. Let c be a constant independent of the input length. The following are equivalent:

• For every ε > 0, there exists a (c + ε)-competitive P algorithm with advice complexity o(n).

• For every ε > 0, there exists a (c + ε)-competitive randomized P algorithm without advice.

Note that for k-Server, for example, determining the best possible competitive ratio of a randomized algorithm is a long-standing open problem. In particular, the randomized k-Server conjecture states that for every metric space, there exists an O(log k)-competitive randomized algorithm [107]. It was noted in [28] that, due to the derandomization result, a sufficiently large advice complexity lower bound would disprove this conjecture. Theorem 15 shows that the randomized k-Server conjecture is in fact equivalent to the conjecture that there exists an O(log k)-competitive deterministic algorithm with advice complexity o(n) (assuming the underlying metric space is finite). See Section 5.8 for more information on k-Server.

5.5 Algorithmic Techniques

We discuss general techniques for designing algorithms with advice.

Derandomization using advice. It is often possible to convert a randomized online algorithm into a deterministic online algorithm reading O(log n) bits of advice. Section 5.4 was devoted to the treatment of the relationship between advice and randomization.

Adapting offline algorithms. It is sometimes possible to convert an existing (exact or approximation) offline algorithm into an online algorithm using a relatively small number of advice bits. This has been done for Bin Packing and Scheduling [135] and Multi-Coloring [51] (see Sections 5.9 and 5.10.2). It can also be possible to convert streaming algorithms, for example, into online algorithms with advice, as has been done for bipartite matching [64].

The now-or-later technique. The now-or-later technique is based on giving one bit of advice per request. The technique has been used for Paging as described in Example 1 in the introduction: Each time a page is requested, one bit of advice is given, indicating whether the requested page can safely be evicted the next time a page fault occurs, or if the algorithm should keep the page in cache until it has been requested at least once more.

Reordering Buffer Management is similar to paging: A buffer of a certain size is given, and the input is a sequence of items. For each request, if the buffer is full, an item must be removed from the buffer. Each item has a color, and if the evicted item has a color different from the previously evicted item, a cost of 1 is incurred. A slightly more complicated version of the now-or-later technique (using two advice bits per request to also include a “soon, but not now” option) was applied to Reordering Buffer Management in [2] (see also [133]), resulting in a 3/2-competitive algorithm, which was extended to a (1 + ε)-competitive algorithm using O(log(1/ε)) bits per request.

The follow-OPT technique. This technique was introduced in [68] and has been used for Metrical Task System and k-Server [28, 68, 134]. In these problems, there is a bounded number of possible states. With a lot of advice, it is possible to specify exactly which state the algorithm should be in after each request. With fewer bits, the idea is to specify the exact state as often as possible, ensuring that the state of the algorithm often coincides with the state of Opt. When serving those requests for which the precise state of Opt is not specified, the algorithm tries to be conservative and not make risky decisions.

Combinatorial designs. In many cases, the amount of advice needed to achieve a given competitive ratio is closely related to the minimum size of certain combinatorial structures. The idea is to “compress” the optimal set S of advice strings into a smaller set S′. The strings in S′ have the same length as those in S, and each string in S is “close to” some string in S′, i.e., each string in S′ can be thought of as representing a subset of S. The advice given is an index to a string in the smaller set S′. If the aim is simply to minimize the Hamming distance between each string in S and its representative in S′, covering codes can be used. However, in many cases, it must be ensured that all ones (or all zeros) in the string in S be present in its representative in S′. In this case, covering designs can be used. For example, upper bounds on the size of covering designs have been used to obtain algorithms with advice for Paging (see Section 5.8) and minASG (see Section 5.7.2). Similarly, upper bounds on the size of covering codes have been used to construct algorithms with advice for, for example, String Guessing (see Section 5.7.1) and Matching on paths and trees [99].

Note that since we generally do not restrict the running time of our online algorithms, the upper bounds on the size of the given combinatorial structure need not be constructive. This is important for the applications involving covering designs, for example, where good upper bounds proven via the probabilistic method exist, but where it is not known how to construct such covering designs efficiently (see [40] for details).

The warning signal technique. An obvious technique for designing algorithms with advice is to consider an online algorithm Alg without advice and try to use advice to pinpoint exactly when Alg makes mistakes. The idea is that simply warning the algorithm of mistakes that it is about to make might be much cheaper than telling the algorithm exactly what to do. This has been done for edge coloring of trees (see Section 5.10.2).

Exponential sparsification. For weighted problems where a good advice algorithm exists for the case where there are only few different weights, exponential sparsification can sometimes be used. The requests are grouped based on their weights into intervals ((1 + ε)^k, (1 + ε)^{k+1}] for k = −∞, . . . , ∞.

The first idea is to treat requests with weights in the same interval ((1 + ε)^k, (1 + ε)^{k+1}] as having weight (1 + ε)^{k+1}. For some problems, this gives only a small loss in competitive ratio for the algorithm. This idea was used in [135]. It has also been used for different variants of approximation problems (no advice involved), such as developing a PTAS for minimizing makespan in scheduling; see [147], for example.

The second idea is that requests with weights in an interval ((1 + ε)^k, (1 + ε)^{k+1}], for sufficiently small (or large) k (compared to that of the other requests), may be served in some simple way without using any advice, with only a small loss in competitive ratio. For example, for Weighted Independent Set, a policy could be to always reject vertices with a weight below some threshold. Depending on this threshold, this might only give a small loss in competitive ratio. Note that in the beginning, some scheme should be used to identify which requests have (relatively) small weights. This could for example involve using O(log n) bits to give the index of the first request which does not have a small weight.

Combining the two ideas, we now just need an algorithm (for the remaining requests) which solves the problem well when only few different weights are allowed. This approach was used in [41].
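A Python sketch of the grouping and thresholding steps (the helper names and the treatment of small weights are our own illustration; floating-point edge cases are ignored):

```python
import math

def weight_class(w, eps):
    """Index k of the interval ((1+eps)^k, (1+eps)^(k+1)] containing w > 0."""
    return math.ceil(math.log(w, 1 + eps)) - 1

def sparsify(weights, eps, threshold):
    """Round each weight up to the top of its class; weights at or below the
    threshold are marked for the simple no-advice policy instead."""
    rounded = []
    for w in weights:
        if w <= threshold:
            rounded.append(None)  # served in some simple way, without advice
        else:
            k = weight_class(w, eps)
            rounded.append((1 + eps) ** (k + 1))
    return rounded
```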

5.6 Lower Bound Techniques

We discuss general techniques for establishing lower bounds against algorithms with advice.

The pigeonhole technique. Construct a set of inputs, I, where |I| = m. Suppose that an algorithm reads at most b bits of advice on any input from I. By the pigeonhole principle, this algorithm must read the same advice for at least ⌈m/2^b⌉ of the inputs in I. Thus, it suffices to show that for any subset, I′ ⊂ I, of size at least ⌈m/2^b⌉ and any fixed deterministic algorithm (without advice), there is an input from I′ on which the algorithm performs poorly. In many cases, this is achieved by designing I such that all inputs have some common prefix. On this common prefix, a deterministic algorithm selected for I′, based on the advice, will always produce the same output. So, if different inputs in I′ require different outputs for the common prefix, this yields a lower bound. More generally, one may use a partition tree [16], where nodes in the tree represent sets of inputs with a common prefix. The pigeonhole technique is applied in [23, 28, 30, 45, 104, 135], for example.


The multiple algorithms technique. Any algorithm Alg reading b bits of advice can be converted into 2^b algorithms, Alg_1, . . . , Alg_{2^b}, without advice such that for every input I, Alg(I) = min_{1≤i≤2^b} Alg_i(I) for minimization problems (for maximization problems, min is replaced by max). Thus, we can get a lower bound by showing how an adversary can construct an input such that all of the 2^b algorithms perform poorly on that input. One can, for example, create an input in rounds, where each round ensures that some fraction of the algorithms perform poorly. This technique is applied in [40, 52, 105, 106, 127], for example.

The probabilistic method. Suppose that we are able to construct a probability distribution over a set of inputs I and show that for any deterministic algorithm without advice, the probability that the algorithm performs “well” is very small. Then this gives an advice complexity lower bound. For example, let Alg be an algorithm reading b bits of advice. Then Alg can be converted into 2^b deterministic algorithms, Alg_1, . . . , Alg_{2^b}, without advice (as done in the multiple algorithms technique). Assume that for every deterministic algorithm, Det, without advice, it holds that Pr[Det(I) ≤ c · Opt(I)] < δ, where I is drawn from I according to our input distribution. Then, by the union bound, this implies that Pr[Alg(I) ≤ c · Opt(I)] = Pr[min_{1≤i≤2^b} Alg_i(I) ≤ c · Opt(I)] ≤ 2^b δ. If 2^b δ < 1, then this implies that there exists an input I ∈ I such that Alg(I) > c · Opt(I), and hence Alg is not strictly c-competitive. The probabilistic method is applied in [15, 77, 128], for example. See also Section 5.7.1 for a simple but useful lower bound obtained via this technique.

Advice-preserving reduction. Suppose that we already have a lower bound on the advice complexity for a problem P. An easy way to obtain a lower bound on the advice complexity for a related problem P′ is to reduce P to P′ in a suitable way. A number of abstract guessing games have been introduced specifically with the purpose of serving as the starting point of such reductions (see Section 5.7).

Σ-repeatable online problems. It was shown by Mikkelsen [128] that for online problems which are “repeatable”, it is often possible to translate lower bounds for algorithms without advice into lower bounds for algorithms with sublinear advice. Informally, an online problem is Σ-repeatable if it is possible to combine r inputs I_1, . . . , I_r into a single input I = f(I_1, . . . , I_r) such that serving I essentially amounts to serving each I_i independently and adding the costs incurred. In particular, the way an algorithm serves I_1, . . . , I_{i−1} should not significantly affect how efficiently the algorithm can serve I_i. Paging is Σ-repeatable since one may simply concatenate the inputs I_1, . . . , I_r. The only dependency between the number of page faults of two different rounds is that our initial cache when serving the requests of I_i corresponds to our final cache when serving the requests of I_{i−1}. However, if we make sure that Opt(I_i) is much larger than the cache size, then this small dependency can essentially be ignored when proving lower bounds. A problem which is not Σ-repeatable is Bin Packing. Consider inputs I_1 = (1/2 − ε, . . . , 1/2 − ε) and I_2 = (1/2 + ε, . . . , 1/2 + ε), both of length n. While concatenating I_1 and I_2 does give a valid Bin Packing input I, if we pack the items of I_1 two per bin, then we have to open n new bins for serving the items of I_2. On the other hand, if we pack each item of I_1 in a separate bin, we may pack the items of I_2 without opening any new bins at all. Thus, the choice of how to serve the items of I_1 has a significant influence on the number of bins needed to serve the items of I_2. Of course, one might try to construct I = f(I_1, I_2) in a more clever way than just concatenating I_1 and I_2, but it can be shown that no choice of f will work for Bin Packing.

For a Σ-repeatable problem, we have the following result [128] (omitting some minor technical conditions): Let P be a Σ-repeatable online problem, where, for each n, the number of inputs of length n is finite. Suppose that a randomized algorithm without advice cannot be better than c-competitive, where c does not depend on the input length n. Furthermore, suppose that this lower bound holds even if the adversary has to reveal an upper bound on the length of the input in advance. Then, an algorithm reading o(n) bits of advice must have competitive ratio at least c.

The currently best known lower bounds (for algorithms with sublinear advice) for Paging, k-Server, List Update, Max-SAT, Unit Clustering, Bipartite Matching, and several other problems have been achieved by combining the result above with the currently best known lower bounds for randomized algorithms without advice [128].

∨-repeatable online problems. For a Σ-repeatable problem, the total cost has to be essentially the sum of costs incurred in each individual round. It is also possible to consider another collection of repeatable problems, where the total cost is the maximum cost incurred in a single round. We call such problems ∨-repeatable. For those problems, we have the following lower bound result [128]: Let P be an ∨-repeatable online problem. Suppose that a deterministic algorithm without advice cannot be better than c-competitive, where c does not depend on n. Furthermore, assume that the lower bound holds even if the algorithm knows Opt(I) in advance and knows an upper bound on the number of requests. Then no (possibly randomized) algorithm reading o(n) bits of advice can be better than c-competitive. This result is similar to the result for Σ-repeatable problems, but note that for ∨-repeatable problems, we only need a lower bound for deterministic algorithms without advice in order to apply the technique. On the other hand, for ∨-repeatable problems, we have an additional assumption regarding the cost of an optimal solution. This assumption turns out to be crucial (see the Machine Covering results in Section 5.9). Informally, since knowledge about Opt(I) would not help the algorithm, we may essentially assume that the cost of Opt is the same in each round. Intuitively, even if the algorithm uses o(n) bits of advice, there will still be a single round where it has almost no advice available and, hence, will perform as poorly as an algorithm without advice. The main examples of ∨-repeatable problems are graph coloring problems.

Direct product theorems. Direct product theorems were introduced as a way to prove lower bounds in [128]. Intuitively, a direct product theorem says that if b bits of advice are needed for an online algorithm to ensure a cost of at most t when faced with requests drawn from a probability distribution p, then r · b bits of advice are needed to ensure a total cost of at most r · t when r independent rounds of requests are drawn from p.

The result for Σ-repeatable online problems discussed earlier is proven by having the requests of each round be an entire input itself (drawn from some hard input distribution) and then applying a direct product theorem. However, it is also sometimes possible to have each round be only a single request of the input. Obviously, this approach will usually require more effort since one no longer treats the hard input distribution just as a black box (as was the case with the result for Σ-repeatable problems). On the other hand, this approach can lead to significantly stronger lower bounds than what can be achieved by only using the general result for Σ-repeatable problems. For example, a super-linear lower bound for Vertex Coloring has been proven using this approach (see Section 5.10.1).

5.7 String Guessing and Complexity Classes

String Guessing is a rather artificial problem which is used primarily to show linear lower bounds on the advice complexity of certain problems, so most of the problems considered here are hard from an advice complexity point of view, i.e., much advice is needed to obtain a good competitive ratio. Some of the problems are even hard offline.

There are several types of string guessing problems. We start with the simplest version.

5.7.1 String Guessing

String Guessing was introduced by Böckenhauer et al. [27], and it is essentially the same as Generalized Matching Pennies, defined and studied earlier by Emek et al. [68]. Both of these problems consider strings of length n over an alphabet of size q. The goal is to guess as many of the characters of the input string as possible correctly. There are two versions of the problem: String Guessing with known history, where the correct answer to the previous request is revealed with each new request, and String Guessing with unknown history, where the correct answers are revealed only at the end of the input.

Note that an algorithm which answers uniformly at random in each round will guess n/q characters correctly in expectation. Clearly, one can achieve the same guarantee with a deterministic algorithm, reading ⌈log q⌉ bits of advice (identifying the most frequent character in the input string). The following theorem gives a lower bound on the advice needed to guess more than a fraction of 1/q of the input characters correctly.

Theorem 16 (Böckenhauer et al. [27]). Any online algorithm with advice for String Guessing with known history (over an alphabet of size q), guaranteeing guessing γn characters of the input correctly, for some constant 1/q < γ < 1, must read at least

(1 + (1 − γ) log_q((1 − γ)/(q − 1)) + γ log_q γ) · n log q ∈ Ω(n log q)

advice bits.

The lower bound (Theorem 16) can equivalently be written as (1 − H_q(1 − γ))(log q)n, where H_q is the q-ary entropy function [27]. Also, it may be useful to know that Theorem 16 is closely related to the Chernoff bound [91]. Indeed, it can be proven using the probabilistic method (see Section 5.6) as follows: Choose the input string uniformly at random. Let Det be a fixed deterministic algorithm without advice. In each round, the probability that Det guesses the correct character is exactly 1/q, and this probability is independent of all other rounds. Thus, the number of characters guessed correctly by Det is a sum of independent identically distributed Bernoulli random variables with expected value 1/q. By the Chernoff bound, the probability that Det guesses γn (or more) characters correctly is at most 2^{−(1−H_q(1−γ))(log q)n}. It follows that an algorithm with advice needs at least b ≥ (1 − H_q(1 − γ))(log q)n bits of advice to ensure that it always correctly guesses at least γn characters [128]. This is exactly the lower bound of Theorem 16.

Via reductions, String Guessing with known history has been used to prove many advice complexity lower bounds, including some in [2, 8, 21, 25, 27, 44, 45, 64, 68, 79, 103].

For String Guessing with unknown history, Böckenhauer et al. [27] give (using known bounds on the size of covering codes) an upper bound which matches the lower bound for String Guessing with known history up to an additive O(log n) term. Note that the lower bound is for the easier of the two problems, and the upper bound is for the harder version. Thus, both bounds are as general as possible.

Other String Guessing Problems. Other string guessing variants were analyzed by Mikkelsen [128]: Anti-String Guessing yields better lower bounds for Paging with advice and for Induced Subgraph [103], and Weighted Binary String Guessing yields a better lower bound for Bin Packing.

5.7.2 Asymmetric String Guessing

Consider accept/reject minimization problems, i.e., minimization problems where the irrevocable decision for each request is either to accept or reject it. Assume that the problem is such that a superset of a feasible solution is always feasible. An example one could keep in mind is Vertex Cover. This is the standard vertex cover problem in the vertex arrival model, so the vertices arrive online, and each vertex arrives with a list of all previous vertices to which that vertex is adjacent. The accepted vertices must form a vertex cover, so at least one endpoint of each edge must be chosen. The fact that edges to vertices that have not been seen yet are unknown when a vertex arrives means that the well known 2-approximation algorithm, accepting both endpoints of some edges, cannot be used.

The obvious advice to give is a string of bits, one for each request, with ones indicating acceptance and zeros indicating rejection for an optimal solution. One can also use this idea in a c-competitive algorithm with advice. Suppose that for each request sequence length n and each t ≤ ⌈n/c⌉, the algorithm can compute a set of binary strings, S_{n,t,c}, such that for every request sequence of length n with a minimum solution of size t, there is a string in S_{n,t,c} which indicates a superset of a minimum solution, where the superset must have size at most ct. Then, the oracle can give the algorithm n, t, and an appropriate index into S_{n,t,c}. The algorithm can be c-competitive by using the indexed string and answering “accept” or “reject” based on that string, ignoring the actual request sequence. If t > ⌈n/c⌉, it is safe to answer “accept” for every request. Note that the value n must be given in a self-delimiting encoding, and the total length of the advice is ⌈log |S_{n,t,c}|⌉ + O(log n). One can think of the above algorithm as trying to guess a string corresponding to a minimum solution, but being allowed to make a few errors in the direction of guessing ones for some zeros in that optimal solution.
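The following Python sketch illustrates both sides of this scheme; the family S (standing in for S_{n,t,c}), the function names, and the representation of answers are our own illustration, and the actual covering family is obtained non-constructively in [40].

```python
def pick_cover_string(S, opt_string, c):
    """Oracle side: from the family S, pick a string that has a 1 wherever
    the optimal solution does and at most c*t ones, where t is the number
    of ones in opt_string."""
    t = sum(opt_string)
    for s in S:
        if all(o <= x for o, x in zip(opt_string, s)) and sum(s) <= c * t:
            return s
    raise ValueError("S does not cover this optimal solution")

def serve_with_cover_string(requests, s):
    """Algorithm side: accept request i exactly when s[i] == 1, ignoring
    the contents of the requests themselves."""
    return ["accept" if bit == 1 else "reject" for bit, _ in zip(s, requests)]
```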

Realizing that many problems exhibit the same characteristics as Vertex Cover, Boyar et al. [40] study an abstraction of this problem in the form of Minimum Asymmetric String Guessing, minASG. As with other string guessing problems, minASG does not appear interesting in its own right, but the above example shows its relation to other problems. In minASG, the request sequence is a sequence of bits that the algorithm must try to guess (for example, indicating a minimum solution to an instance of Vertex Cover). The cost is the number of ones guessed, unless the algorithm at some point guesses zero when the correct bit was a one. In the latter case, the cost is infinite (this corresponds to a possibly infeasible answer in the Vertex Cover case). The goal is, of course, to minimize cost.

As with String Guessing, there are two variants of minASG, known history and unknown history. There is also a maximization version of Asymmetric String Guessing, maxASG. For that version, Independent Set could be the problem to keep in mind. The objective is to guess as many zeros correctly as possible, and guessing a zero where the correct answer is a one gives a profit of −∞. Again, there is a version with known history and one with unknown history.

Using results on covering designs, tight bounds are proven in [40] on all four versions of Asymmetric String Guessing, showing that the number of advice bits necessary and sufficient to achieve competitive ratio c is

B(n, c) = log(1 + (c − 1)^{c−1}/c^c) · n ± Θ(log n),   (5.1)

where

(1/(e ln 2)) · (n/c) ≤ log(1 + (c − 1)^{c−1}/c^c) · n ≤ n/c.
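A small Python helper (our own code) evaluating the high-order term of Equation (5.1):

```python
from math import log2

def B(n, c):
    """High-order term of the advice complexity of AOC-complete problems:
    log(1 + (c-1)^(c-1) / c^c) * n, ignoring the Theta(log n) term.
    Note: for c = 1, Python evaluates 0**0 as 1, giving B(n, 1) = n."""
    return log2(1 + (c - 1) ** (c - 1) / c ** c) * n

# For c = 2 this gives log2(5/4) * n ~ 0.3219 * n, the bound reused for
# Paging in Section 5.8; it always lies between n/(c * e * ln 2) and n/c.
print(B(1000, 2))
```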

Returning to the motivating Vertex Cover example, the closed formula (5.1) bounds the term ⌈log |S_{n,t,c}|⌉ + O(log n) from that example. Vertex Cover is not exactly the same problem as either minASG with known or unknown history, since it may be possible to deduce some but not all information about past mistakes during the processing of vertices. However, minASG with known history can be used to provide lower bounds, whereas minASG with unknown history can be used for upper bounds.

5.7.3 Complexity Classes

Problems such as minASG and Vertex Cover led to the definition of the first complexity class for online algorithms, AOC, Asymmetric Online Covering [40], which contains many accept/reject problems, both minimization and maximization problems. The minimization problems have the property that any superset of a feasible solution is a feasible solution, and the maximization problems have the property that any subset of a feasible solution is a feasible solution. In both cases, the cost/profit of a feasible solution is the size of the accepted set, and the cost (profit) of an infeasible solution is ∞ (−∞). Maximization versions of minASG have the same advice complexity as minASG. This is used to show an upper bound on the advice complexity of all problems in AOC.

The hardest problems in AOC, those which require

log(1 + (c − 1)^{c−1}/c^c) · n ± Θ(log n)

bits of advice to be c-competitive, are called AOC-complete [40]. Using reductions from the asymmetric string guessing problems, Vertex Cover, Independent Set, Dominating Set, Disjoint Path Allocation, Set Cover, and Cycle Finding are shown to be AOC-complete. All but the last of these correspond to offline problems which are NP-hard. Note that although these problems are proven to be complete via reductions, there are no unproven assumptions, such as P ≠ NP. Tight bounds on the advice complexity of these problems are known. The AOC-complete problems are all hard online problems: Without advice, these problems have competitive ratios which are Ω(n/log n), and in fact, all the known AOC-complete problems [40] have Ω(n) competitive ratios (actually, n or n − 1 for all but one problem). Examples of problems which are in AOC, but not AOC-complete, are Uniform Knapsack and Matching.

Corresponding to each AOC problem is a weighted version of the problem, which is still an accept/reject problem, but the cost/profit of each request may vary due to a weight associated with the request. For example, for Weighted Independent Set, the vertex arrival model is used, but each vertex arrives with a weight, in addition to a list of all previous vertices adjacent to it. The goal is to accept a maximum weight independent set. In contrast to the unweighted case, when weights are added to AOC-complete problems, the maximization and minimization problems have different advice complexities. Boyar et al. [41] showed that the weighted versions of the complete maximization problems have advice complexity at most an additive term O(log^2 n) worse than the unweighted versions, but the weighted versions of the known complete minimization problems all have unbounded competitive ratios with fewer than n − O(log n) bits of advice. This latter result is proven using length-preserving advice reductions; all known AOC-complete minimization problems were proven complete for AOC using this type of reduction. Thus, the class containing the weighted versions of these complete minimization problems is harder with respect to advice complexity than the class containing the weighted versions of the complete maximization problems.

The maximization (and not the minimization) problems in AOC are examples of problems where the greedy algorithm is best possible according to online bounded analysis, which is defined by Boyar et al. in [37].

In [103], Komm et al. consider the following problem: A graph property, Π, is a set of graphs. It is said to be

• hereditary if for every graph G in Π, all induced subgraphs of G are also in Π.

• cohereditary if for every graph G in Π, all graphs containing G as an induced subgraph are also in Π.

• non-trivial if there are an infinite number of graphs in Π and an infinite number of graphs not in Π.

Examples of non-trivial hereditary graph properties include independent sets, forests, and planar graphs. Examples of non-trivial cohereditary graph properties include graphs containing a cycle and non-planar graphs. Let a non-trivial hereditary graph property Π be given. A graph is presented in the vertex arrival model and the goal is for the algorithm to accept as many vertices as possible, such that the induced subgraph defined by the accepted vertices is in Π. They show that at least

log(1 + (c − 1)^{c−1}/c^c) · n − Θ(log^2 n)

bits of advice are required to be c-competitive for these problems (independent of the choice of Π). These problems are, in some sense, shown to be almost AOC-complete. For a cohereditary graph property Π, the problem considered is the same, except that the goal is to accept as few vertices as possible, such that the induced subgraph defined by the accepted vertices at the end is in Π (it is guaranteed that the graph presented is in Π). They show that for this problem, the advice complexity depends crucially on the choice of graph property. For some properties, it is AOC-complete; for others, it is possible for an algorithm to be optimal using only O(log n) advice bits.

5.8 K-Server, Paging, and Friends

A Metrical Task System [36] is defined by a tuple (S, T, d), where S is a set of N states, T is a set of tasks, and d : S × S → [0, ∞) is a metric distance function. A task is a mapping t : S → [0, ∞] satisfying that there exists at least one state s ∈ S such that t(s) ≠ ∞. An input consists of an initial state s_0 ∈ S and n tasks t_1, . . . , t_n. Immediately after a task t_i arrives, the online algorithm must choose a state s_i for serving t_i: The online algorithm moves from its current state s_{i−1} to the state s_i at a cost of d(s_{i−1}, s_i) and serves the task t_i at a cost of t_i(s_i). The goal is to minimize the total cost incurred. Each of the classic online problems of Paging, k-Server, and List Update can be modeled as a Metrical Task System (see [34], for example).

For the classic online scenario, a matching upper and lower bound of 2N − 1 is known for deterministic Metrical Task System algorithms [36]. For the randomized case, there exists a randomized O(log^2 N log log N)-competitive algorithm [70], whereas the best known lower bound on the competitive ratio is the Ω(log N) lower bound arising from Paging.

The advice complexity of Metrical Task System is well understood. We know that sublinear advice is equivalent to randomization (Theorem 15). Furthermore, it was shown in [68] that b bits of advice per request are both necessary and sufficient to be Θ((log N)/b)-competitive. The upper bound is achieved using the follow-OPT technique. The matching lower bound is proven via a reduction from Generalized Matching Pennies (see Section 5.7.1).

[Figure: competitive ratio (values 1, O((log k)/b), H_k, O(log^2 k log^3 N), k, and 2k − 1) plotted against the number of advice bits (0, f(k, N, ∆), Ω(n), 4n, bn, Θ(n log k)).]

Figure 5.2: The asymptotic trade-off between competitive ratio and advice for k-Server. N is the number of points in the metric space and ∆ the (normalized) diameter. The function f(k, N, ∆) is a rapidly growing function of k, N, and ∆ (but does not depend on n). For the randomized algorithm with a competitive ratio depending on N, we assume that N is relatively small; polynomial in k, for example.

The advice complexity of k-Server (see Figure 5.2) is not as well understood as for Metrical Task System. Again, we know that randomization is equivalent to sublinear advice. Depending on the size N of the metric space, the currently best known randomized algorithm without advice for k-Server is either the O(log^2 k log^3 N log log N)-competitive algorithm due to Bansal et al. [13] or simply the deterministic (2k − 1)-competitive Work Function Algorithm by Koutsoupias and Papadimitriou [108]. By [128], randomized k-Server algorithms (on finite metric spaces) can be simulated using a number of advice bits depending only on k and the metric space. The current best upper bound for algorithms using b ≥ 3 bits of advice per request is O((log k)/b), using the follow-OPT technique [28, 134]. However, no matching lower bound is known. The lower bound used for Metrical Task System does not seem to be applicable to k-Server. For k-Server, we only know that Ω(n log k) bits of advice are needed to be optimal [28] and that Ω(n) bits of advice are needed to be better than H_k-competitive (the last lower bound follows since Paging is a special case of k-Server; see the next paragraph for details). In particular, it is an intriguing open problem whether or not it is possible to be (1 + ε)-competitive using O(n) bits of advice for arbitrarily small ε. It was shown in [28] that this is in fact the case if the underlying metric space is the Euclidean plane: Along with every request, one may use O(1) bits of advice to indicate as precisely as possible in which “direction” the server used by Opt for serving this request is currently located. Also, for various sparse metric spaces (such as paths, trees, and planar graphs), algorithms which are better than the algorithm for the general case are known [79, 134].

The asymptotic advice complexity of Paging is essentially completely understood (see Figure 5.1). Recall that the best possible competitive ratio for a randomized Paging algorithm without advice is Hk ∈ Θ(log k). Using b bits of advice, it is possible to be ((2k + 2)/2^b + 3b)-competitive, while any algorithm using only b bits of advice must have a competitive ratio of at least k/2^b [29]. In particular, O(log k) bits of advice suffice to be O(log k)-competitive, while o(log k) bits of advice is not enough to be, for example, k^{0.99}-competitive. Furthermore, it is possible to be (Hk + ε)-competitive using a number of advice bits depending only on k and ε (and not the input length n) [128]. In order to achieve a competitive ratio better than Hk, we need Ω(n) bits of advice (since reading o(n) bits of advice is equivalent to randomization, according to Theorem 15). On the other hand, n bits of advice suffice to be optimal using the algorithm described in the introduction.

The exact trade-off between advice and the competitive ratio for Paging is still open. For constant competitive ratios, the current best upper bound is (perhaps a bit surprisingly) achieved by using the upper bound for the AOC-complete problem minASG (see Section 5.7.2 and in particular Equation (5.1)): Let x = x1 . . . xn be a binary string such that xi is 0 if and only if the page requested in round i will be requested once more before it is removed from the cache of Opt. As already mentioned in the introduction, a Paging algorithm which is given x as advice can be optimal. It was observed in [29] that if an algorithm is given an n-bit string x′ such that xi = 1 ⇒ x′i = 1 and such that |x′| ≤ c|x| (where |x| is the Hamming weight of x), then a Paging algorithm which knows x′ can be c-competitive. This means that (for all cache sizes) there exists a c-competitive Paging algorithm reading B(n, c) + O(log n) bits of advice on inputs of length n. In particular, (log(5/4))n + O(log n) > 0.3219n + O(log n) bits of advice suffice to be 2-competitive. The best known lower bound on the exact advice complexity of Paging was proven in [128] by a reduction from Anti-String Guessing. This lower bound is quite far from the AOC-based upper bound. For example, it only shows that at least 0.00877n − O(log n) bits are needed to be 2-competitive.
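As an illustration of how such a covering advice string can be used, here is a Python sketch of a paging algorithm reading one bit per request; the fallback eviction rule when no page is marked evictable is our own addition, not taken from [29]:

    def paging_with_advice(requests, advice_bits, k):
        # Bit 0 means "this page is requested again before Opt evicts it"
        # (keep it); bit 1 marks the page as safe to evict.
        cache, flag, faults = set(), {}, 0
        for page, bit in zip(requests, advice_bits):
            if page not in cache:
                faults += 1
                if len(cache) == k:
                    # Prefer a page whose last advice bit marked it evictable.
                    evictable = [p for p in cache if flag[p] == 1]
                    victim = evictable[0] if evictable else next(iter(cache))
                    cache.remove(victim)
                cache.add(page)
            flag[page] = bit
        return faults

With the exact string x as advice this mimics Opt; with a covering string x′ containing at most c times as many 1s, the extra evictions cost at most a factor of c.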

List Update has been studied with advice in [44]. The main result is a 5/3-competitive algorithm using just two bits of advice (in total). The advice tells which of the three classic algorithms Timestamp, MoveToFront-Even, and MoveToFront-Odd is the best algorithm for the current input. An interesting application of the idea of choosing the better of two classic algorithms for List Update to a data compression problem is described in [97].

5.9 Bin Packing, Machine Scheduling, and Knapsack

In this section, we consider three related problems.

Bin Packing. In Bin Packing, requests are sizes in the range (0, 1]. Bins of size one are available, and items must be placed in a bin such that the total volume of items placed in that bin does not exceed one. The objective is to minimize the number of bins used.

The ultimate advice for any online problem is to be informed of exactly how Opt behaves on the request sequence. For Bin Packing, Opt uses Opt(I) bins on a request sequence I, so with n⌈log Opt(I)⌉ bits of advice, it is possible to mimic the behavior of Opt. This was observed by Boyar et al. [45], where it was also established that this is essentially tight, in that a lower bound of (n − 2 Opt(I)) log Opt(I) was given. They employed the pigeonhole technique, giving a long prefix which has to be packed exactly right, depending on the unknown suffix, in order to pack all the items in the optimal number of bins.
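A minimal Python sketch of this "mimic Opt" strategy; the bit layout of the advice (one bin index per item) is our own choice:

    def pack_with_advice(items, bin_indices, num_bins):
        # The advice encodes, for each item, the index of the bin Opt uses,
        # i.e., ceil(log2(Opt(I))) bits per item.
        bins = [0.0] * num_bins
        for size, b in zip(items, bin_indices):
            assert bins[b] + size <= 1.0, "advice must describe a feasible packing"
            bins[b] += size
        return sum(1 for load in bins if load > 0)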

To beat the best known lower bound for Bin Packing of 1.54037 [12] (the best known upper bound is 1.5815 [90]), a ratio of 3/2 was obtained using log n + o(log n) bits of advice [45]. The observation underlying this result is that large items fill bins sufficiently and small items are easy to pack effectively, so we need to know about medium-sized items (concretely in the range (1/2, 2/3]), and ⌈log(n + 1)⌉ bits are sufficient to specify the number of such items in the input. Different categorization schemes by Angelopoulos et al. [8] led to a competitive ratio of 1.47012 + ε, for any fixed ε, using a constant number of advice bits, dependent on ε. They also show that 16 bits of advice are sufficient to beat the best algorithm without advice, obtaining a competitive ratio of 1.530.

Using a linear number of bits, 2n + o(n), to get limited information regarding Opt's packing, a ratio of 4/3 + ε, for any ε, was obtained in [45]. Asymptotically, for quite large input, Renault et al. [135] proved that one can get arbitrarily close to optimal, establishing (1 + ε)-competitiveness using O((1/ε) log(1/ε)) bits of advice per request.

A further improvement of the 4/3 result is claimed in [156], but we have not been able to verify the result. An example problematic sequence for their algorithm is n/2 items of size 1/3 − ε followed by n/2 items of size 2/3 + ε.

For negative results, Boyar et al. [45] showed that 9/8 − δ is a lower bound for any δ and sublinear advice. Refining those methods, Angelopoulos et al. [8] raised this lower bound to 7/6 ≈ 1.1667, and Mikkelsen [128] to 4 − 2√2 > 1.1715.


An overview of these results is given in Figure 5.3.

[Figure 5.3 plots the competitive ratio (values 1, 1 + ε, 1.1715, 1.333, 1.47012, 1.5, 1.54037, 1.5815) against the number of advice bits (0, 1, 16, O(1), Ω(n), 2n, O(n), Θ(n log n)).]

Figure 5.3: The best known bounds on the advice complexity of Bin Packing. The horizontal dashed line is the currently best lower bound on (possibly randomized) Bin Packing algorithms without advice.

Finally, we mention some special cases. For a limited number m of different items, by using m⌈log(n + 1)⌉ + o(log n) bits of advice to inform the algorithm in advance of how many items to expect of the different types, one can be essentially optimal, achieving a packing of (1 + ε) Opt(I) + 1 bins. This is essentially tight, since (m − 1) log n − 2m log m bits of advice are required to be optimal [45]. If all items are known to be larger than 1/3, one bit of advice is sufficient to be 1.3904-competitive [8].

Machine scheduling. Consider Scheduling on m machines, where requests are real numbers, referred to as job sizes, and the irrevocable decision is to assign a request to a particular machine.

In Section 5.3, we discuss a parallel solutions algorithm from [4], where the objective is to minimize the makespan, i.e., the maximum sum of job sizes assigned to any one machine. The parallel solutions algorithm can be viewed as a (4/3 + ε)-competitive algorithm using O(log^2(1/ε)) bits of advice in the Tape Model. The same paper gives a (1 + ε)-competitive algorithm which can be viewed as using O((1/ε) log(m/ε) log(1/ε)) advice bits.

Boyar et al. [41] give (1 + ε)-competitive algorithms for weighted scheduling problems with various objective functions: For minimizing a norm (the makespan, for example) on related machines, an algorithm reading O((1/ε) log^2 n) advice bits is given. For minimizing a norm on a constant number of unrelated machines, an algorithm reading O((1/ε)^m log^{m+1} n) bits of advice is given. The same advice complexity is obtained for maximizing a semi-norm (the minimum load as in Machine Covering, for example) on a constant number of unrelated machines. For a non-constant number of unrelated machines, the expressions for the advice complexity are more complicated; see the paper for details.

For the Per Request Model, Renault et al. [135] obtain a competitive ratio of 1 + ε for minimizing makespan using O((1/ε) log(1/ε)) bits of advice per request. Similar results are obtained for Machine Covering and minimization of the Lp norm, p ≥ 2. Complementing these results, using the pigeonhole technique, they establish a (1 − 2m/n) log m lower bound on advice per request in order to obtain optimality, i.e., almost as much advice is required as is used by the trivial optimal algorithm with advice that receives ⌈log m⌉ bits of advice per request, indicating which machine to place a job on.
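The trivial optimal algorithm mentioned above is easy to state in code; the following Python sketch assumes the oracle writes the machine indices as fixed-width binary, which is our own encoding choice:

    from math import ceil, log2

    def schedule_with_advice(jobs, advice_bits, m):
        # ceil(log2(m)) advice bits per job name the machine Opt uses for it.
        width = max(1, ceil(log2(m)))
        loads = [0.0] * m
        for i, size in enumerate(jobs):
            chunk = advice_bits[i * width:(i + 1) * width]
            machine = int("".join(map(str, chunk)), 2)
            loads[machine] += size
        return max(loads)  # the makespan of the advised schedule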

Knapsack. In the introduction, Uniform Knapsack was used as an example (Example 2). An algorithm from [30] using one advice bit was described: the advice bit indicates whether the input sequence contains an item of size at least 1/2. If it does, the first item accepted by the algorithm is the first item of size at least 1/2. Otherwise, the algorithm accepts items greedily. In this section, we describe other results from this paper.
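For concreteness, here is a Python sketch of this one-bit algorithm; how the algorithm behaves after accepting the large item is our reading of "greedily" and may differ from the exact algorithm in [30]:

    def knapsack_one_bit(items, big_item_coming):
        # big_item_coming is the advice bit: does some item have size >= 1/2?
        taken, total, waiting = [], 0.0, big_item_coming
        for size in items:
            if waiting:
                if size < 0.5:
                    continue          # reject; a large item is still coming
                waiting = False       # this is the first item of size >= 1/2
            if total + size <= 1.0:   # accept greedily from here on
                taken.append(size)
                total += size
        return total

If a large item exists, the profit is at least 1/2; otherwise all items are smaller than 1/2, so the greedy strategy either packs everything or fills the knapsack past 1/2, giving the ratio 2 in both cases.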

The competitive ratio of the above algorithm cannot be improved using a few additional advice bits; no algorithm reading fewer than ⌊log(n − 1)⌋ bits of advice has a competitive ratio better than 2. On the other hand, for any constant ε > 0, there is a (1 + ε)-competitive algorithm reading O((1/ε) log n) bits of advice. For optimality, n − 1 bits of advice are necessary and sufficient.

If one considers the obvious randomized algorithm based on the above 2-competitive algorithm, its competitive ratio is 4: Simply flip a coin instead of reading an advice bit. There is a related 2-competitive randomized algorithm using only one bit of randomness: One of the deterministic algorithms to choose between is again just accepting everything possible; the other rejects items until reaching the first item that would have been rejected had everything before it been accepted, and from that item on it accepts whenever possible. This is best possible, since no randomized algorithm can have a competitive ratio better than 2.

For the weighted version, where each item has both a size and a value, any (possibly randomized) algorithm reading fewer than log n bits of advice has unbounded competitive ratio. On the other hand, if all values and weights can be represented within polynomial space, then O((√(1+ε)/(√(1+ε) − 1)) log n) advice bits suffice to be (1 + ε)-competitive.

5.10 Graph Coloring

Being of both practical and theoretical interest, graph coloring problems have been extensively studied from an online perspective. In fact, some of the earliest results on online graph coloring predate the formal introduction of competitive analysis. We refer to [102] for a good (although slightly dated) survey on Vertex Coloring. In this section, we survey some of the results obtained on various graph coloring problems in the advice complexity model. With a single notable exception, it generally turns out that a lot of advice is needed in order to obtain good online graph coloring algorithms.

5.10.1 Vertex Coloring

The most classic graph coloring problem is Vertex Coloring, where the vertices of a graph must be colored such that no two neighbors receive the same color. The aim is to use as few colors as possible. In the most studied online model, the vertex-arrival model, vertices arrive one by one, each with information about its edges to vertices that have already arrived. Usually, the colors are enumerated starting from one.

Without advice, Vertex Coloring is an extremely difficult problem; Halldórsson and Szegedy [88] showed that any (possibly randomized) online algorithm has a competitive ratio of Ω(n/log^2 n). The hardness of the problem carries over to the advice setting; applying a direct product theorem (see Section 5.6) to the lower bound of [88], Mikkelsen showed in [128] that any O(n^{1−ε})-competitive Vertex Coloring algorithm must read Ω(n log n) bits of advice. This is an unusually strong advice complexity lower bound. Vertex Coloring is so far the only known example of a natural online problem where linear advice is not enough to obtain a truly sublinear competitive ratio. Also, note that O(n log n) bits of advice trivially suffice to achieve optimality. In fact, n log n − n log log n + O(n) advice bits are necessary and sufficient for an optimal coloring, even necessary if the vertices arrive in a breadth-first order [73]. Thus, Vertex Coloring has a sharp phase transition in its advice complexity.

On trees, First-Fit (the greedy algorithm using the lowest available color) uses at most ⌊log n⌋ + 1 colors [83], thus obtaining a competitive ratio of (1/2) log n. This is a best possible result, since, even on trees, any deterministic online algorithm can be forced to use ⌊log n⌋ + 1 colors [84], while Opt, of course, only needs two. Since Vertex Coloring is ∨-repeatable, and since the lower bound of (1/2) log n ∈ ω(1) does not depend on the algorithm not knowing n or Opt(I), it follows that no Vertex Coloring algorithm with o(n) bits of advice can achieve a constant competitive ratio, even on trees [128] (see Section 5.6).

For bipartite graphs, any deterministic online algorithm without advice can be forced to use 2 log n − 10 colors [80]. Thus, coloring bipartite graphs is harder than coloring trees. On the other hand, the online algorithm (without advice) Bipartite First-Fit (BFF) uses at most 2 log n colors [101] for n ≥ 2. For each vertex v, BFF simply uses the smallest color not used in the opposite partition of the connected component containing v.
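The rule defining BFF is simple enough to implement directly; in the Python sketch below, the bookkeeping (a union-find structure tracking each vertex's side of the bipartition and the colors used on each side of every component) is our own implementation choice:

    class BFF:
        def __init__(self):
            self.parent, self.side = {}, {}  # union-find with bipartition parity
            self.used = {}                   # root -> [colors on side 0, side 1]

        def find(self, v):
            # Returns the root of v's component; updates side[v] to v's
            # parity relative to that root (path compression).
            if self.parent[v] != v:
                p = self.parent[v]
                root = self.find(p)
                self.side[v] ^= self.side[p]
                self.parent[v] = root
            return self.parent[v]

        def add_vertex(self, v, neighbors):
            # The revealed graph is promised to be bipartite.
            self.parent[v], self.side[v] = v, 0
            self.used[v] = [set(), set()]
            for u in neighbors:              # v is opposite to each neighbor
                ru, rv = self.find(u), self.find(v)
                if ru != rv:
                    x = self.side[v] ^ self.side[u] ^ 1
                    self.parent[rv], self.side[rv] = ru, x
                    old0, old1 = self.used.pop(rv)
                    self.used[ru][x] |= old0
                    self.used[ru][x ^ 1] |= old1
            root = self.find(v)
            opposite = self.used[root][self.side[v] ^ 1]
            c = 1
            while c in opposite:             # smallest color missing opposite
                c += 1
            self.used[root][self.side[v]].add(c)
            return c

For example, on a path revealed in the order 1, then 2 (adjacent to 1), then 3 (adjacent to 2), this sketch assigns the colors 1, 2, 1.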

Building on BFF, a family of algorithms, Ak, with advice for coloring bipartite graphs was given by Bianchi et al. [22], obtaining a trade-off between competitive ratio and advice. For k ≥ 2, the algorithm Ak uses advice to ensure that the color k − 1 is only used in one partition of the final graph, and that the color k is only used in the other partition. For each vertex v, if BFF would use a color no larger than k − 2, Ak uses this color. Otherwise, if at least one of the colors k − 1 and k is already used in the connected component containing v, the algorithm can deduce which color to use. If this is not the case, the algorithm reads one bit of advice to decide which of the colors k − 1 and k to use. Since BFF uses color k − 1 only if the requested vertex is contained in a connected component of at least 2^{(k−1)/2} vertices, and since the algorithm does not use advice for more than one vertex within a connected component, the number of advice bits used is at most (n − 1)/2^{(k−1)/2} = (n − 1)/√(2^{k−1}). This shows that O(√n) advice bits suffice to use fewer than log n colors, beating the lower bound for deterministic online algorithms without advice. For 2 and 3 colors, the upper bound is complemented with essentially tight lower bounds of n − 3 and n/2 − 4 bits, respectively.

Note that the approach taken by the algorithms Ak resembles the warning signal technique described in Section 5.5. However, a sublinear number of advice bits is obtained, because the algorithms can detect on their own when they need advice.

In [142], Steffen specialized the algorithm Ak from [22] to trees, using First-Fit instead of Bipartite First-Fit. Since, on trees, First-Fit uses the color k − 1 only if the requested vertex belongs to a component of at least 2^{k−2} vertices [84], a k-coloring is obtained using at most (n − 1)/2^{k−2} bits of advice. Thus, for each additional color, the number of advice bits is halved.

For 3-coloring of trees, a linear lower bound of approximately 0.0328n advice bits is given in [142]. The dissertation also contains linear lower (and upper) bounds for coloring combs and caterpillars with 2 or 3 colors. Note that caterpillars can be 4-colored without advice, since no vertex has more than three neighbors.

For 3-colorable graphs, the trivial upper bound of (log 3)n is essentially tight, even if the graphs are chordal [137].

5.10.2 Edge Coloring and Variants of Vertex Coloring

Many variants of Vertex Coloring have been studied. Here we mention two of them.

For L(i, j)-Coloring, each pair of neighboring vertices must receive colors that are at least i apart, and each pair of vertices at distance two must receive colors that are at least j apart. The aim is to minimize the span of the coloring, i.e., the difference, λ, between the largest and smallest color used (thus, potentially, λ + 1 colors are used).

For Multi-Coloring, a graph is given from the beginning and the requests are vertices, with possible repetitions. For each request, an (additional) color must be assigned to the requested vertex. The colors assigned to a vertex and its neighbors must all be distinct.

Though not as famous as Vertex Coloring, many papers have been devoted to Edge Coloring. Analogous to Vertex Coloring, the edges of a graph must be colored such that no two adjacent edges receive the same color, and the aim is to use as few colors as possible. In the online version, one typically uses the edge arrival model, where the edges arrive one by one, each with information about adjacent edges among those that have already arrived. Adhering to standard notation in graph theory, where n denotes the number of vertices and m the number of edges, we let m denote the sequence length for this particular problem.


Bianchi et al. [23] studied L(2, 1)-Coloring of paths. In the offline setting, the color range 0, 1, . . . , λ = 4 is sufficient. For the best possible online algorithm without advice, the color span is λ = 6 in the worst case, resulting in a competitive ratio of 3/2. To obtain a better competitive ratio, a linear number of advice bits is necessary (a lower bound of approximately 3.9402 · 10^{−10} n bits for obtaining λ = 5 is given in [23]). This was the first example of a natural online problem with the property that beating the best deterministic online algorithm without advice requires a linear number of advice bits. Note that linear advice trivially suffices to be optimal (in fact, approximately 0.6955n bits of advice are sufficient [23]). Since λ = 4 is obtainable in the offline setting, the linear lower bound for λ = 5, together with the derandomization technique of [28] mentioned in Section 5.4, implies a lower bound of 5/4 on the competitive ratio of any randomized online algorithm for the problem.

For edge coloring of a graph with m edges and maximum degree ∆, m⌈log(∆ + 1)⌉ bits of advice trivially suffice for an optimal solution (by Vizing's Theorem [149]). Mikkelsen [127] showed that, in the Per Request Model, this bound is asymptotically tight. On the other hand, the paper also shows that, for graphs of bounded degeneracy (including planar graphs), O(m) advice bits are sufficient to be optimal. For trees, the warning signal technique applied to First-Fit yields an optimal algorithm using exactly one bit of advice per edge: If the advice bit is a 0, then First-Fit colors the current edge as usual. If the advice bit is a 1, then First-Fit will skip the lowest numbered color available and instead use the second lowest numbered color available.
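A sketch of this one-bit-per-edge rule in Python; representing colors as positive integers is our own convention:

    def first_fit_edge(colors_at_endpoints, skip):
        # colors_at_endpoints: colors already used on edges sharing an
        # endpoint with the current edge; skip: the advice bit for this edge.
        available, c = 0, 0
        while True:
            c += 1
            if c not in colors_at_endpoints:
                available += 1
                if available == (2 if skip else 1):
                    return c  # lowest (bit 0) or second lowest (bit 1) color

    print(first_fit_edge({1, 3}, skip=False))  # 2
    print(first_fit_edge({1, 3}, skip=True))   # 4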

For edge coloring without advice, the competitive ratio is 2, on trees as well as in general [14]. In [127], Mikkelsen showed that, even for trees, linear advice is necessary to beat the best deterministic online algorithm without advice. Comparing this proof with the proof of the corresponding result for L(2, 1)-Coloring, it turns out that they are in fact quite similar. Based on this observation, Mikkelsen showed in [128] that these problems are indeed hard for essentially the same reason; they are both ∨-repeatable (see Section 5.6).

Recall that a problem being ∨-repeatable is not enough for a lower bound to carry over from deterministic online algorithms to online algorithms with sublinear advice; it is required that the lower bound does not depend on the online algorithm not knowing Opt(I) (or n). It turns out that this requirement is vital for the lower bound technique to work. In fact, Christ et al. showed in [51] that sublinear advice suffices to be optimal for Multi-Coloring on a path, whereas it is known that an algorithm without advice cannot be better than 4/3-competitive [50]. This may seem at odds with the previously mentioned result, but the reason is that the 4/3 lower bound relies heavily on the algorithm not knowing Opt(I). In fact, it is shown in [51] that if Opt(I) is known (note that Opt(I) can be encoded using O(log n) bits of advice), then it is easy for an online algorithm to be optimal. The case where the exact value of Opt(I) is not known (or not communicated to the algorithm) is also considered, resulting in a trade-off, where the competitive ratio ranges from 1 to 9/8 and the number of advice bits ranges from log n + O(log log n) to O(log log n).

On hexagonal graphs, no Multi-Coloring algorithm without advice can be better than 3/2-competitive [49]. In [51], it is shown that Ω(n) bits are necessary for obtaining a ratio better than 5/4, that n + 2|V| bits are sufficient to obtain a ratio of 4/3, and that log n + O(log log n) bits suffice to obtain a ratio of 3/2.

5.11 Graph Exploration

Graph Exploration is a family of problems where an agent (sometimes called a robot) with a fixed starting point explores an unknown graph. The goal is usually to visit each vertex of the graph, minimizing the total cost of following edges. Sometimes assumptions are made on the structure of the graph.

These problems are unusual online problems in the following sense: For most other online problems, it is possible to fix an input sequence, I = x1, . . . , xn, such that xi is revealed in round i no matter how the algorithm behaves (of course, if it is deterministic, we know what it will do). In Graph Exploration, even when an input is fixed, the new information the algorithm gains in each step still depends on what it has done in previous steps. Thus, an input sequence cannot be defined independently of an algorithm.

In [95], Kalyanasundaram and Pruhs present an algorithm for Graph Exploration which is 16-competitive on planar graphs. Megow, Mehlhorn, and Schweitzer [126] show that this algorithm does not have a constant competitive ratio on general graphs, but is 16(1 + 2g)-competitive for graphs with genus at most g. Furthermore, the authors of [126] give an algorithm with a constant competitive ratio for general graphs with a bounded number of distinct weights. The main open question is whether there exists an algorithm which has a constant competitive ratio for arbitrary graphs with arbitrary weights.

In [74], Tree Exploration with advice is considered by Fraigniaud et al. A robot explores an unknown undirected tree, and its goal is to visit every vertex at least once. Each move incurs a cost of one. When the robot is at a given vertex, it can see the labels of the neighboring vertices, but the advice is only allowed to depend on the structure of the tree and not the labels (which are assigned adversarially after the advice is given). Without advice, the best possible competitive ratio for deterministic online algorithms is 2. This is achieved by depth-first search, DFS. It is shown that roughly log log D bits of advice are necessary and sufficient to achieve a better competitive ratio (D is the diameter of the graph). For the upper bound, one bit is used to choose between two algorithms; one is DFS, and the other is a more sophisticated algorithm using an approximation of D. The model used is the Tape Model, except that the length of the advice is known to the algorithm (see Section 5.2). The lower bound is shown on paths.

The more general case, Graph Exploration, is studied by Dobrev et al. in [59]. Here, the unknown undirected graph is arbitrary and edges have non-negative weights. When the robot is at a vertex, it can see the weight of each adjacent edge and the label of its other endpoint. The goal is to visit each vertex and return to the starting point. Each time an edge is traversed, it costs the weight of that edge. Here, the advice is allowed to depend on the labels. It is shown that Θ(n log n) bits are necessary and sufficient to be optimal. A (6 + ε)-competitive algorithm with O(n) advice is also given. The algorithm works by traversing edges of a minimum spanning tree and some additional light edges.


A related problem, Treasure Hunt, is studied by Komm et al. in [104]. The model is the same as in [59] with the following difference: The robot is given the label of a target vertex, and the goal is to visit that vertex. It is observed that a simple greedy algorithm has competitive ratio Θ(n), and this is best possible for online algorithms without advice (even on unweighted graphs and if randomization is allowed). It is shown that there is an optimal algorithm reading n bits of advice. For each vertex, one bit of advice indicates if that vertex is on a fixed shortest path. For the unweighted case, it is shown that Θ(n/c) bits are necessary and sufficient to achieve a competitive ratio of c (where c has to be of a certain form, but may depend on n).

5.12 Open Problems

We end the survey with a few open problems:

• Can advice complexity be used to build a complexity theory for online computation?

The study of online algorithms with advice has led to the first complexity classes in online algorithms and to new possibilities for proving results on randomized online algorithms and semi-online algorithms. Further study may lead to additional meaningful complexity classes and new fundamental insights into the properties of online problems.

• Is it possible for a k-Server algorithm to be (1 + ε)-competitive with O(1) bits of advice per request?

Currently, it is known that the answer to this problem is “yes” if the underlying metric space is the Euclidean plane. It is also known that Ω(log k) bits per request are required to be 1-competitive.

• How small a competitive ratio can be achieved for Bin Packing using constant advice?

• Are there further connections between advice and randomization in online computation which have not yet been discovered?


5.13 Appendix: Problems Studied in Advice Complexity Models

We list problems explicitly studied in advice complexity models.

• Inherently online problems

– K-server [28,68,128,134]

– K-server on sparse graphs [79]

– K-server on a path [140]

– List update [44,128]; application in [97]

– Paging [29,128]

– Metrical task systems [68]

– Sleep state management [25,128]

– Online search [52]

• Scheduling and packing problems

– Scheduling on identical machines with constant advice [4, 61]

– Scheduling with sublinear advice [41]

– Job shop scheduling [105,150,151]

– Job shop with randomized adversary [150]

– Linear advice approximation schemes for bin packing and scheduling [135]

– Bin packing with sublinear advice [8, 45,128]

– Dual bin packing [133]

– Bin packing [156] (see Section 5.9, though)

– Square packing [96]

– Reordering buffer management [2, 128]

– Buffer management [62]

– Knapsack [30]

– Set cover [106]

• Coloring problems

– 2-vertex coloring [22,128]

– 3-vertex coloring [137]

– Graph coloring, general graphs [73,128]

– Graph coloring on paths [72]

– Multi-coloring paths and grids [51]


– Edge coloring [127]

– L(2, 1)-coloring on paths [23]

• Other graph problems

– Tree exploration with advice [74]

– Graph exploration [18,59]

– Treasure hunt [104]

– Bipartite matching [64,128,130]

– Independent set [40,87]

– Independent set with known supergraph [58]

– Vertex cover on restricted graph classes [142]

– Steiner trees [15]

– Disjoint path allocation [16,77]

– Minimum spanning tree [21]

– Matching on restricted graph classes [99]

• Asymmetric online covering

– AOC [40] (complexity class comprising, among other problems, independent set, vertex cover, dominating set, disjoint path allocation)

– Induced subgraph [103]

– Weighted AOC [41]

• Miscellaneous

– String guessing/generalized matching pennies [27,68,112]

– Repeated matrix games [128]

– Graph coloring with randomized adversary [48]

– Brief survey [109]


CHAPTER 6

Advice Complexity of the Online Search Problem

Jhoirene Clemente, University of the Philippines Diliman, [email protected] 1

Juraj Hromkovič, ETH Zürich, [email protected]
Dennis Komm, ETH Zürich, [email protected] 2

Christian Kudahl, University of Southern Denmark, [email protected] 3

Abstract

The online search problem is a fundamental problem in finance. The numerous direct applications include searching for optimal prices for commodity trading and trading foreign currencies. In this paper, we analyze the advice complexity of this problem. In particular, we are interested in identifying the minimum amount of information needed in order to achieve a certain competitive ratio. We design an algorithm that reads b bits of advice and achieves a competitive ratio of (M/m)^{1/(2^b+1)}, where M and m are the maximum and minimum price in the input. We also give a matching lower bound. Furthermore, we compare the power of advice and randomization for this problem.

6.1 Introduction

We study the online search problem (abbreviated Online Search), which is formulated as an online (profit) maximization problem. For such problems, the input arrives gradually in consecutive time steps. Each piece of input is called a request. After a request is given, an online algorithm (also called the online player) has to produce a definite piece of the output, called an answer. Each answer is thus computed without any knowledge about further requests [34]. The goal is to produce an output with a profit that is as large as possible. In Online Search, the online player searches for the maximum price of a certain asset that unfolds sequentially.

1 Supported by ERDT Scholarship. Sandwich program funded by PCIEERD-BCDA.
2 Supported by SNF grant 200021-146372.
3 Supported by the Villum Foundation and the Stibo-Foundation.


Suppose the player, in this context a trader, would like to transfer its assets from, say, USD to CHF in one transaction. Each day (formally, each time step), the trader receives a quotation of the current exchange rate and decides whether to trade on the same day or to wait. The trading duration is finite, and it may be known or unknown to the trader. Formally, we define Online Search as follows.

Definition 5 (Online Search Problem). Let σ = (p1, p2, . . . , pn), with 0 < m ≤ pi ≤ M for all 1 ≤ i ≤ n, be a sequence of prices that arrives in an online fashion. Here, M and m are upper and lower bounds on the prices, respectively. For each day i, price pi is revealed, and the online player has to choose whether to trade on the same day or to wait for the new price quotation on the next day. If the player trades on day i, its profit is pi. If the player did not trade for the first n − 1 days, it must accept pn. The player's goal is to maximize the obtained price (i.e., its profit).

We assume that the parameters m and M for the price range are fixed and known to the online algorithm in advance. The duration of the trading period n is finite and may or may not be known to the online algorithm. We do not take sampling costs into account in the profit, i.e., the price for each day is freely given by the market to the trader. However, some direct applications of Online Search do require considering sampling costs. For instance, obtaining prices of a certain product may incur some cost, either in the form of time or money, for the player. For a study of such more involved cost variants, we refer the reader to Xu et al. [153], where the authors considered the accumulated sampling cost while maximizing the player's profit.

6.1.1 Competitive Analysis and Advice Complexity

Competitive analysis was introduced by Sleator and Tarjan in 1985 [139] to analyze the solution quality of online algorithms. The measure used in the analysis is called the competitive ratio, which can be obtained by comparing the profit of the online algorithm to the one of an optimal offline solution. The term “offline” is used when the whole input sequence is known in advance. Note that it is generally not possible for an online algorithm to compute the optimal offline solution in advance, because parts of the output have to be specified before the whole input is known. It is merely taken into account to analyze the profit that can hypothetically be obtained if the whole input is known in advance. The competitive ratio of an online algorithm is formally defined as follows.

Definition 6 (Competitive Ratio). Let Π be an online maximization problem, let Alg be an online algorithm for Π, and let c > 1. Alg is said to be c-competitive if, for every instance I of Π, we have

c · profit(Alg(I)) ≥ profit(Opt(I)),

where profit(Alg(I)) is the profit of Alg on input I, and profit(Opt(I)) denotes the optimal offline profit.

In this paper, we study the advice complexity of Online Search. More specifically, we ask about the additional information both sufficient and necessary in order to improve the obtainable competitive ratio. In a way, this approach can be seen as measuring the information content of the problem at hand [92].


This tool, which was introduced by Dobrev et al. in 2008 [60] and then revised by Böckenhauer et al. [29], Hromkovič et al. [92], and Emek et al. [68], is a complementary tool to analyze online problems. In order to study the information that is needed in order to outperform purely deterministic (or randomized) online algorithms, we introduce a trusted source, referred to as an oracle, which sees the whole input in advance and may write binary information on a so-called advice tape. These advice bits are allowed to be any function of the entire input. The algorithm, which is called an online algorithm with advice in this setting, may then use the advice to compute the output for the given input. The approach is quantitative and problem-independent. In other words, the information supplied can be arbitrary (as long as it is computable). This is in particular interesting for giving lower bounds for many other measurements or relaxations of online problems. More specifically, hardness results in advice complexity give useful negative results about various semi-online approaches. If it is, for example, shown that O(log2 n) bits of advice do not help any online algorithm to achieve a better competitive ratio, this gives a negative answer to questions of the form: Would it help the algorithm to know the length of the input? Would it help the algorithm to know the number of requests of a certain type?

Many prominent online problems have been studied in this framework, including paging [29, 60], the k-server problem [29, 68, 79, 134], metrical task systems [68], and the online knapsack problem [30]. Negative results on the advice complexity can be transferred by a special kind of reduction [27, 40, 68]. Moreover, advice complexity has a close and non-trivial relation to randomization [28, 105, 128]. We now define online algorithms with advice formally.

Definition 7 (Advice Complexity). Let x1, . . . , xn be the input for an online problem Π. An online algorithm with advice, Alg, for Π computes the output sequence y1, . . . , yn, where yi is allowed to depend on x1, . . . , xi as well as on an advice string φ. The advice, φ, is written in binary on an infinite tape and is allowed to depend on the request sequence x1, . . . , xn. The advice complexity of Alg is the largest number of advice bits it reads from φ over all inputs of length at most n.

Our paper is devoted both to creating online algorithms with advice for Online Search that achieve a certain output quality while using a certain number of advice bits, and to showing that such algorithms cannot exist if the advice complexity is below a certain threshold.

Most of the work in advice complexity theory considers problems where at least n advice bits are required for an algorithm to be optimal. Here, we study a problem where only log2 n bits give an optimal algorithm. We investigate how this problem behaves when the number of advice bits is in the interval [1, log2 n]. For ease of presentation, we assume that log2 n is an integer.

6.2 Related Work

The search problem in an offline setting, i.e., where the set of prices is known in advance, can easily be solved optimally in time O(n). However, for a lot of online environments such as stock trading and foreign exchange, decisions should be made even though there is no knowledge of the future prices of the currencies. These problems are intrinsically online.

The most common approaches are Bayesian. These approaches rely on a prior distribution of prices, where the online algorithm computes a certain reservation price based on the distribution. The trader accepts any price that is larger than or equal to the reservation price. If this certain price is not met, the player has to trade on the last day (according to Definition 5). Throughout this paper, Alg[p] denotes the algorithm that accepts the first price it sees that is at least p.

Since the prior distribution of prices is not necessarily known in advance, El-Yaniv et al. [66] proposed to measure the quality of online trading algorithms using competitive analysis. Moreover, for some assets, the goal is not just to increase the profit but to minimize the loss by considering the possible worst-case scenarios in the market. Competitive analysis in financial problems such as Online Search can provide a guaranteed performance measure for the trader's profit. The best deterministic online algorithm with respect to competitive analysis is Alg[√(Mm)], i.e., the algorithm that accepts the first price it sees that is at least √(Mm) (or it accepts pn if no such price is ever seen). This algorithm has a competitive ratio of √(M/m), which is provably the best competitive ratio any deterministic online algorithm without advice can achieve [34].

Boyar et al. [47] studied how the problem behaves when applying a variety of different performance measures (and not just the competitive ratio).

6.3 Advice for the Online Search Problem

In this section, we explore the advice complexity of Online Search. We start by studying how much advice is necessary and sufficient in order to obtain an optimal output. After that, we study general c-competitiveness.

6.3.1 Advice for Optimality

It is possible for an algorithm to be optimal using log2 n bits of advice if n is known in advance, by simply encoding the day on which the largest price is offered. If n is not known in advance, the day has to be encoded with a self-delimiting encoding, for example, by writing the length of its binary representation in unary, followed by the binary representation itself. This requires roughly 2 log2 n bits [29].
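A small Python sketch of such a self-delimiting encoding (one standard variant; the exact bit layout in [29] may differ):

    def encode_day(i):
        # Unary length of bin(i), a terminating 0, then bin(i) itself:
        # about 2 log2(n) bits in total for a day i <= n.
        b = format(i, "b")
        return "1" * len(b) + "0" + b

    def decode_day(tape):
        length = tape.index("0")  # the unary part ends at the first 0
        return int(tape[length + 1:length + 1 + length], 2)

    assert decode_day(encode_day(13)) == 13  # "11110" + "1101"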

Moreover, optimality can also be achieved by encoding the value of pmax using O(log2(M/m)) bits, but since M and m can be arbitrarily large, this may be very expensive. We now give a complementing lower bound.

Theorem 17. At least log2 n bits of advice are necessary to obtain an optimal solution for Online Search. This holds even if n is known to the algorithm.

Proof. We use that an algorithm with b advice bits can be viewed as dealing with the best of 2^b algorithms without advice, for the particular instance chosen. First, we generate a set of request sequences S. Then, we show that, for S, there is no set of n − 1 or fewer deterministic algorithms which can ensure that at least one algorithm always gets the optimal solution.

We construct the set S in such a way that each request sequence has a unique optimal solution. The construction is as follows. Let S = σ1, σ2, . . . , σn, such that

σi = (m + δ, m + 2δ, . . . , m + iδ, m, . . . , m),

where the first i prices are increasing and the remaining n − i prices equal m, and where δ = (M − m)/n. Each σi is thus a sequence of n prices that follow an increasing order until day i. Then the price drops to the minimum m for the remaining n − i days. The optimal solution for each σi clearly is to trade on day i and obtain a profit of m + iδ. From the construction, it is impossible for any deterministic online algorithm to distinguish the request sequence σi from any other sequence of requests σj, for j > i, until the price for day i + 1 is offered. This is due to the fact that the sequences σi, σi+1, . . . , σn have the same prices offered from day 1 up to day i. Since we have n such input instances with different optimal solutions, and fewer than n algorithms, there is one algorithm that gets chosen for at least two of the above instances. Clearly, this algorithm cannot be optimal for both of these instances. Thus, any online algorithm with advice needs log2 n bits of advice to identify the actual input from these n possible cases.

Note that if it is required that the prices are integral, this construction still works by picking m and M such that δ is an integer.
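For illustration, the instances from the proof can be generated as follows (a Python sketch):

    def hard_instances(n, m, M):
        # The n request sequences sigma_1, ..., sigma_n from the proof:
        # prices rise in steps of delta = (M - m)/n for i days, then drop to m.
        delta = (M - m) / n
        return [[m + j * delta for j in range(1, i + 1)] + [m] * (n - i)
                for i in range(1, n + 1)]

    for sigma in hard_instances(4, 1.0, 5.0):
        print(sigma)  # any two sequences agree on a common prefix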

6.3.2 Advice for c-Competitiveness

Next, we investigate the advice complexity of Online Search if we have less than log2 n advice bits. This means we study a tradeoff between the number b of advice bits supplied and the competitive ratio c obtainable. Recall that, without advice bits, the optimal trader strategy is to use Alg[p], where the reservation price is p = √(Mm).

Before we present the upper bounds for online algorithms with advice for Online Search that achieve c-competitiveness, we give a simple intuition behind our strategy. We can think of it as having 2^b deterministic algorithms with different reservation prices. Each reservation price pi is obtained by solving the following equation.

p1/m = p2/p1 = . . . = p_{2^b}/p_{2^b−1} = M/p_{2^b}

Since the product of these 2^b + 1 equal ratios telescopes to M/m, each ratio equals (M/m)^{1/(2^b+1)}.

Theorem 18. For every b > 0, there exists an online algorithm with advice for Online Search which reads b bits of advice and achieves a competitive ratio of at most (M/m)^{1/(2^b+1)}. This holds even if n is unknown.

Proof. We describe an algorithm Alg with advice which reads b bits of advice and achieves the claimed competitive ratio. First, the oracle simulates the algorithms

Alg[m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)}]

for i = 1, . . . , 2^b. Let A denote the set of these algorithms. Then, it writes the value of i for the algorithm that achieves the best competitive ratio. We argue that at least one of the algorithms gets a competitive ratio of at most (M/m)^{1/(2^b+1)}.

We have three cases for pmax. The first case is when pmax < m^{2^b/(2^b+1)} M^{1/(2^b+1)}. Here, each algorithm in A will get the price offered on the last day, which is at least m. The competitive ratio for Alg is at most

m^{2^b/(2^b+1)} M^{1/(2^b+1)} / m = (M/m)^{1/(2^b+1)}.

The second case is when pmax ≥ m^{1/(2^b+1)} M^{2^b/(2^b+1)}. In this case, Alg[m^{1/(2^b+1)} M^{2^b/(2^b+1)}] gets a price of at least m^{1/(2^b+1)} M^{2^b/(2^b+1)}. Since Opt gets at most M, the competitive ratio for Alg is again at most

M / (m^{1/(2^b+1)} M^{2^b/(2^b+1)}) = (M/m)^{1/(2^b+1)}.

The last case is when m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)} ≤ pmax < m^{(2^b−i)/(2^b+1)} M^{(i+1)/(2^b+1)} for some i < 2^b. In this case, Alg[m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)}] gets at least its reservation price. Thus, also here, the competitive ratio for Alg is at most

m^{(2^b−i)/(2^b+1)} M^{(i+1)/(2^b+1)} / (m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)}) = (M/m)^{1/(2^b+1)}.

All in all, we have shown that, in each case, Alg obtains a competitive ratio of at most (M/m)^{1/(2^b+1)}, as we claimed.
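Both sides of this scheme are short in code. The following Python sketch computes the reservation prices from the proof and lets the oracle pick the index of the best one (our own function names; ties broken arbitrarily):

    def reservation_prices(m, M, b):
        # r_i = m^((2^b + 1 - i)/(2^b + 1)) * M^(i/(2^b + 1)), i = 1, ..., 2^b.
        k = 2 ** b + 1
        return [m ** ((k - i) / k) * M ** (i / k) for i in range(1, k)]

    def best_advice(prices, m, M, b):
        # Oracle side: the index (b bits) of the algorithm earning the most.
        candidates = reservation_prices(m, M, b)
        def profit(p):
            return next((q for q in prices[:-1] if q >= p), prices[-1])
        return max(range(len(candidates)), key=lambda i: profit(candidates[i]))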

We now present a matching lower bound.

Theorem 19. Let Alg be an algorithm with advice for Online Search which reads b < log2 n bits of advice. The competitive ratio of Alg is at least (M/m)^{1/(2^b+1)}.


Proof. For any given b < log2 n, let Alg be an algorithm with advice that reads at most b bits of advice. Again, we view this advice as 2^b deterministic online algorithms. We now give a class of request sequences that ensure that each of them gets a competitive ratio of at least (M/m)^{1/(2^b+1)}.

Consider the sequence σ = (p1, p2, . . . , p_{2^b}) with

pi = m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)}.

The adversary simulates all 2^b algorithms on this sequence. We consider two cases: either some request pi is rejected by all algorithms, or every request is accepted by some algorithm. For the first case, assume that there exists a request pi which is rejected by all 2^b algorithms. The adversary then requests p1, p2, . . . , pi followed by requests that are all equal to m. This means that Alg gets a price of at most pi−1 (the largest price requested before pi), while Opt gets a price of pi. Note that, if the first request is rejected, Alg gets a price of at most m = p0. In this case, the competitive ratio for Alg is at least

pi/pi−1 = (m^{(2^b+1−i)/(2^b+1)} M^{i/(2^b+1)}) / (m^{(2^b+2−i)/(2^b+1)} M^{(i−1)/(2^b+1)}) = (M/m)^{1/(2^b+1)}.

Thus, Alg cannot obtain a competitive ratio which is better than (M/m)^{1/(2^b+1)} if a request is rejected by all the algorithms.

Next, we consider the second case. Here, every request in σ is accepted by some algorithm. Since there are 2^b requests in σ, it follows that all algorithms accept a price that is at most p_{2^b}. Since 2^b < n, the adversary can still make a request. The final request is then M. The competitive ratio for Alg is therefore bounded from below by

M/p_{2^b} = M / (m^{1/(2^b+1)} M^{2^b/(2^b+1)}) = (M/m)^{1/(2^b+1)}.

In both cases, Alg has a competitive ratio of at least (M/m)^{1/(2^b+1)}, as claimed by the theorem.

6.4 Advice and Randomization

Randomization is often used to improve the competitive ratio of online algorithms (in expectation). Here, the online player is allowed to base some of its answers on a random source. An oblivious adversary knows the algorithm, but not the outcome of the random decisions. To provide an improvement over the lower bound for deterministic online algorithms for Online Search, El-Yaniv et al. [66] provided an upper bound by presenting a randomized algorithm with an expected competitive ratio of log2(M/m). Lorenz et al. [120] provided an asymptotically matching lower bound of (log2(M/m))/2 for randomized online algorithms for Online Search.

[Figure 6.1 plots the competitive ratio c = (M/m)^{1/(2^b+1)} against the number of advice bits b, together with the deterministic bound (M/m)^{1/2} and the randomized bounds log2(M/m) and log2(M/m)/2; the curves cross at b = b*.]

Figure 6.1: Plot comparing the competitive ratio of the online algorithm with advice with respect to the lower bound for deterministic and randomized algorithms.

In this section, we compare the power of advice to the ability of an online algorithm to access random bits for Online Search. The competitive ratio of online algorithms with advice (with an increasing number of advice bits) is shown in Figure 6.1. We fixed a fluctuation ratio M/m, and we highlighted the competitive ratio of the best deterministic algorithm, i.e., (M/m)^{1/2}, and the corresponding upper (i.e., log2(M/m)) and lower (i.e., log2(M/m)/2) bounds of randomized algorithms for Online Search.

It is interesting to point out that, when the number of advice bits is greater than

b* = log2( log2(M/m) / log2(log2(M/m)/2) − 1 ),

our online algorithm for Online Search outperforms the lower bound of randomized online algorithms. Moreover, as we increase the number of advice bits, the competitive ratio improves further. In the plot shown in Figure 6.1, we considered a fluctuation ratio M/m = n. Note that the competitive ratio is asymptotic to 1, but it is actually possible to get an optimal solution with log2 n advice bits.
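As a concrete example, take M/m = 2^16. The randomized lower bound is log2(M/m)/2 = 8, and b* = log2(16/log2(8) − 1) = log2(16/3 − 1) ≈ 2.12. Hence already b = 3 advice bits yield a competitive ratio of (2^16)^{1/(2^3+1)} = 2^{16/9} ≈ 3.43, well below the randomized lower bound of 8.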


6.5 Conclusion and Future Work

We studied the advice complexity of Online Search and determined upper and lower bounds on the advice complexity needed to achieve both optimality and c-competitiveness. We presented a tight lower bound of log2 n on the number of advice bits needed by any online algorithm to obtain optimal solutions, as shown in Theorem 17. We also provided a strategy with b bits of advice and achieved a tight bound of (M/m)^{1/(2^b+1)} for the competitive ratio, as shown in Theorems 18 and 19.

We compared the power of advice and randomization in terms of the competitive ratio; the comparison is shown in Figure 6.1.

For future work, it would be interesting to extend the results to the One-Way Trading problem with advice. It is known that Online Search and One-Way Trading are closely related. In fact, they are equivalent in the sense that, for every randomized algorithm for Online Search, there exists an equivalent deterministic algorithm for One-Way Trading [66]. Although randomization significantly improved the competitive ratio of algorithms for Online Search, it can be shown that it cannot help to improve the competitive ratio of algorithms for One-Way Trading. It would be interesting to investigate the tradeoff between advice and competitive ratio in One-Way Trading.


CHAPTER 7

The Advice Complexity of a Class of Hard Online Problems

Joan Boyar and Lene M. Favrholdt and Christian Kudahl and Jesper W. Mikkelsen

Department of Mathematics and Computer Science, University of Southern Denmark

joan,lenem,jesperwm,[email protected] 1

Abstract

The advice complexity of an online problem is a measure of how much knowledge of the future an online algorithm needs in order to achieve a certain competitive ratio. Using advice complexity, we define the first online complexity class, AOC. The class includes independent set, vertex cover, dominating set, and several others as complete problems. AOC-complete problems are hard, since a single wrong answer by the online algorithm can have devastating consequences. For each of these problems, we show that log(1 + (c − 1)^{c−1}/c^c) · n = Θ(n/c) bits of advice are necessary and sufficient (up to an additive term of O(log n)) to achieve a competitive ratio of c.

The results are obtained by introducing a new string guessing problem related to those of Emek et al. (TCS 2011) and Böckenhauer et al. (TCS 2014). It turns out that this gives a powerful but easy-to-use method for providing both upper and lower bounds on the advice complexity of an entire class of online problems, the AOC-complete problems.

Previous results of Halldórsson et al. (TCS 2002) on online independent set, in a related model, imply that the advice complexity of the problem is Θ(n/c). Our results improve on this by providing an exact formula for the higher-order term. For online disjoint path allocation, Böckenhauer et al. (ISAAC 2009) gave a lower bound of Ω(n/c) and an upper bound of O((n log c)/c) on the advice complexity. We improve on the upper bound by a factor of log c. For the remaining problems, no bounds on their advice complexity were previously known.

1 This work was partially supported by the Villum Foundation and the Danish Council for Independent Research, Natural Sciences.


7.1 Introduction

An online problem is an optimization problem in which the input is divided into small pieces, usually called requests, arriving sequentially. An online algorithm must serve each request without any knowledge of future requests, and the decisions made by the online algorithm are irrevocable. The goal is to minimize or maximize some objective function.

Traditionally, the quality of an online algorithm is measured by the competitive ratio, which is an analog of the approximation ratio for approximation algorithms: The solution produced by the online algorithm is compared to the solution produced by an optimal offline algorithm, Opt, which knows the entire request sequence in advance, and only the worst case is considered.

For some online problems, it is impossible to achieve a good competitive ratio. As an example, consider the classical problem of finding a maximum independent set in a graph. Suppose that, at some point, an online algorithm decides to include a vertex v in its solution. It then turns out that all forthcoming vertices in the graph are connected to v, but not to each other. Thus, the online algorithm cannot include any of these vertices. On the other hand, Opt knows the entire graph, and so it rejects v and instead takes all forthcoming vertices. In fact, one can easily show that, even if we allow randomization, no online algorithm for this problem can obtain a competitive ratio better than Ω(n), where n is the number of vertices in the graph.

A natural question for online problems, which is not answered by competitive analysis, is the following: Is there some small amount of information such that, if the online algorithm knew this, then it would be possible to achieve a significantly better competitive ratio? Our main result is a negative answer to this question for an entire class of hard online problems, including independent set. We prove our main result in the recently introduced advice complexity model. In this model, the online algorithm is provided with b bits of advice about the input. No restrictions are placed on the advice. This means that the advice could potentially encode some knowledge which we would never expect to be in possession of in practice, or the advice could be impossible to compute in any reasonable amount of time. Lower bounds obtained in the advice complexity model are therefore very robust, since they do not rely on any assumptions about the advice. If we know that b bits of advice are necessary to be c-competitive, then we know that any piece of information which can be encoded using fewer than b bits will not allow an online algorithm to be c-competitive.

In this paper, we use advice complexity to introduce the first complexity class for online problems. The complete problems for this class, one of which is independent set, are very hard in the online setting. We essentially show that for the complete problems in the class, a c-competitive online algorithm needs as much advice as is required to explicitly encode a solution of the desired quality. One important feature of our framework is that we introduce an abstract online problem which is complete for the class and well-suited to use as the starting point for reductions. This makes it easy to prove that a large number of online problems are complete for the class and thereby obtain tight bounds on their advice complexity.


7.1.1 Advice Complexity

Advice complexity [29, 60, 68, 92] is a quantitative and standardized, i.e., problem-independent, way of relaxing the online constraint by providing the algorithm with partial knowledge of the future. The main idea of advice complexity is to provide an online algorithm, Alg, with some advice bits. These bits are provided by a trusted oracle, O, which has unlimited computational power and knows the entire request sequence.

In the first model proposed [60], the advice bits were given as answers (of varying lengths) to questions posed by Alg. One difficulty with this model is that using at most 1 bit, three different options can be encoded (giving no bits, a 0, or a 1). This problem was addressed by the model proposed in [68], where the oracle is required to send a fixed number of advice bits per request. However, for the problems we consider, one bit per request is enough to guarantee an optimal solution, and so this model is not applicable. Instead, we will use the “advice-on-tape” model [29], which allows for a sublinear number of advice bits while avoiding the problem of encoding information in the length of each answer. Before the first request arrives, the oracle prepares an advice tape, an infinite binary string. The algorithm Alg may, at any point, read some bits from the advice tape. The advice complexity of Alg is the maximum number of bits read by Alg for any input sequence of at most a given length.

When advice complexity is combined with competitive analysis, the central question is: How many bits of advice are necessary and sufficient to achieve a given competitive ratio c?

Definition 8 (Competitive ratio [98, 139] and advice complexity [29, 92]). The input to an online problem, P, is a request sequence σ = 〈r1, . . . , rn〉. An online algorithm with advice, Alg, computes the output y = 〈y1, . . . , yn〉, under the constraint that yi is computed from ϕ, r1, . . . , ri, where ϕ is the content of the advice tape. Each possible output for P is associated with a score. For a request sequence σ, Alg(σ) (Opt(σ)) denotes the score of the output computed by Alg (Opt) when serving σ.

If P is a maximization problem, then Alg is c(n)-competitive if there exists a constant, α, such that, for all n ∈ N,

Opt(σ) ≤ c(n) · Alg(σ) + α,

for all request sequences, σ, of length at most n. If P is a minimization problem, then Alg is c(n)-competitive if there exists a constant, α, such that, for all n ∈ N,

Alg(σ) ≤ c(n) · Opt(σ) + α,

for all request sequences, σ, of length at most n. In both cases, if the inequality holds with α = 0, we say that Alg is strictly c(n)-competitive.

The advice complexity, b(n), of an algorithm, Alg, is the largest number of bits of ϕ read by Alg over all possible inputs of length at most n. The advice complexity of a problem, P, is a function, f(n, c), c ≥ 1, such that the smallest possible advice complexity of a strictly c-competitive online algorithm for P is f(n, c).


In this paper, we only consider deterministic online algorithms (with advice). Note that both the advice read and the competitive ratio may depend on n, but, for ease of notation, we often write b and c instead of b(n) and c(n). Also, by this definition, c ≥ 1, for both minimization and maximization problems. For minimization problems, the score is also called the cost, and for maximization problems, the score is also called the profit. Furthermore, we use output and solution interchangeably. Lower and upper bounds on the advice complexity have been obtained for many problems, see e.g. [15, 22, 27–30, 44, 45, 58, 60, 68, 72, 79, 92, 106, 127, 130, 135, 137].

7.1.2 String guessing

In [27,68], the advice complexity of the following string guessing problem, SG, is studied: For each request, which is simply empty and contains no information, the algorithm tries to guess a single bit (or more generally, a character from some finite alphabet). The correct answer is either revealed as soon as the algorithm has made its guess (known history), or all of the correct answers are revealed together at the very end of the request sequence (unknown history). The goal is to guess correctly as many bits as possible.

The problem was first introduced (under the name generalized matching pennies) in [68], where a lower bound for randomized algorithms with advice was given. In [27], the lower bound was improved for the case of deterministic algorithms. In fact, the lower bound given in [27] is tight up to lower-order additive terms. While SG is rather uninteresting in the view of traditional competitive analysis, it is very useful in an advice complexity setting. Indeed, it has been shown that the string guessing problem can be reduced to many classical online problems, thereby giving lower bounds on the advice complexity for these problems. This includes bin packing [45], the k-server problem [79], list update [44], metrical task system [68], set cover [27] and a certain version of maximum clique [27].

Asymmetric string guessing

In this paper, we introduce a new string guessing problem called asymmetric string guessing, ASG, formally defined in Section 7.2. The rules are similar to those of the original string guessing problem with an alphabet of size two, but the score function is asymmetric: If the algorithm answers 1 and the correct answer is 0, then this counts as a single wrong answer (as in the original problem). On the other hand, if the algorithm answers 0 and the correct answer is 1, the solution is deemed infeasible and the algorithm gets an infinite penalty. This asymmetry in the score function forces the algorithm to be very cautious when making its guesses.

As with the original string guessing problem, ASG is not very interesting in the traditional framework of competitive analysis. However, it turns out that ASG captures, in a very precise way, the hardness of problems such as online independent set and online vertex cover.


7.1.3 Problems

Many of the problems that we consider are graph problems, and most of them are studied in the vertex-arrival model. In this model, the vertices of an unknown graph are revealed one by one. That is, in each round, a vertex is revealed together with all edges connecting it to previously revealed vertices. For the problems we study in the vertex-arrival model, whenever a vertex, v, is revealed, an online algorithm Alg must (irrevocably) decide if v should be included in its solution or not. Denote by VAlg the vertices included by Alg in its solution after all vertices of the input graph have been revealed. The individual graph problems are defined by specifying the set of feasible solutions. The cost (profit) of an infeasible solution is ∞ (−∞).

The problems we consider in the vertex-arrival model are:

• Online Vertex Cover. A solution is feasible if it is a vertex cover in the input graph. The problem is a minimization problem.

• Online Cycle Finding. A solution is feasible if the subgraph induced by the vertices in the solution contains a cycle. We assume that the presented graph always contains a cycle. The problem is a minimization problem.

• Online Dominating Set. A solution is feasible if it is a dominating set in the input graph. The problem is a minimization problem.

• Online Independent Set. A solution is feasible if it is an independent set in the input graph. The problem is a maximization problem.

We emphasize that the classical 2-approximation algorithm for offline vertex cover cannot be used in our online setting, even though the algorithm is greedy. That algorithm greedily covers the edges (by selecting both endpoints) one by one, but this is not possible in the vertex-arrival model.

Apart from the graph problems in the vertex-arrival model mentioned above, we also consider the following online problems. Again, the cost (profit) of an infeasible solution is ∞ (−∞).

• Online Disjoint Path Allocation. A path with L + 1 vertices v0, . . . , vL is given. Each request (vi, vj) is a subpath specified by the two endpoints vi and vj. A request (vi, vj) must immediately be either accepted or rejected. This decision is irrevocable. A solution is feasible if the subpaths that have been accepted do not share any edges. The profit of a feasible solution is the number of accepted paths. The problem is a maximization problem.

• Online Set Cover (set-arrival version). A finite set U known as the universe is given. The input is a sequence of n finite subsets of U, (A1, . . . , An), such that ∪1≤i≤n Ai = U. A subset can be either accepted or rejected. Denote by S the set of indices of the subsets accepted in some solution. The solution is feasible if ∪i∈S Ai = U. The cost of a feasible solution is the number of accepted subsets. The problem is a minimization problem.


7.1.4 Preliminaries

Throughout the paper, we let n denote the number of requests in the input.

We let log denote the binary logarithm log2 and ln the natural logarithm loge.

By a string we always mean a bit string. For a string x ∈ {0, 1}^n, we denote by |x|1 the Hamming weight of x (that is, the number of 1s in x) and we define |x|0 = n − |x|1. Also, we denote the i’th bit of x by xi, so that x = x1x2 . . . xn.

For n ∈ N, define [n] = {1, 2, . . . , n}. For a subset Y ⊆ [n], the characteristic vector of Y is the string y = y1 . . . yn ∈ {0, 1}^n such that, for all i ∈ [n], yi = 1 if and only if i ∈ Y. For x, y ∈ {0, 1}^n, we write x ⊑ y if xi = 1 ⇒ yi = 1 for all 1 ≤ i ≤ n.

If the oracle needs to communicate some integer m to the algorithm, and if the algorithm does not know of any upper bound on m, the oracle needs to use a self-delimiting encoding. For instance, the oracle can write ⌈log(m + 1)⌉ in unary (a string of 1’s followed by a 0) before writing m itself in binary. In total, this encoding uses 2⌈log(m + 1)⌉ + 1 = O(log m) bits. Slightly more efficient encodings exist, see e.g. [28].
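To make the scheme above concrete, here is a minimal Python sketch that encodes and decodes an integer in this self-delimiting way (the function names are ours, for illustration only):

    def encode(m):
        # Binary representation of m uses ceil(log(m + 1)) bits (empty for m = 0).
        binary = bin(m)[2:] if m > 0 else ""
        # Length prefix in unary: a run of 1s terminated by a single 0.
        return "1" * len(binary) + "0" + binary

    def decode(tape):
        # Read the unary length prefix.
        length = 0
        while tape[length] == "1":
            length += 1
        binary = tape[length + 1 : 2 * length + 1]
        m = int(binary, 2) if binary else 0
        # Return the decoded integer and the unread remainder of the tape.
        return m, tape[2 * length + 1:]

    assert encode(5) == "1110101" and decode("1110101" + "0110") == (5, "0110")

The assert illustrates the point of the construction: the decoder knows exactly where the encoding of m stops, so the remaining bits of the tape stay available for other advice.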

7.1.5 Our contribution

In Section 7.3, we give lower and upper bounds on the advice complexity of the new asymmetric string guessing problem, ASG. The bounds are tight up to an additive term of O(log n). Both upper and lower bounds hold for the competitive ratio as well as the strict competitive ratio.

More precisely, if b is the number of advice bits necessary and sufficient to achieve a (strict) competitive ratio c > 1, then we show that

\[
b = \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) n \pm \Theta(\log n), \tag{7.1}
\]

where

\[
\frac{1}{e \ln 2} \cdot \frac{n}{c} \le \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) n \le \frac{n}{c}.
\]

This holds for all variants of the asymmetric string guessing problem (minimization/maximization and known/unknown history). See Figure 7.1 on page 102 for a graphical plot. For the lower bound, the constant hidden in Θ(log n) depends on the additive constant α of the c-competitive algorithm. We only consider c > 1, since in order to be strictly 1-competitive, an algorithm needs to correctly guess every single bit. It is easy to show that this requires n bits of advice (see e.g. [27]). By Remark 1 in Section 7.3, this also gives a lower bound for being 1-competitive.

In Section 7.4, we introduce a class, AOC, of online problems. The class AOC essentially consists of those problems which can be reduced to ASG. In particular, for any problem in AOC, our upper bound on the advice complexity for ASG applies. This is one of the few known examples of a general technique for constructing online algorithms with advice, which works for an entire class of problems.

On the hardness side, we show that several online problems, including Online Vertex Cover, Online Cycle Finding, Online Dominating Set, Online Independent Set, Online Set Cover and Online Disjoint Path Allocation are AOC-complete, that is, they have the same advice complexity as ASG. We prove this by providing reductions from ASG to each of these problems. The reductions preserve the competitive ratio and only increase the number of advice bits by an additive term of O(log n). Thus, we obtain bounds on the advice complexity of each of these problems which are essentially tight. Finally, we give a few examples of problems which belong to AOC, but are provably not AOC-complete. This first complexity class with its many complete problems could be the beginning of a complexity theory for online algorithms.

As a key step in obtaining our results, we establish a connection between the advice complexity of ASG and the size of covering designs (a well-studied object from the field of combinatorial designs).

Discussion of results

Note that the offline versions of the AOC-complete problems have very different properties. Finding the shortest cycle in a graph can be done in polynomial time. There is a greedy 2-approximation algorithm for finding a minimum vertex cover. No o(log n)-approximation algorithm exists for finding a minimum set cover (or a minimum dominating set), unless P = NP [132]. For any ε > 0, no n^{1−ε}-approximation algorithm exists for finding a maximum independent set, unless ZPP = NP [89]. Yet these AOC-complete problems all have essentially the same high advice complexity. Remarkably, the algorithm presented in this paper for problems in AOC is oblivious to the input: it ignores the input and uses only the advice to compute the output. Our lower bound proves that for AOC-complete problems, this oblivious algorithm is optimal. This shows that for AOC-complete problems, an adversary can reveal the input in such a way that an online algorithm simply cannot deduce any useful information from the previously revealed requests when it has to answer the current request. Thus, even though the AOC-complete problems are very different in the offline setting with respect to approximation, in the online setting, they become equally hard since an adversary can prevent an online algorithm from using any non-trivial structure of these problems.

Finally, we remark that the bounds (7.1) are under the assumption that the number of 1s in the input string (that is, the size of the optimal solution) is chosen adversarially. In fact, if t denotes the number of 1s in the input string, we give tight lower and upper bounds on the advice complexity as a function of both n, c, and t. We then obtain (7.1) by calculating the value of t which maximizes the advice needed (it turns out that this value is somewhere between n/(ec) and n/(2c)). If t is smaller or larger than this value, then our algorithm will use less advice than stated in (7.1).


Comparison with previous results

The original string guessing problem, SG, can be viewed as a maximization problem, the goal being to correctly guess as many of the n bits as possible. Clearly, Opt always obtains a profit of n. With a single bit of advice, an algorithm can achieve a strict competitive ratio of 2: The advice bit simply indicates whether the algorithm should always guess 0 or always guess 1. This is in stark contrast to ASG, where linear advice is needed to achieve any constant competitive ratio. On the other hand, for both SG and ASG, achieving a constant competitive ratio c < 2 requires linear advice. However, the exact amount of advice required to achieve such a competitive ratio is larger for ASG than for SG. See Figure 7.1 for a graphical comparison.

The problems Online Independent Set and Online Disjoint Path Allocation, which we show to be AOC-complete, have previously been studied in the context of advice complexity or similar models. We present a detailed comparison of our work to these previous results.

In [29], among other problems, the advice complexity of Online Disjoint Path Allocation is considered. It is shown that a strictly c-competitive algorithm must read at least (n + 2)/(2c) − 2 bits of advice. Comparing with our results, we see that this lower bound is asymptotically tight. On the other hand, the authors show that for any c ≥ 2, there exists a strictly c-competitive online algorithm reading at most b bits of advice, where

\[
b = \min\left\{ n \log\left(\frac{c}{(c-1)^{(c-1)/c}}\right), \frac{n \log n}{c} \right\} + 3\log n + O(1).
\]

We remark that $n \log\left(c/(c-1)^{(c-1)/c}\right) \ge (n \log c)/c$, for c ≥ 2. Thus, this upper bound is a factor of 2 log c away from the lower bound.

In [87], the problem Online Independent Set is studied in a multi-solution model. In this model, an online algorithm is allowed to maintain multiple solutions. The algorithm knows (a priori) the number n of vertices in the input graph. The model is parameterized by a function r(n). Whenever a vertex v is revealed, the algorithm can include v in at most r(n) different solutions (some of which might be new solutions with v as the first vertex). At the end, the algorithm outputs the solution which contains the most vertices.

The multi-solution model is closely related to the advice complexity model. After processing the entire input, an algorithm in the multi-solution model has created at most n · r(n) different solutions (since at most r(n) new solutions can be created in each round). Thus, one can convert a multi-solution algorithm to an algorithm with advice by letting the oracle provide log(n · r(n)) bits of advice indicating which solution to output. In addition, the oracle needs to provide O(log n) bits of advice in order to let the algorithm learn n (which was given to the multi-solution algorithm for free). On the other hand, an algorithm using b(n) bits of advice can be converted to 2^{b(n)} deterministic algorithms. One can then run them in parallel to obtain a multi-solution algorithm with r(n) = 2^{b(n)}. These simple conversions allow one to translate both upper and lower bounds between the two models almost exactly (up to a lower-order additive term of O(log n)).


It is shown in [87] that for any c ≥ 1, there is a strictly c-competitive algorithm in the multi-solution model if ⌈log r(n) − 1⌉ ≥ n/c. This gives a strictly c-competitive algorithm reading n/c + O(log n) bits of advice. On the other hand, it is shown that for any strictly c-competitive algorithm in the multi-solution model, it must hold that c ≥ n/(2 log(n · r(n))). This implies that any strictly c-competitive algorithm with advice must read at least n/(2c) − log n bits of advice. Thus, the upper and lower bounds obtained in [87] are asymptotically tight.

Comparing our results to those of [87] and [29], we see that we improve on both the lower and upper bounds on the advice complexity of the problems under consideration by giving tight results. For the upper bound on Online Disjoint Path Allocation, the improvement is a factor of (log c)/2. The results of [87] are already asymptotically tight. Our improvement consists of determining the exact coefficient of the higher-order term. Perhaps even more important, obtaining these tight lower and upper bounds on the advice complexity for Online Independent Set and Online Disjoint Path Allocation becomes very easy when using our string guessing problem ASG. We remark that the reductions we use to show the hardness of these problems reduce instances of ASG to instances of Online Independent Set (resp. Online Disjoint Path Allocation) that are identical to the hard instances used in [87] (resp. [29]). What enables us to improve the previous bounds, even though we use the same hard instances, is that we have a detailed analysis of the advice complexity of ASG at our disposal.

7.1.6 Related work

The advice complexity of Online Disjoint Path Allocation has also been studied as a function of the length of the path (as opposed to the number of requests), see [16,29].

The advice complexity of Online Independent Set on bipartite graphs and on sparse graphs has been determined in [58]. It turns out that for these graph classes, even a small amount of advice can be very helpful. For instance, it is shown that a single bit of advice is enough to be 4-competitive on trees (recall that without advice, it is not possible to be better than Ω(n)-competitive, even on trees).

It is clear that online maximum clique in the vertex-arrival model is essentially equivalent to Online Independent Set. In [27], the advice complexity of a different version of online maximum clique is studied: The vertices of a graph are revealed as in the vertex-arrival model. Let VAlg be the set of vertices selected by Alg and let C be a maximum clique in the subgraph induced by the vertices VAlg. The profit of the solution VAlg is |C|^2 / |VAlg|. In particular, the algorithm is not required to output a clique, but is instead punished for including too many additional vertices in its output.

The Online Vertex Cover problem and some variations thereof are studied in [55].

The advice complexity of an online set cover problem [6] has been studied in [106]. However, the version of online set cover that we consider is different and so our results and those of [106] are incomparable.

7.2 Asymmetric String Guessing

In this section, we formally define the asymmetric string guessing problem and give simple algorithms for the problem. There are four variants of the problem, one for each combination of minimization/maximization and known/unknown history. Collectively, these four problems will be referred to as ASG.

We have deliberately tried to mimic the definition of the string guessing problem SG from [27]. However, for ASG, the number, n, of requests is not revealed to the online algorithm (as opposed to in [27]). This is only a minor technical detail since it changes the advice complexity by at most O(log n) bits.

7.2.1 The Minimization Version

We begin by defining the two minimization variants of ASG: One in which the output of the algorithm cannot depend on the correctness of previous answers (unknown history), and one in which the algorithm, after each guess, learns the correct answer (known history²). We collectively refer to the two minimization problems as minASG.

Definition 9. The minimum asymmetric string guessing problem with unknown history, minASGu, has input 〈?1, . . . , ?n, x〉, where x ∈ {0, 1}^n, for some n ∈ N. For 1 ≤ i ≤ n, round i proceeds as follows:

1. The algorithm receives request ?i which contains no information.

2. The algorithm answers yi, where yi ∈ {0, 1}.

The output y = y1 . . . yn computed by the algorithm is feasible, if x ⊑ y. Otherwise, y is infeasible. The cost of a feasible output is |y|1, and the cost of an infeasible output is ∞. The goal is to minimize the cost.
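To see the asymmetry of the score function at a glance, here is a minimal Python sketch (our own notation) of the cost rule of Definition 9:

    import math

    def min_asg_cost(x, y):
        # y is feasible iff x ⊑ y, i.e., every 1 in x is also a 1 in y.
        feasible = all(yi == "1" for xi, yi in zip(x, y) if xi == "1")
        # A feasible output costs its number of 1s; an infeasible one costs infinity.
        return y.count("1") if feasible else math.inf

    print(min_asg_cost("0110", "1111"))  # 4: feasible but wasteful
    print(min_asg_cost("0110", "0100"))  # inf: the second 1 of x was guessed as 0

Guessing 1 everywhere is always feasible, which is why the hard part of the problem is deciding where an algorithm can safely answer 0.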

Thus, each request carries no information. While this may seem artificial, it does capture the hardness of some online problems (see for example Lemma 11).

Definition 10. The minimum asymmetric string guessing problem with known history, minASGk, has input 〈?, x1, . . . , xn〉, where x = x1 . . . xn ∈ {0, 1}^n, for some n ∈ N. For 1 ≤ i ≤ n, round i proceeds as follows:

1. If i > 1, the algorithm learns the correct answer, xi−1, to the request in the previous round.

2. The algorithm answers yi = f(x1, . . . , xi−1) ∈ {0, 1}, where f is a function defined by the algorithm.

The output y = y1 . . . yn computed by the algorithm is feasible, if x ⊑ y. Otherwise, y is infeasible. The cost of a feasible output is |y|1, and the cost of an infeasible output is ∞. The goal is to minimize the cost.

²The concept of known history for online problems also appears in [87,88] where it is denoted transparency.


The string x in either version of minASG will be referred to as the input string or the correct string. Note that the number of requests in both versions of minASG is n + 1, since there is a final request that does not require any response from the algorithm. This final request ensures that the entire string x is eventually known. For simplicity, we will measure the advice complexity of minASG as a function of n (this choice is not important as it changes the advice complexity by at most one bit).

Clearly, for any deterministic minASG algorithm which sometimes answers 0, there exists an input string on which the algorithm gets a cost of ∞. However, if an algorithm always answers 1, the input string could consist solely of 0s. Thus, no deterministic algorithm can achieve any competitive ratio bounded by a function of n. One can easily show that the same holds for any randomized algorithm.

We now give a simple algorithm for minASG which reads O(n/c) bits of advice and achieves a strict competitive ratio of ⌈c⌉.

Theorem 20. For any c ≥ 1, there is a strictly ⌈c⌉-competitive algorithm for minASG which reads ⌈n/c⌉ + O(log(n/c)) bits of advice.

Proof. We will prove the result for minASGu. Clearly, it then also holds for minASGk.

Let x = x1 . . . xn be the input string. The oracle encodes p = ⌈n/c⌉ in a self-delimiting way, which requires O(log(n/c)) bits of advice. For 0 ≤ j < p, define Cj = {xi : i ≡ j (mod p)}. These p sets partition the input string, and the size of each Cj is at most ⌈n/p⌉. The oracle writes one bit, bj, for each set Cj. If Cj contains only 0s, bj is set to 0. Otherwise, bj is set to 1. Thus, in total, the oracle writes ⌈n/c⌉ + O(log(n/c)) bits of advice to the advice tape.

The algorithm, Alg, learns p and the bits b0, . . . , bp−1 from the advice tape. In round i, Alg answers with the bit b_{i mod p}. We claim that this algorithm is strictly ⌈c⌉-competitive. It is clear that the algorithm produces a feasible output. Furthermore, if Alg answers 1 in round i, it must be the case that at least one input bit in C_{i mod p} is 1. Since the size of each Cj is at most ⌈n/p⌉ ≤ ⌈c⌉, this implies that Alg is strictly ⌈c⌉-competitive.
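The oracle/algorithm pair from this proof is easy to simulate. The following Python sketch (function names are ours; for simplicity, p is passed directly instead of via the self-delimiting encoding, and rounds are 0-indexed) shows the whole scheme:

    import math

    def oracle_advice(x, c):
        # One bit per residue class mod p; the bit is 1 iff the class contains a 1.
        p = math.ceil(len(x) / c)
        bits = ["1" if "1" in x[j::p] else "0" for j in range(p)]
        return p, bits

    def alg_output(n, p, bits):
        # In round i, answer with the advice bit of the class i mod p.
        return "".join(bits[i % p] for i in range(n))

    x = "0100100010"
    p, bits = oracle_advice(x, c=3)
    y = alg_output(len(x), p, bits)
    # y covers every 1 of x, and each advice bit set to 1 forces at most
    # ceil(n/p) <= ceil(c) ones in y.

Here x[j::p] is exactly the class Cj, and whenever the single bit bj is 1 the output answers 1 on all of Cj, which is what makes the ratio at most ⌈n/p⌉ ≤ ⌈c⌉.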

7.2.2 The Maximization Version

We also consider ASG in a maximization version. One can view this as a dual version of minASG.

Definition 11. The maximum asymmetric string guessing problem with unknown history, maxASGu, is identical to minASGu, except that the score function is different: The score of a feasible output y is |y|0, and the score of an infeasible output is −∞. The goal is to maximize the score.

The maximum asymmetric string guessing problem with known history is defined similarly:

Definition 12. The maximum asymmetric string guessing problem with known history, maxASGk, is identical to minASGk, except that the score function is different: The score of a feasible output y is |y|0, and the score of an infeasible output is −∞. The goal is to maximize the score.

We collectively refer to the two problems as maxASG. Similarly, minASGu and maxASGu are collectively called ASGu, and minASGk and maxASGk are collectively called ASGk.

An algorithm for maxASG without advice cannot attain any competitive ratio bounded by a function of n. If such an algorithm ever answered 0 in some round, an adversary would let the correct answer be 1, and the algorithm’s output would be infeasible. On the other hand, answering 1 in every round gives an output with a profit of zero.

Consider instances of minASG and maxASG with the same correct string x. It is clear that the optimal solution is the same for both instances. However, as is usual with dual versions of a problem, they differ with respect to approximation. For example, if half of the bits in x are 1s, then we get a 2-competitive solution y for the minASG instance by answering 1 in each round. However, in maxASG, the profit of the same solution y is zero. Despite this, there is a similar result to Theorem 20 for maxASG.

Theorem 21. For any c ≥ 1, there is a strictly ⌈c⌉-competitive algorithm for maxASG which reads ⌈n/c⌉ + O(log n) bits of advice.

Proof. We will prove the result for maxASGu. Clearly, it then also holds for maxASGk.

The oracle partitions the input string x = x1 . . . xn into ⌈c⌉ disjoint blocks, each containing (at most) ⌈n/c⌉ consecutive bits. Note that there must exist a block where the number of 0s is at least |x|0 / ⌈c⌉. The oracle uses O(log n) bits to encode the index i in which this block starts and the index i′ in which it ends. Furthermore, the oracle writes the string xi . . . xi′ onto the advice tape, which requires at most ⌈n/c⌉ bits, since this is the largest possible size of a block. The algorithm learns the string xi . . . xi′ and answers accordingly in rounds i to i′. In all other rounds, the algorithm answers 1. Since the profit of this output is at least |x|0 / ⌈c⌉, it follows that Alg is strictly ⌈c⌉-competitive.

In the following section, we determine the amount of advice an algorithm needs to achieve some competitive ratio c > 1. It turns out that the algorithms from Theorems 20 and 21 use the asymptotically smallest possible number of advice bits, but the coefficient in front of the term n/c can be improved.

7.3 Advice Complexity of ASG

In this section we give upper and lower bounds on the number of advice bits necessary to obtain c-competitive ASG algorithms, for some c > 1. The bounds are tight up to O(log n) bits. For ASGu, the gap between the upper and lower bounds stems only from the fact that the advice used for the upper bound includes the number, n, of requests and the number, t, of 1-bits in the input. Since the lower bound is shown to hold even if the algorithm knows n and t, this slight gap is to be expected.


The following two observations will be used extensively in the analysis.

Remark 1. Suppose that a minASG algorithm, Alg, is c-competitive. By definition, there exists a constant, α, such that Alg(σ) ≤ c · Opt(σ) + α. Then, one can construct a new algorithm, Alg′, which is strictly c-competitive and uses O(log n) additional advice bits as follows:

Use O(log n) bits of advice to encode the length n of the input and use α · ⌈log n⌉ = O(log n) bits of advice to encode the index of (at most) α rounds in which Alg guesses 1 but where the correct answer is 0. Clearly, Alg′ can use this additional advice to achieve a strict competitive ratio of c.

This also means that a lower bound of b on the number of advice bits required to be strictly c-competitive implies a lower bound of b − O(log n) advice bits for being c-competitive (where the constant hidden in O(log n) depends on the additive constant α of the c-competitive algorithm).

The same technique can be used for maxASG.

Remark 2. For a minimization problem, an algorithm, Alg, using b bits of advice can be converted into 2^b algorithms, Alg1, . . . , Alg_{2^b}, without advice, one for each possible advice string, such that Alg(σ) = min_i Algi(σ) for any input sequence σ. The same holds for maximization problems, except that in this case, Alg(σ) = max_i Algi(σ).

For ASG with unknown history, the output of a deterministic algorithm can depend only on the advice, since no information is revealed to the algorithm through the input. Thus, for minASGu and maxASGu, a deterministic algorithm using b advice bits can produce only 2^b different outputs, one for each possible advice string.

7.3.1 Using Covering Designs

In order to determine the advice complexity of ASG, we will use some basic results from the theory of combinatorial designs. We start with the definition of a covering design.

For any k ∈ N, a k-set is a set of cardinality k. Let v ≥ k ≥ t be positive integers. A (v, k, t)-covering design is a family of k-subsets (called blocks) of a v-set, S, such that any t-subset of S is contained in at least one block. The size of a covering design, D, is the number of blocks in D. The covering number, C(v, k, t), is the smallest possible size of a (v, k, t)-covering design. Many papers have been devoted to the study of these numbers. See [57] for a survey. The connection to ASG is that for inputs to minASG where the number of 1s is t, an (n, ⌊ct⌋, t)-covering design can be used to obtain a strictly c-competitive algorithm.
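The covering property is simple to state in code. The following brute-force Python check (illustrative only; real covering designs are found with far better methods) verifies that a family of blocks is a (v, k, t)-covering design:

    from itertools import combinations

    def is_covering_design(v, k, t, blocks):
        # Every t-subset of {1, ..., v} must be contained in at least one k-block.
        assert all(len(B) == k for B in blocks)
        return all(any(set(T) <= B for B in blocks)
                   for T in combinations(range(1, v + 1), t))

    # These three blocks form a (4, 3, 2)-covering design, so C(4, 3, 2) <= 3.
    print(is_covering_design(4, 3, 2, [{1, 2, 3}, {1, 2, 4}, {1, 3, 4}]))  # True

One can check by hand that two blocks never suffice here, even though the counting lower bound gives only $\binom{4}{2}/\binom{3}{2} = 2$; the covering number can thus exceed the lower bound of Lemma 5 below.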

It is clear that a (v, k, t)-covering design always exists. Since a single block has exactly $\binom{k}{t}$ t-subsets, and since the total number of t-subsets of a set of size v is $\binom{v}{t}$, it follows that $\binom{v}{t}/\binom{k}{t} \le C(v, k, t)$. We will make use of the following upper bound on the size of a covering design:


Lemma 5 (Erdős, Spencer [69]). For all natural numbers v ≥ k ≥ t,

\[
\frac{\binom{v}{t}}{\binom{k}{t}} \le C(v, k, t) \le \frac{\binom{v}{t}}{\binom{k}{t}} \left(1 + \ln \binom{k}{t}\right).
\]

We use Lemma 5 to express both the upper and lower bound in terms of (a quotient of) binomial coefficients. This introduces an additional difference of log n between the stated lower and upper bounds.

Lemma 21 in Appendix 7.6 shows how the bounds we obtain can be approximated by a closed formula, avoiding binomial coefficients. This approximation costs an additional (additive) difference of O(log n) between the lower and upper bounds. The approximation is in terms of the following function:

\[
B(n, c) = \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) n
\]

For c > 1, we show that B(n, c) ± O(log n) bits of advice are necessary and sufficient to achieve a (strict) competitive ratio of c, for any version of ASG. See Figure 7.1 for a graphical view. It can be shown (Lemma 19) that

\[
\frac{1}{e \ln 2} \cdot \frac{n}{c} \le B(n, c) \le \frac{n}{c}.
\]

In particular, if c = o(n/ log n), we see that O(log n) becomes a lower-order additive term. Thus, for this range of c, we determine exactly the higher-order term in the advice complexity of ASG. Since this is the main focus of our paper, we will often refer to O(log n) as a lower-order additive term. The case where c = Ω(n/ log n) is treated separately in Section 7.3.4.
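For intuition about the magnitudes involved, B(n, c) and the two bounds above are easy to evaluate numerically; a quick Python sketch:

    import math

    def B(n, c):
        # B(n, c) = log2(1 + (c - 1)^(c - 1) / c^c) * n
        return math.log2(1 + (c - 1) ** (c - 1) / c ** c) * n

    n = 1000
    for c in [1.5, 2.0, 3.0, 5.0]:
        lower = n / (math.e * math.log(2) * c)
        print(f"c = {c}: {lower:.1f} <= B = {B(n, c):.1f} <= {n / c:.1f}")

For n = 1000 and c = 2, this prints roughly 265.4 ≤ B = 321.9 ≤ 500.0, so the true advice requirement sits noticeably below the simple ⌈n/c⌉-bit scheme of Theorem 20.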


Figure 7.1: The solid line shows the number of advice bits per request which are necessary and sufficient for obtaining a (strict) competitive ratio of c for ASG (ignoring lower-order terms). The dashed line shows the same number for the original binary string guessing problem SG [27]. The dotted lines are the functions 1/c and 1/(e ln(2)c).

7.3.2 Advice Complexity of minASG

We first consider minASG with unknown history. Clearly, an upper bound for minASGu is also valid for minASGk. We will show that the covering number C(v, k, t) is very closely related to the advice complexity of minASGu.

Theorem 22. For any c > 1, there exists a strictly c-competitive algorithm for minASG reading b bits of advice, where

\[
b \le B(n, c) + O(\log n).
\]

Proof. We will define an algorithm Alg and an oracle O for minASGu such that Alg is strictly c-competitive and reads at most b bits of advice. Clearly, the same algorithm can be used for minASGk.

Let x = x1 . . . xn be an input string to minASGu and set t = |x|1. The oracle O writes the value of n to the advice tape using a self-delimiting encoding. Furthermore, the oracle writes the value of t to the advice tape using ⌈log n⌉ bits (this is possible since t ≤ n). Thus, this part of the advice uses at most 3⌈log n⌉ + 1 bits in total.

If ⌊ct⌋ ≥ n, then Alg will answer 1 in each round. If t = 0, Alg will answer 0 in each round.

If 0 < ⌊ct⌋ < n, then Alg computes an optimal (n, ⌊ct⌋, t)-covering design as follows: Alg tries (in lexicographic order, say) all possible sets of ⌊ct⌋-blocks, starting with sets consisting of one block, then two blocks, and so on. For each such set, Alg can check if it is indeed an (n, ⌊ct⌋, t)-covering design. As soon as a valid covering design, D, is found, the algorithm can stop, since D will be a smallest possible (n, ⌊ct⌋, t)-covering design.

Now, O picks a ⌊ct⌋-block, Sy, from D, such that the characteristic vector y of Sy satisfies that x ⊑ y. Note that, since Alg is deterministic, the oracle knows which covering design Alg computes and the ordering of the blocks in that design. The oracle then writes the index of Sy on the advice tape. This requires at most ⌈log C(n, ⌊ct⌋, t)⌉ bits of advice.

Alg reads the index of the ⌊ct⌋-block Sy from the advice tape and answers 1 in round i if and only if the element i belongs to Sy. Clearly, this will result in Alg answering 1 exactly ⌊ct⌋ times and producing a feasible output. It follows that Alg is strictly c-competitive. Furthermore, the number of bits read by Alg is

\[
b \le \left\lceil \log\left( \max_{t \,:\, \lfloor ct\rfloor < n} C(n, \lfloor ct\rfloor, t) \right) \right\rceil + 3\lceil \log n\rceil + 1.
\]

The theorem now follows from Lemma 21, Inequality (7.8).

We now give an almost matching lower bound.

Theorem 23. For any c > 1, a c-competitive algorithm Alg for minASGu must read b bits of advice, where

\[
b \ge B(n, c) - O(\log n).
\]

Proof. By Remark 1, it suffices to prove the lower bound for strictly c-competitive algorithms. Suppose that Alg is strictly c-competitive. Let b be the number of advice bits read by Alg on inputs of length n. For 0 ≤ t ≤ n, let In,t be the set of input strings of length n with Hamming weight t, and let Yn,t be the corresponding set of output strings produced by Alg. We will argue that, for each t, 0 ≤ ⌊ct⌋ ≤ n, Yn,t can be converted to an (n, ⌊ct⌋, t)-covering design of size at most 2^b.

By Remark 2, Alg can produce at most 2^b different output strings, one for each possible advice string. Now, for each input string, x ∈ In,t, there must exist some advice which makes Alg output a string y, where |y|1 ≤ ⌊ct⌋ and x ⊑ y. If not, then Alg is not strictly c-competitive. For each possible output y ∈ {0, 1}^n computed by Alg, we convert it to the set Sy ⊆ [n] which has y as its characteristic vector. If |y|1 < ⌊ct⌋, we add some arbitrary elements to Sy so that Sy contains exactly ⌊ct⌋ elements. Since Alg is strictly c-competitive, this conversion gives the blocks of an (n, ⌊ct⌋, t)-covering design. The size of this covering design is at most 2^b, since Alg can produce at most 2^b different outputs. It follows that C(n, ⌊ct⌋, t) ≤ 2^b, for all t, 0 ≤ ⌊ct⌋ ≤ n. Thus,

\[
b \ge \log\left( \max_{t \,:\, \lfloor ct\rfloor < n} C(n, \lfloor ct\rfloor, t) \right).
\]

The theorem now follows from Lemma 21, Inequality (7.6).


Note that the proof of Theorem 23 relies heavily on the unknown history in order to bound the total number of possible outputs. However, Theorem 24 below states that the lower bound of B(n, c) − O(log n) also holds for minASGk. In order to prove this, we show how an adversary can ensure that revealing the correct answers for previous requests does not give the algorithm too much extra information. The way to ensure this depends on the specific strategy used by the algorithm and oracle at hand, and so the proof is more complicated than that of Theorem 23.

Theorem 24. For any c > 1, a c-competitive algorithm for minASGk must read b bits of advice, where

\[
b \ge B(n, c) - O(\log n).
\]

Proof. By Remark 1, it suffices to prove the lower bound for strictly c-competitive algorithms. Consider the set, In,t, of input strings of length n and Hamming weight t, for some t such that ⌊ct⌋ ≤ n. Restricting the input set to strings with one particular Hamming weight can only weaken the adversary.

Let Alg be a strictly c-competitive algorithm for minASGk which reads at most b bits of advice for any input of length n. For an advice string ϕ, denote by Iϕ ⊆ In,t the set of input strings for which Alg reads the advice ϕ. Since we are considering minASGk, in any round, Alg may use both the advice string and the information about the correct answer for previous rounds when deciding on an answer for the current round.

We will prove the lower bound by considering the computation of Alg, when reading the advice ϕ, as a game between Alg and an adversary. This game proceeds according to the rules specified in Definition 10. In particular, at the beginning of round i, the adversary reveals the correct answer xi−1 for round i − 1 to Alg. Thus, at the beginning of round i, the algorithm knows the first i − 1 bits, x1, . . . , xi−1, of the input string. We say that a string s ∈ Iϕ is alive in round i if sj = xj for all j < i, and we denote by I^i_ϕ ⊆ Iϕ the set of strings which are alive in round i. The adversary must reveal the correct answers in a way that is consistent with ϕ. That is, in each round, there must exist at least one string in Iϕ which is alive.

We first make two simple observations:

• Suppose that, in some round i, there exists a string s ∈ I^i_ϕ such that si = 1. Then, Alg must answer 1, or else the adversary can choose s as the input string and thereby force Alg to incur a cost of ∞. Thus, we will assume that Alg always answers 1 in such rounds.

• On the other hand, if, in round i, all s ∈ I^i_ϕ have si = 0, then Alg is free to answer 0. We will assume that Alg always answers 0 in such rounds.

Assume that, at some point during the computation, Iϕ contains exactly m strings and exactly h 1s are still to be revealed. We let L1(m, h) be the largest number such that for every set of m different strings of equal length, each with Hamming weight h, the adversary can force Alg to incur a cost of at least L1(m, h) when starting from this situation. In other words, L1(m, h) is the minimum number of rounds in which the adversary can force Alg to answer 1.

Claim: For any m, h ≥ 1,

\[
L_1(m, h) \ge \min\left\{ d : m \le \binom{d}{h} \right\}. \tag{7.2}
\]

Before proving the claim, we will show how it implies the theorem. For any t, 0 ≤ ⌊ct⌋ < n, there are $\binom{n}{t}$ possible input strings of length n and Hamming weight t. By the pigeonhole principle, there must exist an advice string ϕ′ such that $|I_{\varphi'}| \ge \binom{n}{t}/2^b$. Now, if $m = |I_{\varphi'}| > \binom{\lfloor ct\rfloor}{t}$, then by (7.2), $L_1(m, t) \ge \min\{d : \binom{\lfloor ct\rfloor}{t} < \binom{d}{t}\} = \lfloor ct\rfloor + 1$. This contradicts the fact that Alg is strictly c-competitive. Thus, it must hold that $|I_{\varphi'}| \le \binom{\lfloor ct\rfloor}{t}$. Combining the two inequalities involving $|I_{\varphi'}|$, we get

\[
\binom{\lfloor ct\rfloor}{t} \ge |I_{\varphi'}| \ge \frac{\binom{n}{t}}{2^b}
\;\Rightarrow\;
2^b \ge \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}.
\]

Since this holds for all values of t, we obtain the lower bound

\[
b \ge \log\left( \max_{t \,:\, \lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}} \right).
\]

The theorem then follows from Lemma 21 and Inequalities (7.7) and (7.6).

Proof of claim: Fix 1 ≤ i ≤ n and assume that, at the beginning of round i, there are m strings alive, all of which still have exactly h 1’s to be revealed. The rest of the proof is by induction on m and h.

For the base case, suppose first that h = 1. Then, for each of the m strings, s^1, . . . , s^m ∈ I^i_ϕ, there is exactly one index, i1, . . . , im, such that s^1_{i1} = · · · = s^m_{im} = 1. Since all strings in I^i_ϕ must be different, it follows that ij ≠ ik for j ≠ k. Without loss of generality, assume that i1 < i2 < · · · < im. In rounds i1, . . . , im−1, the adversary chooses the correct answer to be 0, while Alg is forced to answer 1 in each of these rounds. Finally, in round im, the adversary reveals the correct answer to be 1 (and hence the input string must be s^m). In total, Alg incurs a cost of m, which shows that L1(m, 1) = m for all m ≥ 1.

Assume now that m = 1. It is clear that L1(m, h) ≥ h for all values of h. In particular, L1(1, h) = h. This finishes the base case.

For the inductive step, fix integers m, h ≥ 2. Assume that the formula is true for all (i, j) such that j ≤ h − 1 or such that j = h and i ≤ m − 1. We will show that the formula is also true for (m, h).

Consider the strings s^1, . . . , s^m ∈ I^i_ϕ alive at the beginning of round i. We partition I^i_ϕ into two sets, S0 = {s^j : s^j_i = 0} and S1 = {s^j : s^j_i = 1}, and let m0 = |S0| and m1 = |S1|. Recall that if all sequences s ∈ I^i_ϕ have si = 0, we assume that Alg answers 0, leaving m and h unchanged. Thus, we may safely ignore such rounds and assume that m0 < m. We let

\[
d = \min\left\{ d' : m \le \binom{d'}{h} \right\},\qquad
d_0 = \min\left\{ d' : m_0 \le \binom{d'}{h} \right\},\qquad\text{and}\qquad
d_1 = \min\left\{ d' : m_1 \le \binom{d'}{h-1} \right\}.
\]

If d1 + 1 ≥ d, then the adversary chooses 1 as the correct answer in round i. By the induction hypothesis, L1(m1, h − 1) ≥ d1. Together with the fact that Alg is forced to answer 1 in round i, this shows that the adversary can force Alg to incur a cost of at least L1(m1, h − 1) + 1 ≥ d1 + 1 ≥ d.

On the other hand, if d1 + 1 < d, the adversary chooses 0 as the correct answer in round i. Note that this implies that each string alive in round i + 1 still has exactly h 1’s to be revealed. We must have d1 ≤ d − 2 since d1 and d are both integers. Moreover, by definition of d, it holds that $m > \binom{d-1}{h}$. Thus, we get the following lower bound on m0:

\[
\begin{aligned}
m_0 &= m - m_1 \\
&> \binom{d-1}{h} - \binom{d_1}{h-1} \\
&\ge \binom{d-1}{h} - \binom{d-2}{h-1}, \quad\text{since } \binom{a}{b} \text{ is increasing in } a \\
&= \binom{d-2}{h}, \quad\text{by Pascal's identity.}
\end{aligned}
\]

This lower bound on m0 shows that d0 > d − 2, and hence d0 ≥ d − 1. Combining this with the induction hypothesis gives L1(m0, h) ≥ d0 ≥ d − 1. Since m1 ≥ 1, Alg is forced to answer 1 in round i, so the adversary can make Alg incur a cost of at least L1(m0, h) + 1 ≥ d.

7.3.3 Advice Complexity of maxASG

In this section, we will show that the advice complexity of maxASG is the same as that of minASG, up to a lower-order additive term of O(log n). We use the same techniques as in Section 7.3.2.

As noted before, the difficulty of computing a c-competitive solution for a specific input string is not the same for minASG and maxASG. The key point is that computing a c-competitive solution for maxASG, on input strings with u 0’s, is roughly as difficult as computing a c-competitive solution for minASG, on input strings with ⌈u/c⌉ 1’s.

We show that the proofs of Theorems 22–24 can easily be modified to give upper and lower bounds on the advice complexity of maxASG. These bounds within the proofs look slightly different from the ones obtained for minASG, but we show in Lemmas 23 and 24 that they differ from B(n, c) by at most an additive term of O(log n).

Theorem 25. For any c > 1, there exists a strictly c-competitive online algorithm for maxASG reading b bits of advice, where

\[
b \le B(n, c) + O(\log n).
\]

Proof. We will define an algorithm Alg and an oracle O for maxASGu such that Alg is strictly c-competitive and reads at most b bits of advice. Clearly, the same algorithm can be used for maxASGk.

As in the proof of Theorem 22, we note that, for any integers n, u where 0 < u < n, the algorithm Alg can compute an optimal (n, n − ⌈u/c⌉, n − u)-covering design deterministically.

Let x = x1 . . . xn be an input string to maxASGu and set u = |x|0. The oracle O writes the values of n and u to the advice tape using at most 3⌈log n⌉ + 1 bits in total.

If 0 < u < n, then O picks an (n − ⌈u/c⌉)-block, Sy, from the optimal (n, n − ⌈u/c⌉, n − u)-covering design, as computed by Alg, such that the characteristic vector y of Sy satisfies that x ⊑ y. The oracle writes the index of Sy on the advice tape. This requires at most ⌈log C(n, n − ⌈u/c⌉, n − u)⌉ bits of advice.

The algorithm, Alg, first reads the values of n and u from the advice tape. If u = 0, then Alg will answer 1 in each round, and if u = n, then Alg will answer 0 in each round. If 0 < u < n, then Alg will read the index of the (n − ⌈u/c⌉)-block Sy from the advice tape. Alg will answer 1 in round i if and only if the element i belongs to the given block. Clearly, this will result in Alg answering 0 exactly n − (n − ⌈u/c⌉) = ⌈u/c⌉ times and producing a feasible output. It follows that Alg will be strictly c-competitive. Furthermore, the number of bits read by Alg is

\[
b \le \left\lceil \log\left( \max_{u \,:\, 0 < u < n} C\left(n, n - \left\lceil \frac{u}{c} \right\rceil, n - u\right) \right) \right\rceil + 3\lceil \log n\rceil + 1.
\]

The theorem now follows from Lemma 24.

Theorem 26. For any c > 1, a c-competitive algorithm Alg for maxASGu must read b bits of advice, where

\[
b \ge B(n, c) - O(\log n).
\]

Proof. By Remark 1, it suffices to prove the lower bound for strictly c-competitive algorithms. Suppose that Alg is strictly c-competitive. Let b be the number of advice bits read by Alg on inputs of length n. For 0 ≤ u ≤ n, let In,u be the set of input strings x of length n with |x|0 = u, and let Yn,u be the corresponding set of output strings produced by Alg. We will argue that, for each u, Yn,u can be converted to an (n, n − ⌈u/c⌉, n − u)-covering design of size at most 2^b.

By Remark 2, Alg can produce at most 2^b different output strings, one for each possible advice string. Now, for each input string, x = x1 . . . xn with |x|0 = u (and, hence, |x|1 = n − u), there must exist some advice which makes Alg output a string y = y1 . . . yn where |y|0 ≥ ⌈u/c⌉ (and, hence, |y|1 ≤ n − ⌈u/c⌉) and x ⊑ y. If not, then Alg is not strictly c-competitive. For each possible output y ∈ {0, 1}^n computed by Alg, we convert it to the set Sy ⊆ [n] which has y as its characteristic vector. If |y|1 < n − ⌈u/c⌉, we add some arbitrary elements to Sy so that Sy contains exactly n − ⌈u/c⌉ elements. Since Alg is strictly c-competitive, this conversion gives the blocks of an (n, n − ⌈u/c⌉, n − u)-covering design. The size of this covering design is at most 2^b, since Alg can produce at most 2^b different outputs. It follows that C(n, n − ⌈u/c⌉, n − u) ≤ 2^b, for all u. Thus,

\[
b \ge \log\left( \max_{u \,:\, 0 < u < n} C\left(n, n - \left\lceil \frac{u}{c} \right\rceil, n - u\right) \right).
\]

The theorem now follows from Lemma 24.

As was the case for minASG, the lower bound for maxASGu also holds for maxASGk.

Theorem 27. For any c > 1, a c-competitive algorithm Alg for maxASGk must read at least b bits of advice, where

\[
b \ge B(n, c) - O(\log n).
\]

Proof. By Remark 1, it suffices to prove the lower bound for strictly c-competitive algorithms.

Consider input strings, x, of length n and such that |x|0 = u. Let t = |x|1 = n − u. We reuse the notation from the proof of Theorem 24 and let Iϕ ⊆ In,t denote the set of strings for which Alg reads the advice string ϕ.

Suppose there exists some advice string ϕ′ such that $m = |I_{\varphi'}| > \binom{n - \lceil u/c\rceil}{t}$. Since Inequality (7.2) from the proof of Theorem 24 holds for maxASG too, we get that L1(m, t) ≥ n − ⌈u/c⌉ + 1. But this means that there exists an input x ∈ Iϕ′, with |x|1 = t, such that Alg must answer 1 at least n − ⌈u/c⌉ + 1 times. In other words, for the output y, computed by Alg on input x, it holds that |y|0 ≤ n − (n − ⌈u/c⌉ + 1) = ⌈u/c⌉ − 1. Since |x|0 = u, this contradicts the fact that Alg is strictly c-competitive.

Since there are $\binom{n}{u}$ possible input strings x such that |x|0 = u, and since the above was shown to hold for all choices of u, we get the lower bound

\[
b \ge \log\left( \max_{u \,:\, 0 < u < n} \frac{\binom{n}{u}}{\binom{n - \lceil u/c\rceil}{n - u}} \right).
\]

The theorem now follows from Lemma 24.

7.3.4 Advice Complexity of ASG when c = Ω(n/ log n)

Throughout the paper, we mostly ignore additive terms of O(log n) in the advice complexity. However, in this section, we will consider the advice complexity of ASG when the number of advice bits read is at most logarithmic. Surprisingly, it turns out that the advice complexity of minASG and maxASG is different in this case.

Recall that, by Theorem 25 (or Theorem 21), using O(log n) bits of advice, an algorithm for maxASG can achieve a competitive ratio of n/log n. The following theorem shows that there is a “phase transition” in the advice complexity, in the sense that using less than log n bits of advice is no better than using no advice at all. We remark that Theorem 28 and its proof are essentially equivalent to a previous result of Halldórsson et al. [87] on Online Independent Set in the multi-solution model.

Theorem 28 (cf. [87]). Let Alg be an algorithm for maxASG reading b < ⌊log n⌋ bits of advice. Then, the competitive ratio of Alg is not bounded by a function of n. This is true even if Alg knows n in advance.

Proof. We will prove the result for maxASGk. Clearly, it then also holds for maxASGu.

By Remark 2, we can convert Alg to m = 2^b online algorithms without advice. Denote the algorithms by Alg1, . . . , Algm. Since b < ⌊log n⌋, it follows that m ≤ n/2. We claim that the adversary can construct an input string x = x1 . . . xn for maxASGk such that the following holds: For each 1 ≤ j ≤ m, the output of Algj is either infeasible or contains only 1s. Furthermore, x can be constructed such that |x|0 ≥ n/2.

We now show how the adversary may achieve this. For 1 ≤ i ≤ n, the adversary decides the value of xi as follows: If there is some algorithm, Algj, which answers 0 in round i and Algj answers 1 in all rounds before round i, the adversary lets xi = 1. In all other cases, the adversary lets xi = 0. It follows that if an algorithm Algj ever answers 0, its output will be infeasible. Furthermore, the number of 1’s in the input string constructed by the adversary is at most n/2, since m ≤ n/2. Thus, the profit of Opt on this input is at least n/2, while the profit of Alg is at most 0.
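This adversary is simple enough to write down directly. The Python sketch below (our own formulation) treats each advice-free algorithm as a callable that maps the history of revealed bits to its next guess:

    def adversary(n, algorithms):
        # Build x so that every algorithm either answers 1 forever or goes infeasible.
        x = []
        alive = set(range(len(algorithms)))  # indices that have answered 1 in every round so far
        for _ in range(n):
            history = "".join(x)
            guesses = {j: algorithms[j](history) for j in alive}
            zeros = {j for j, g in guesses.items() if g == "0"}
            if zeros:
                # A still-perfect algorithm answers 0 now: the correct bit is 1,
                # so those algorithms' outputs become infeasible.
                x.append("1")
                alive -= zeros
            else:
                x.append("0")
        return "".join(x)

Each algorithm can trigger the “append a 1” branch at most once, so x contains at most m ≤ n/2 ones, matching the counting argument in the proof.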

For minASG, the algorithm from Theorem 20 achieves a competitive ratio of ⌈c⌉ and uses O(n/c) bits of advice, for any c > 1. In particular, it is possible to achieve a competitive ratio of e.g. O(n/(log log n)) using O(log log n) bits of advice, which we have just shown is not possible for maxASG. The following theorem shows that no strictly c-competitive algorithm for minASG can use less than Ω(n/c) bits of advice, even if n/c = o(log n).

Theorem 29. For any c > 1, on inputs of length n, a strictly ⌈c⌉-competitive algorithm Alg for minASG must read at least b = Ω(n/c) bits of advice.

Proof. We will prove the result for minASGk. Clearly, it then also holds for minASGu.

Suppose that Alg is strictly ⌈c⌉-competitive. Since ⌊⌈c⌉t⌋ = ⌈c⌉t, it follows from the proof of Theorem 24 that Alg must read at least b bits of advice, where

\[
b \ge \log\left( \max_{t \,:\, \lceil c\rceil t < n} \frac{\binom{n}{t}}{\binom{\lceil c\rceil t}{t}} \right).
\]

By Lemma 22, this implies that b = Ω(n/c).


7.4 The Complexity Class AOC

In this section, we define a class, AOC, and show that for each problem, P, in AOC, the advice complexity of P is at most that of ASG.

Definition 13. A problem, P, is in AOC (Asymmetric Online Covering) if it can be defined as follows: The input to an instance of P consists of a sequence of n requests, σ = 〈r1, . . . , rn〉, and possibly one final dummy request. An algorithm for P computes a binary output string, y = y1 . . . yn ∈ {0, 1}^n, where yi = f(r1, . . . , ri) for some function f.

For minimization (maximization) problems, the score function, s, maps a pair, (σ, y), of input and output to a cost (profit) in N ∪ {∞} (N ∪ {−∞}). For an input, σ, and an output, y, y is feasible if s(σ, y) ∈ N. Otherwise, y is infeasible. There must exist at least one feasible output. Let Smin(σ) (Smax(σ)) be the set of those outputs that minimize (maximize) s for a given input σ.

If P is a minimization problem, then for every input, σ, the following must hold:

1. For a feasible output, y, s(σ, y) = |y|1.

2. An output, y, is feasible if there exists a y′ ∈ Smin(σ) such that y′ ⊑ y. If there is no such y′, the output may or may not be feasible.

If P is a maximization problem, then for every input, σ, the following must hold:

1. For a feasible output, y, s(σ, y) = |y|0.

2. An output, y, is feasible if there exists a y′ ∈ Smax(σ) such that y′ ⊑ y. If there is no such y′, the output may or may not be feasible.

The dummy request is a request that does not require an answer and is not counted when we count the number of requests. Most of the problems that we consider will not have such a dummy request, but it is necessary to make sure that ASG belongs to AOC.

The input, σ, to a problem P in AOC can contain any kind of information. However, for each request, an algorithm for P only needs to make a binary decision. If the problem is a minimization problem, it is useful to think of answering 1 as accepting the request and answering 0 as rejecting the request (e.g. vertices in a vertex cover). The output is guaranteed to be feasible if the accepted requests are a superset of the requests accepted in an optimal solution (they “cover” the optimal solution).

If the problem is a maximization problem, it is useful to think of answering 0 as accepting the request and answering 1 as rejecting the request (e.g. vertices in an independent set). The output is guaranteed to be feasible if the accepted requests are a subset of the requests accepted in an optimal solution.

Note that outputs for problems in AOC may have a score of ±∞. This is used to model that the output is infeasible (e.g. not a vertex cover/independent set).

We now show that our ASGu algorithm based on covering designs works for every problem in AOC. This gives an upper bound on the advice complexity for all problems in AOC.

Theorem 30. Let P be a problem in AOC. There exists a strictly c-competitive online algorithm for P reading b bits of advice, where

\[
b \le B(n, c) + O(\log n).
\]

Proof. We first assume that P is a minimization problem. Let Alg be a strictly c-competitive minASGu algorithm reading at most b bits of advice provided by an oracle O. By Theorem 22, such an algorithm exists. We will define a P algorithm, Alg′, together with an oracle O′, that is strictly c-competitive and reads at most b bits of advice.

For a given input, σ, to P, the oracle O′ starts by computing an x such that x ∈ Smin(σ). This is always possible since by the definition of AOC, such an x always exists, and O′ has unlimited computational power. Let ϕ be the advice that O would write to the advice tape if x were the input string in an instance of minASGu. O′ writes ϕ to the advice tape. From here, Alg′ behaves as Alg would do when reading ϕ (in particular, Alg′ ignores any possible information contained in σ) and computes the output y. Since Alg is strictly c-competitive for minASGu, we know that x ⊑ y and that |y|1 ≤ c |x|1. Since P is in AOC, this implies that y is feasible (with respect to the input σ) and that s(σ, y) ≤ c |x|1 = c · Opt(σ).

Similarly, one can reduce a maximization problem to maxASGu and apply Theorem 25.

Showing that a problem, P, belongs to AOC immediately gives an upper bound on the advice complexity of P. For all variants of ASG, we know that this upper bound is tight up to an additive O(log n) term. This leads us to the following definition of completeness.

Definition 14. A problem, P, is AOC-complete if

• P belongs to AOC and

• for all c > 1, any c-competitive algorithm for P must read at least b bits of advice, where

\[
b \ge B(n, c) - O(\log n).
\]

Thus, the advice complexity of an AOC-complete problem must be identical to the upper bound from Theorem 30, up to a lower-order additive term of O(log n). By Definitions 9–12 combined with Theorems 23–24 and 26–27, all of minASGu, minASGk, maxASGu and maxASGk are AOC-complete.

When we show that some problem, P, is AOC-complete, we usually do this by giving a reduction from a known AOC-complete problem to P, preserving the competitive ratio and increasing the number of advice bits by at most O(log n). ASGk is especially well-suited as a starting point for such reductions.

We allow for an additional O(log n) bits of advice in Definition 14 in order to be able to use the reduction between the strict and non-strict competitive ratios as explained in Remark 1 and in order to encode some natural parameters of the problem, such as the input length or the score of an optimal solution. For most values of c, it seems reasonable to allow these additional advice bits. However, it does mean that for c = Ω(n/ log n), the requirement in the definition of AOC-complete is vacuously true. We refer to Section 7.3.4 for a discussion of the advice complexity for this range of competitive ratio.

7.4.1 AOC-complete Minimization Problems

In this section, we show that several online problems are AOC-complete, starting with Online Vertex Cover. See the introduction for the definitions of the problems.

Online Vertex Cover.

Lemma 6. Online Vertex Cover is in AOC.

Proof. We need to verify the conditions in Definition 13.

Recall that an input σ = 〈r1, . . . , rn〉 for Online Vertex Cover is a sequence of requests, where each request is a vertex along with the edges connecting it to previously requested vertices. There is no dummy request at the end. For each request, ri, an algorithm makes a binary choice, yi: It either includes the vertex into its solution (yi = 1) or not (yi = 0).

The cost of an infeasible solution is ∞. A solution y = y1 . . . yn for Online Vertex Cover is feasible if the vertices included in the solution form a vertex cover in the input graph. Clearly, there is always at least one feasible solution, since taking all the vertices will give a vertex cover.

Thus, Online Vertex Cover has the right form. Finally, we verify that conditions 1 and 2 are also satisfied: Condition 1 is satisfied since the cost of a feasible solution is the number of vertices in the solution, and condition 2 is satisfied since a superset of a vertex cover is also a vertex cover.

We now show a hardness result for Online Vertex Cover. In our reduction, we make use of the following graph construction. The same construction will also be used later on for other problems. We remark that this graph construction is identical to the one used in [87] for showing lower bounds for Online Independent Set in the multi-solution model.

Definition 15 (cf. [87]). For any string x = x1 . . . xn ∈ {0, 1}^n, define Gx = (V, E) as follows:

\[
V = \{v_1, \ldots, v_n\}, \qquad E = \{(v_i, v_j) : x_i = 1 \text{ and } i < j\}.
\]

Furthermore, let V0 = {vi : xi = 0} and V1 = {vi : xi = 1}.
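Definition 15 translates directly into code; the following Python sketch (our own representation, with vertices numbered 1 to n) builds Gx and reproduces the example of Figure 7.2:

    def build_G(x):
        # v_i and v_j (i < j) are adjacent iff x_i = 1.
        n = len(x)
        V = list(range(1, n + 1))
        E = [(i, j) for i in V for j in V if i < j and x[i - 1] == "1"]
        return V, E

    V, E = build_G("011010")
    # V1 = {v2, v3, v5} is a clique (edges (2,3), (2,5), (3,5) are all present),
    # while V0 = {v1, v4, v6} is an independent set.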

For a string x ∈ {0, 1}^n, the graph Gx from Definition 15 is a split graph: The vertex set V can be partitioned into V0 and V1 such that V0 is an independent set of size |x|0 and V1 is a clique of size |x|1.

Lemma 7. If there is a c-competitive algorithm reading b bits for Online Vertex Cover, then there is a c-competitive algorithm reading b + O(log n) bits for minASGk.

Figure 7.2: The graph $G_{011010}$ (the vertices $v_1, \dots, v_6$ are labeled with the bits 0 1 1 0 1 0 of x).

Proof. Let Alg be a c-competitive algorithm for Online Vertex Cover reading at most b bits of advice. By definition, there exists a constant α such that Alg(σ) ≤ c · Opt(σ) + α for any input sequence σ. We will define an algorithm, Alg′, and an oracle, O′, for minASGk such that Alg′ is c-competitive (with the same additive constant) and reads at most b + O(log n) bits of advice.

For x = x1 . . . xn an input string to minASGk, consider the input instance to Online Vertex Cover $G_x = (V, E)$ defined in Definition 15, where the vertices are requested in the order 〈v1, . . . , vn〉. We say that a vertex in $V_0$ is bad and that a vertex in $V_1$ is good. Note that $V_1 \setminus \{v_n\}$ is a minimum vertex cover of $G_x$. Also, if an algorithm rejects a good vertex $v_i$, then it must accept all later vertices $v_j$ (where i < j ≤ n) in order to cover the edges $(v_i, v_j)$. In particular, since the good vertices form a clique, no algorithm can reject more than one good vertex.

Let ϕ be the advice read by Alg, and let VAlg be the vertices chosen by Alg. Since Alg is c-competitive, we know that VAlg must be a vertex cover of size at most $c\,|V_1 \setminus \{v_n\}| + \alpha \le c\,|V_1| + \alpha$.

We now define Alg′ and O′. As usual, y denotes the output computed by Alg′. We consider three cases. The first two bits of the advice tape will be used to tell Alg′ which one of the three cases we are in.

Case 1: Alg accepts all good vertices in $G_x$, i.e., $V_1 \subseteq V_{Alg}$. The oracle O′ writes the advice ϕ to the advice tape. When Alg′ receives request i, it considers what Alg does when the vertex $v_i$ in $G_x$ is revealed: Alg′ answers 1 if Alg accepts $v_i$ and 0 otherwise. Note that it is possible for Alg′ to simulate Alg since, at the beginning of round i, Alg′ knows $x_1 \dots x_{i-1}$. In particular, Alg′ knows which edges to reveal to Alg along with the vertex $v_i$ in $G_x$. Together with access to the advice ϕ read by Alg, this allows Alg′ to simulate Alg. Since $V_1 \subseteq V_{Alg}$, we get that x ⊑ y. Furthermore, since $|V_{Alg}| \le c\,|V_1| + \alpha$, we also get that $|y|_1 \le c\,|x|_1 + \alpha$.

Case 2a: Alg rejects a good vertex, $v_i$, and accepts a bad vertex, $v_j$. In this case, the oracle O′ writes the indices i and j in a self-delimiting way, followed by ϕ, to the advice tape. Alg′ simulates Alg as before and answers accordingly, except that it answers 1 in round i and 0 in round j. This ensures that x ⊑ y. Furthermore, $|y|_1 = |V_{Alg}| \le c\,|V_1| + \alpha = c\,|x|_1 + \alpha$.

Case 2b: Alg rejects a good vertex, $v_i$, and all bad vertices. In this case, $V_{Alg} = V_1 \setminus \{v_i\}$. The oracle O′ writes the value of i to the advice tape in a self-delimiting way, followed by ϕ. Again, Alg′ simulates Alg, but it answers 1 in round i. Thus, x = y, meaning that x ⊑ y and y is optimal.

In all cases, Alg′ computes an output y such that x ⊑ y and $|y|_1 \le c\,|x|_1 + \alpha$. Since |ϕ| ≤ b, the maximum number of bits read by Alg′ is b + O(log n) + 2 = b + O(log n).

Theorem 31. Online Vertex Cover is AOC-complete.

Proof. By Lemma 6, Online Vertex Cover is in AOC. Combining Lemma 7 and Theorem 24 shows that a c-competitive algorithm for Online Vertex Cover must read at least B(n, c) − O(log n) bits of advice. Thus, Online Vertex Cover is AOC-complete.

Online Cycle Finding.

Most of the graph problems that we prove to be AOC-complete are, in their offline versions, NP-complete. However, in this section, we show that Online Cycle Finding is also AOC-complete. The offline version of this problem is very simple and can easily be solved in polynomial time.

Lemma 8. Online Cycle Finding is in AOC.

This and the following proofs of membership in AOC are omitted; they are almost identical to the proof of Lemma 6.

In order to show that Online Cycle Finding is AOC-complete, we will make use of the following graph.

Definition 16. For a string $x = x_1 \dots x_n \in \{0,1\}^n$, define $f(x_i)$ to be the largest $j < i$ such that $x_j = 1$. Note that this may not always be defined. We let max be the largest i such that $x_i = 1$. Similarly, we let min be the smallest i such that $x_i = 1$. We now define the graph $H_x = (V, E)$:
$$V = \{v_1, \dots, v_n\}, \qquad E = \{(v_j, v_i) : f(x_i) = j\} \cup \{(v_{\min}, v_{\max})\}.$$
Furthermore, let $V_0 = \{v_i : x_i = 0\}$ and $V_1 = \{v_i : x_i = 1\}$.

Figure 7.3: The graph $H_{0100101}$ (the vertices are labeled with the bits 0 1 0 0 1 0 1 of x).
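The construction is easy to carry out mechanically; the following sketch (Python, 0-indexed, assuming $|x|_1 \ge 2$ so that $v_{\min}$ and $v_{\max}$ exist; an illustration of ours, not code from the thesis) builds $H_x$ and reproduces the example of Figure 7.3:

```python
def build_H(x):
    """Construct H_x from Definition 16 for a bit string x with |x|_1 >= 2.

    Each vertex is linked to the last earlier 1-position (when one exists),
    and the chain of 1-positions is closed into a cycle via (min, max).
    """
    n = len(x)
    ones = [i for i in range(n) if x[i] == "1"]
    edges = set()
    for i in range(n):
        prev_ones = [j for j in ones if j < i]
        if prev_ones:                       # f(x_i) is defined
            edges.add((prev_ones[-1], i))
    edges.add((ones[0], ones[-1]))          # the edge (v_min, v_max)
    return edges

# x = 0100101: the 1-positions 1, 4, 6 form the unique cycle of H_x.
edges = build_H("0100101")
assert {(1, 4), (4, 6), (1, 6)} <= edges
```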

Lemma 9. If there is a c-competitive algorithm reading b bits for Online Cycle Finding, then there is a c-competitive algorithm reading b + O(log n) bits for minASGk.

Proof. Let Alg be a c-competitive algorithm (with an additive constant α) for Online Cycle Finding reading at most b bits of advice. We will define an algorithm Alg′ and an oracle O′ for minASGk such that Alg′ is c-competitive (with the same additive constant) and reads at most b + O(log n) bits of advice.


Let x = x1 . . . xn be an input string to minASGk. The oracle O′ first writes one bit of advice to indicate if |x|1 ≤ 2. If this is the case, O′ writes (in a self-delimiting way) the indices of these at most two 1s to the advice tape. This can be done using O(log n) bits and clearly allows Alg′ to be strictly 1-competitive. In the rest of the proof, we will assume that there are at least three 1s in x.

Consider the input instance to Online Cycle Finding, $H_x = (V, E)$, defined in Definition 16, where the vertices are requested in the order 〈v1, . . . , vn〉. Note that the vertices $V_1$ form the only cycle in $H_x$. Thus, if an algorithm rejects a vertex from $V_1$, the subgraph induced by the vertices accepted by the algorithm cannot contain a cycle.

Let ϕ be the advice read by Alg, and let VAlg be the vertices chosen by Alg when the n vertices of $H_x$ are revealed. Since Alg is c-competitive, we know that $|V_{Alg}| \le c\,|V_1| + \alpha$.

We now define Alg′. As usual, y denotes the output computed by Alg′. Since Alg is c-competitive, it must hold that $V_1 \subseteq V_{Alg}$. The oracle O′ writes the advice ϕ to the advice tape. When Alg′ receives request i at the beginning of round i in minASGk, it considers what Alg does when the vertex $v_i$ in $H_x$ is revealed: Alg′ answers 1 if Alg accepts $v_i$ and 0 otherwise. Note that it is possible for Alg′ to simulate Alg since, at the beginning of round i, Alg′ knows $x_1 \dots x_{i-1}$. In particular, Alg′ knows which edges were revealed to Alg along with the vertex $v_i$ in $H_x$. Note, however, that in order to simulate the edge from $v_{\min}$ to $v_{\max}$, Alg′ needs to know when $v_{\max}$ is being revealed. This can be achieved using O(log n) additional advice bits.

Together with access to the advice ϕ read by Alg, this allows Alg′ to simulate Alg. Since $V_1 \subseteq V_{Alg}$, we get that x ⊑ y. Furthermore, since $|V_{Alg}| \le c\,|V_1| + \alpha$, we also get that $|y|_1 \le c\,|x|_1 + \alpha$.

Theorem 32. Online Cycle Finding is AOC-complete.

Proof. This follows from Lemmas 8 and 9 together with Theorem 24.

Online Dominating Set.

In this section, we show that Online Dominating Set is also AOC-complete. We do not require that the vertices picked by the online algorithm form a dominating set at all times. We only require that the solution produced by the algorithm is a dominating set when the request sequence ends. Of course, this makes a difference only because we consider online algorithms with advice. For Online Vertex Cover, this issue did not arise, since it is not possible to end up with a vertex cover without maintaining a vertex cover at all times. Thus, in this aspect, Online Dominating Set is more similar to Online Cycle Finding.

Lemma 10. Online Dominating Set is in AOC.

In order to show that Online Dominating Set is AOC-complete, we use the following construction.

Definition 17. For a string $x = x_1 \dots x_n$ such that $|x|_1 \ge 1$, define max to be the largest i such that $x_i = 1$ and define $K_x = (V, E)$ as follows:
$$V = \{v_1, \dots, v_n\}, \qquad E = \{(v_i, v_{\max}) : x_i = 0\}.$$
Furthermore, let $V_0 = \{v_i : x_i = 0\}$ and $V_1 = \{v_i : x_i = 1\}$.
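As before, a small illustrative sketch of the construction (Python, 0-indexed; our own illustration, not code from the thesis):

```python
def build_K(x):
    """Construct K_x from Definition 17 for a bit string x with |x|_1 >= 1.

    Every 0-vertex is attached to v_max, the last 1-vertex; all other
    1-vertices are isolated, so V_1 is a smallest dominating set.
    """
    n = len(x)
    vmax = max(i for i in range(n) if x[i] == "1")
    edges = {(i, vmax) for i in range(n) if x[i] == "0"}
    return edges, vmax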

Note that $V_1$ is a smallest dominating set in $K_x$ and that any dominating set is either a superset of $V_1$ or equal to $V \setminus \{v_{\max}\}$. We now give a lower bound on the advice complexity of Online Dominating Set. Interestingly, it is possible to do this by making a reduction from minASGu (instead of minASGk) to Online Dominating Set.

Lemma 11. If there is a c-competitive algorithm for Online Dominating Set reading b bits of advice, then there is a c-competitive algorithm reading b + O(log n) bits of advice for minASGu.

Proof. Let Alg be a c-competitive algorithm (with an additive constant of α) for Online Dominating Set reading at most b bits of advice. We will define an algorithm Alg′ and an oracle O′ for minASGu such that Alg′ is c-competitive (with the same additive constant) and reads at most b + O(log n) bits of advice.

Let x = x1 . . . xn be an input string to minASGu. The oracle O′ first writes one bit of advice to indicate if |x|1 = 0. If this is the case, Alg′ answers 0 in each round. In the rest of the proof, we will assume that |x|1 ≥ 1.

Consider the input instance to Online Dominating Set, $K_x = (V, E)$, defined in Definition 17, where the vertices are requested in the order 〈v1, . . . , vn〉. Note that $V_1$ is the smallest dominating set in $K_x$. Let ϕ be the advice read by Alg, and let VAlg be the vertices chosen by Alg when the n vertices of $K_x$ are revealed. Since Alg is c-competitive, we know that VAlg is a dominating set of size $|V_{Alg}| \le c\,|V_1| + \alpha$.

We now define Alg′ and O′. The second bit of the advice tape will be used to let Alg′ distinguish the two cases described below. Note that the only vertex from $V_1$ that can be rejected by a c-competitive algorithm is $v_{\max}$, and nothing can be rejected when $V_1 = V$. Hence, the two cases are exhaustive.

Case 1: Alg accepts all vertices in $V_1$. The oracle O′ writes the value of max in a self-delimiting way. This requires O(log n) bits. Furthermore, O′ writes ϕ to the advice tape. Now, Alg′ learns ϕ and max and works as follows: in round i ≤ max − 1, Alg′ answers 1 if Alg accepts the vertex $v_i$ and 0 otherwise. Note that Alg′ knows that no edges are revealed to Alg in the first max − 1 rounds. Thus, Alg′ can compute the answers produced by Alg in these rounds from ϕ alone. In round max, Alg′ answers 1. In rounds max + 1, . . . , n, the algorithm Alg′ always answers 0.

Case 2: Alg rejects $v_{\max}$. In order to dominate $v_{\max}$, Alg must accept a vertex $v_i \in V_0$. The oracle O′ writes the values of max and i in a self-delimiting way, followed by ϕ, to the advice tape. Alg′ behaves as in Case 1, except that it answers 0 in round i.

In both cases, x ⊑ y and $|y|_1 \le |V_{Alg}| \le c\,|V_1| + \alpha = c\,|x|_1 + \alpha$. Furthermore, Alg′ reads b + O(log n) bits of advice.

Theorem 33. Online Dominating Set is AOC-complete.

Proof. This follows from Lemmas 10 and 11 together with Theorem 24.

Online Set Cover.

We study a version of Online Set Cover in which the universe is known from the beginning and the sets arrive online. Note that this problem is very different from the set cover problem studied in [6, 106], where the elements (and not the sets) arrive online.

Lemma 12. Online Set Cover is in AOC.

Lemma 13. If there is a c-competitive algorithm for Online Set Cover reading b bits of advice, then there is a c-competitive algorithm reading b + O(log n) bits of advice for minASGu.

Proof. Let x = x1 . . . xn be an input string to minASGu with |x|1 ≥ 1, and define max as in Definition 17. We define an instance of Online Set Cover as follows. The universe is $[n] = \{1, \dots, n\}$ and there are n requests. For i ≠ max, request i is just the singleton $\{i\}$. Request max is the set $\{\max\} \cup S_0$, where $S_0 = \{i : x_i = 0\}$.
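For illustration, a minimal sketch of this instance family (Python, 1-indexed as in the proof; our own illustration, not code from the thesis):

```python
def set_cover_instance(x):
    """The Online Set Cover instance used in the proof of Lemma 13.

    Universe is {1, ..., n}; request i is the singleton {i}, except that
    request max additionally covers all positions j with x_j = 0.
    """
    n = len(x)
    mx = max(i for i in range(1, n + 1) if x[i - 1] == "1")
    S0 = {i for i in range(1, n + 1) if x[i - 1] == "0"}
    return [({i} if i != mx else {mx} | S0) for i in range(1, n + 1)]
```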

Using these instances of Online Set Cover and the same arguments as in Lemma 11 proves the lemma. Note that for Online Set Cover, only Case 1 of Lemma 11 is relevant, since a c-competitive algorithm for this problem will accept all requests $\{i\}$ with $i \notin S_0$.

Theorem 34. Online Set Cover is AOC-complete.

Proof. This follows from Lemmas 12 and 13 together with Theorem 24.

7.4.2 AOC-complete Maximization Problems

In this section, we consider two maximization problems which are AOC-complete.

Online Independent Set.

The first maximization problem that we consider is Online Independent Set.

Lemma 14. Online Independent Set is in AOC.

Proof. Each request is a vertex along with the edges connecting it to previously requested vertices. The algorithm makes a binary choice for each request: to include the vertex (yi = 0) or not (yi = 1). The feasible outputs are those that are independent sets. There exists a feasible output (taking no vertices). The score of a feasible output is the number of vertices in it, and the score of an infeasible output is −∞. Any subset of the vertices in an optimal solution is a feasible solution.


Lemma 15. If there is a c-competitive algorithm reading b bits for Online Independent Set, then there is a c-competitive algorithm reading b + O(log n) bits for maxASGk.

Proof. The proof is almost identical to the proof of Lemma 7. Let Alg be a c-competitive algorithm (with an additive constant of α) for Online Independent Set reading at most b bits of advice. We will define an algorithm Alg′ and an oracle O′ for maxASGk such that Alg′ is c-competitive (with the same additive constant) and reads at most b + O(log n) bits of advice.

As in Lemma 7, on input x = x1 . . . xn to maxASGk, the algorithm Alg′ simulates Alg on $G_x$ (from Definition 15). This time, a vertex in $V_0$ is good and a vertex in $V_1$ is bad. Note that $V_0 \cup \{v_n\}$ is a maximum independent set in $G_x$. Also, if Alg accepts a bad vertex, $v_i$, then no further vertices $v_j$ (where i < j ≤ n) can be accepted because of the edges $(v_i, v_j)$. Thus, Alg accepts at most one bad vertex. Let VAlg be the vertices accepted by Alg. Since Alg is c-competitive, VAlg is an independent set satisfying $|V_0| \le |V_0 \cup \{v_n\}| \le c\,|V_{Alg}| + \alpha$. We denote by y the output computed by Alg′. There are three cases to consider:

Case 1: All vertices accepted by Alg are good, that is, $V_{Alg} \subseteq V_0$. In this case, Alg′ answers 0 in round i if $v_i \in V_{Alg}$ and 1 otherwise. Clearly, x ⊑ y and $|x|_0 = |V_0| \le c\,|V_{Alg}| + \alpha = c\,|y|_0 + \alpha$.

Case 2a: Alg accepts a bad vertex, $v_i$, and rejects a good vertex, $v_j$. In this case, the oracle O′ writes the indices i and j in a self-delimiting way. Alg′ simulates Alg as before, but answers 1 in round i and 0 in round j. It follows that x ⊑ y and $|x|_0 \le c\,|y|_0 + \alpha$.

Case 2b: Alg accepts a bad vertex, $v_i$, and all good vertices. This implies that $V_{Alg}$ is an independent set of size $|V_0| + 1$, which must be optimal. The oracle O′ writes the value of i to the advice tape in a self-delimiting way. Alg′ simulates Alg as before but answers 1 in round i. It follows that x ⊑ y. Furthermore, $|y|_0 = |V_{Alg}| - 1 = |V_0| = |x|_0$, and hence the solution y is optimal.

In order to simulate Alg, the algorithm Alg′ needs to read at most b bits of advice plus O(log n) bits of advice to specify the case and handle the cases where Alg accepts a bad vertex.

Theorem 35. Online Independent Set is AOC-complete.

Proof. This follows from Lemmas 14 and 15 together with Theorem 27.

Online Disjoint Path Allocation.

In this section, we show that Online Disjoint Path Allocation is AOC-complete.

Lemma 16. Online Disjoint Path Allocation is in AOC.

In Lemma 17, we use the same hard instance for Online Disjoint Path Allocation as in [29] to get a lower bound on the advice complexity of Online Disjoint Path Allocation.

Figure 7.4: An example of the reduction used in the proof of Lemma 17, with x = 010, $L = 8$, and $I_x = \langle(0, 4), (4, 6), (4, 5)\rangle$. The request (0, 4) is good, since $x_1 = 0$, and (4, 6) is a bad request, since $x_2 = 1$.

Lemma 17. If there is a c-competitive algorithm reading b bits for Online Disjoint Path Allocation, then there is a c-competitive algorithm reading b + O(log n) bits for maxASGk.

Proof. The proof is similar to the proof of Lemma 15. Let Alg be a c-competitive algorithm for Online Disjoint Path Allocation reading at most b bits of advice. We will describe an algorithm Alg′ and an oracle O′ for maxASGk such that Alg′ is c-competitive (with the same additive constant α as Alg) and reads at most b + O(log n) bits of advice.

Let x = x1 . . . xn be an input to maxASGk. We define an instance $I_x$ of Online Disjoint Path Allocation with $L = 2^n$ (that is, the number of vertices on the path is $2^n + 1$). In round i, $1 \le i \le n$, a path of length $2^{n-i}$ arrives. For $2 \le i \le n$, the position of the path depends on $x_{i-1}$. We define the request sequence inductively (for an example, see Figure 7.4):

In round 1, the request $(u_1, v_1)$ arrives, where $u_1 = 0$ and $v_1 = 2^{n-1}$. In round i, $2 \le i \le n$, the request $(u_i, v_i)$ arrives, where
$$u_i = \begin{cases} u_{i-1}, & \text{if } x_{i-1} = 1,\\ v_{i-1}, & \text{if } x_{i-1} = 0,\end{cases} \qquad v_i = u_i + 2^{n-i}.$$

We say that a request $r_i = (u_i, v_i)$ is good if $x_i = 0$ and bad if $x_i = 1$. If $r_i$ is good, then none of the later requests overlap with $r_i$. On the other hand, if $r_i$ is bad, then all later requests do overlap with $r_i$. In particular, if one accepts a single bad request, then no further requests can be accepted. An optimal solution is obtained if one accepts all good requests together with $r_n$.
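For concreteness, the following sketch (Python; our own illustration, not code from the thesis) generates $I_x$ and reproduces the example of Figure 7.4:

```python
def build_I(x):
    """The Online Disjoint Path Allocation instance I_x from Lemma 17.

    The path has L = 2^n edges; round i requests a subpath of length
    2^(n-i) whose left endpoint is determined by x_{i-1}.
    """
    n = len(x)
    u, v = 0, 2 ** (n - 1)
    requests = [(u, v)]
    for i in range(2, n + 1):
        u = u if x[i - 2] == "1" else v   # x_{i-1} in the 1-indexed text
        v = u + 2 ** (n - i)
        requests.append((u, v))
    return requests

assert build_I("010") == [(0, 4), (4, 6), (4, 5)]   # the example from Figure 7.4
```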

The oracle O′ will provide Alg′ with the advice ϕ read by Alg when processing $I_x$. Since Alg′ knows the value of $x_{i-1}$ at the beginning of round i in maxASGk, Alg′ can use the advice ϕ to simulate Alg on $I_x$. If Alg only accepts good requests, it is clear that Alg′ can compute an output y such that x ⊑ y and $|x|_0 \le c\,|y|_0 + \alpha$. The case where Alg accepts a bad request is handled by using at most O(log n) additional advice bits, exactly as in Lemma 15.

Theorem 36. Online Disjoint Path Allocation is AOC-complete.


7.4.3 AOC Problems which are not AOC-complete

In this section, we will give two examples of problems in AOC which are provably not AOC-complete.

Uniform knapsack.

We define the problem Online Uniform Knapsack as follows: for each request, i, an item of weight $a_i$, $0 \le a_i \le 1$, is requested. A request must immediately be either accepted or rejected, and this decision is irrevocable. Let S denote the set of indices of accepted items. We say that S is a feasible solution if $\sum_{i \in S} a_i \le 1$. The profit of a feasible solution is the number of items accepted (all items have a value of 1). The problem is a maximization problem.

The Online Uniform Knapsack problem is the online knapsack problem as studied in [30], but with the restriction that all items have a value of 1. This problem is the same as online dual bin packing with only a single bin available and where items can be rejected.

It is clear that Online Uniform Knapsack belongs to AOC since a subset of a feasible solution is also a feasible solution. Furthermore, since all items have value 1, the profit of a feasible solution is simply the number of items packed in the knapsack. The problem is hard in the sense that no deterministic algorithm (without advice) can attain a strict competitive ratio better than Ω(n) (see [30, 123]). However, as the next lemma shows, the problem is not AOC-complete. In [30], it is shown that for any ε > 0, it is possible to achieve a competitive ratio of 1 + ε using O(log n) bits of advice, under the assumption that all weights and values can be represented in polynomial space. Lemma 18 shows how this assumption can be avoided when all items have unit value.

Lemma 18. There is a strictly 2-competitive Online Uniform Knapsack algorithm reading O(log n) bits of advice, where n is the length of the input.

Proof. Fix an input σ = 〈a1, . . . , an〉. Let m be the number of items accepted by Opt. The oracle writes m to the advice tape using a self-delimiting encoding. Since m ≤ n, this requires O(log n) bits. The algorithm Alg learns m from the advice tape and works as follows: if Alg is offered an item, $a_i$, such that $a_i \le 2/m$, and if accepting $a_i$ will not make the total weight of Alg's solution larger than 1, then Alg accepts $a_i$. Otherwise, $a_i$ is rejected.

In order to show that Alg is strictly 2-competitive, we define $A = \{a_i : a_i \le 2/m\}$. First note that |A| ≥ m/2, since the sizes of the m smallest items add up to at most 1. Thus, if Alg accepts all items contained in A, it accepts at least m/2 items. On the other hand, if Alg rejects any item $a_i \in A$, it means that it has already accepted items of total size more than 1 − 2/m. Since all accepted items have size at most 2/m, this means that Alg has accepted at least m/2 items.
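The algorithm itself is simple enough to state in a few lines. The following sketch mirrors the proof (Python; the function name and the way the advice m is passed are our own choices):

```python
def uniform_knapsack_with_advice(items, m):
    """A sketch of the strictly 2-competitive algorithm from Lemma 18.

    The advice is m, the number of items in an optimal solution; the
    algorithm greedily accepts any item of weight at most 2/m as long
    as the total accepted weight stays at most 1.
    """
    accepted, total = [], 0.0
    for a in items:
        if a <= 2 / m and total + a <= 1:
            accepted.append(a)
            total += a
    return accepted

# Example: Opt packs the four items of weight 0.25 (m = 4); the algorithm
# rejects the large item and accepts at least m/2 = 2 of the small ones.
print(uniform_knapsack_with_advice([0.9, 0.25, 0.25, 0.25, 0.25], m=4))
```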

Even though Online Uniform Knapsack is not AOC-complete, the fact that it belongs to AOC might still be of interest, since this provides some starting point for determining the advice complexity of the problem. In particular, it gives some (non-trivial) way to obtain a c-competitive algorithm for c < 2. Determining the exact advice complexity of Online Uniform Knapsack is left as an open problem.

Matching under edge-arrival.

We briefly consider the Online Matching problem in an edge-arrival version. For each request, an edge is revealed. An edge can be either accepted or rejected. Denote by $E_{Alg}$ the edges accepted by some algorithm Alg. A solution $E_{Alg}$ is feasible if the set of edges in the solution is a matching in the input graph, and the profit of a feasible solution is the number of edges in $E_{Alg}$. The problem is a maximization problem.

It is well-known that the greedy algorithm is 2-competitive for Online Matching. Since this algorithm works in an online setting without any advice, it follows that Online Matching is not AOC-complete. On the other hand, Online Matching is in AOC. This gives an upper bound on the advice complexity of the problem for 1 ≤ c < 2. It seems obvious that this upper bound is not tight, but currently, no better bound is known.
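For reference, a sketch of this classical greedy algorithm (Python; naming is ours):

```python
def greedy_matching(edge_stream):
    """The greedy algorithm for Online Matching under edge arrival.

    Accept an edge iff neither endpoint is already matched; this is
    2-competitive, since every edge of an optimal matching shares an
    endpoint with some accepted edge.
    """
    matched, matching = set(), []
    for (u, v) in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

print(greedy_matching([(1, 2), (2, 3), (3, 4)]))  # [(1, 2), (3, 4)]
```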

7.5 Conclusion and Open Problems

The following theorem summarizes the main results of this paper.

Theorem 37. For the problems

• Online Vertex Cover

• Online Cycle Finding

• Online Dominating Set

• Online Set Cover (set-arrival version)

• Online Independent Set

• Online Disjoint Path Allocation

and for any c > 1, possibly a function of the input length n,
$$b = \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n \pm O(\log n)$$
bits of advice are necessary and sufficient to achieve a (strict) competitive ratio of c.
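To get a feeling for the bound, one can evaluate its leading coefficient numerically; the following sketch (Python; our own illustration) prints the number of advice bits per request for a few values of c:

```python
import math

def advice_bits_per_request(c):
    """Leading coefficient log(1 + (c-1)^(c-1) / c^c) of the bound in Theorem 37."""
    return math.log2(1 + (c - 1) ** (c - 1) / c ** c)

for c in [1.5, 2, 4, 10]:
    print(f"c = {c:>4}: about {advice_bits_per_request(c):.4f} advice bits per request")
```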

As with the original string guessing problem SG [27, 68], we have shown that ASG is a useful tool for determining the advice complexity of online problems. It seems plausible that one could identify other variants of online string guessing and obtain classes similar to AOC. Potentially, this could lead to an entire hierarchy of string guessing problems and related classes.

More concretely, there are various possibilities of generalizing ASG. One could associate some positive weight to each bit $x_i$ in the input string. The goal would then be to produce a feasible output of minimum (or maximum) weight. Such a string guessing problem would model minimum weight vertex cover (or maximum weight independent set). Note that for maxASG, the algorithm from Theorem 21 works in the weighted version. However, the same is not true for any of the algorithms we have given for minASG. Thus, it remains an open problem if O(n/c) bits of advice suffice to achieve a competitive ratio of c for the weighted version of minASG.

Acknowledgements

The authors would like to thank Magnus Gausdal Find for helpful discussions.


7.6 Appendix: Approximation of the Advice Complexity Bounds

In Theorems 22–27, bounds on the advice complexity of ASG were obtained. These bounds are tight up to an additive term of O(log n). However, within the proofs, they are all expressed in terms of the minimum size of a certain covering design or a quotient of binomial coefficients. In this appendix, we prove the closed formula estimates for the advice complexity stated in Theorems 22–27 and 30. Again, these estimates are tight up to an additive term of O(log n). The key to obtaining the estimates is the estimation of a binomial coefficient using the binary entropy function.

7.6.1 Approximating the Function B(n, c)

Lemma 19. For c > 1, it holds that
$$\frac{1}{e\ln(2)} \cdot \frac{1}{c} \;\le\; \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) \;\le\; \frac{1}{c}.$$

Proof. We prove the upper bound first. To this end, note that
$$\log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) \le \frac{1}{c} \iff 1 + \frac{(c-1)^{c-1}}{c^c} \le 2^{1/c} \iff \left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{c} \le 2.$$
Using calculus, one may verify that $\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{c}$ is decreasing in c for c > 1. Thus, by continuity, it follows that
$$\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{c} \le \lim_{c\to1^+}\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{c} = \lim_{c\to1^+}\left(1 + \left(\frac{c-1}{c}\right)^{c-1}\frac{1}{c}\right)^{c} = \lim_{c\to1^+}\left(1 + \frac{1}{c}\right)^{c} = 2.$$

For the lower bound, let $a = e\ln(2)$ and note that
$$\frac{1}{ac} \le \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) \iff 2 \le \left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{ac}.$$
Again, using calculus, one may verify that $\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{ac}$ is decreasing in c for c > 1. It follows that
$$\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{ac} \ge \lim_{c\to\infty}\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)^{ac} = \lim_{c\to\infty}\left(1 + \left(\frac{c-1}{c}\right)^{c-1}\frac{1}{c}\right)^{ac} = \lim_{c\to\infty}\left(1 + \frac{1}{e}\cdot\frac{1}{c}\right)^{ac} = \lim_{c\to\infty}\left(1 + \frac{a/e}{ac}\right)^{ac} = e^{a/e} = e^{\ln(2)} = 2.$$
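The two bounds can also be checked numerically; the following sketch (Python; our own illustration) verifies the sandwich inequality of Lemma 19 for a few values of c:

```python
import math

def Bc(c):
    """B_c = log(1 + (c-1)^(c-1) / c^c), the per-request advice bound."""
    return math.log2(1 + (c - 1) ** (c - 1) / c ** c)

# Numerically confirm 1/(e ln(2) c) <= B_c <= 1/c:
for c in [1.01, 2, 5, 50]:
    lo, hi = 1 / (math.e * math.log(2) * c), 1 / c
    assert lo <= Bc(c) <= hi, (c, lo, Bc(c), hi)
```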

7.6.2 The Binary Entropy Function

In this section, we give some properties of the binary entropy function that will be used extensively in Section 7.6.4.

Definition 18. The binary entropy function $H\colon [0,1] \to [0,1]$ is the function given by
$$H(p) = -p\log(p) - (1-p)\log(1-p), \quad \text{for } 0 < p < 1,$$
and $H(0) = H(1) = 0$.

Lemma 20 (Lemma 9.2 in [129]). For integers m, n such that $0 \le m \le n$,
$$\frac{2^{nH(m/n)}}{n+1} \le \binom{n}{m} \le 2^{nH(m/n)}.$$
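The following sketch (Python; our own illustration) implements H and spot-checks the inequality of Lemma 20 for small n:

```python
import math

def H(p):
    """Binary entropy function (Definition 18)."""
    if p in (0, 1):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Numerically confirm 2^{nH(m/n)}/(n+1) <= C(n, m) <= 2^{nH(m/n)}:
for n in [10, 50]:
    for m in range(n + 1):
        bound = 2 ** (n * H(m / n))
        assert bound / (n + 1) <= math.comb(n, m) <= bound
```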

Proposition 1. The binary entropy function H(p) has the following properties.

(H1) $H\!\left(\frac{1}{s}\right) = \log(s) + \frac{1-s}{s}\log(s-1)$ for $s > 1$.

(H2) $sH\!\left(\frac{1}{s}\right) \le \log s + 2$ for $s > 1$.

(H3) $H'(p) = \log\!\left(\frac{1}{p} - 1\right)$ and $H''(p) < 0$ for $0 < p < 1$.

(H4) For any fixed $t > 0$, $sH\!\left(\frac{t}{s}\right)$ is increasing in $s$ for $s > t$.

(H5) $nH\!\left(\frac{1}{x}\right) - nH\!\left(\frac{1}{x} + \frac{1}{n}\right) < 3$ if $n \ge 3$ and $x > 2$.

Proof. (H1): Follows from the definition.

(H2): For s > 1,
$$sH\!\left(\frac{1}{s}\right) = s\left(\log s + \frac{1-s}{s}\log(s-1)\right), \quad\text{by (H1)}$$
$$= \log\left(\left(1 + \frac{1}{s-1}\right)^{s-1} s\right) \le \log(e \cdot s) = \log(e) + \log(s) \le \log s + 2.$$

(H3): Note that H is smooth for 0 < p < 1. The derivative $H'(p)$ can be calculated from the definition. The second-order derivative is
$$H''(p) = \frac{-1}{(1-p)\,p\ln(2)},$$
which is strictly less than zero for all 0 < p < 1.

(H4): Fix t > 0. The claim follows by showing that the partial derivative of $sH(\frac{t}{s})$ with respect to s is positive for all s > t:
$$\frac{d}{ds}\left(sH\!\left(\frac{t}{s}\right)\right) = H\!\left(\frac{t}{s}\right) + sH'\!\left(\frac{t}{s}\right)\left(-\frac{t}{s^2}\right) = H\!\left(\frac{t}{s}\right) - \frac{t}{s}H'\!\left(\frac{t}{s}\right)$$
$$= -\frac{t}{s}\log\!\left(\frac{t}{s}\right) - \left(1 - \frac{t}{s}\right)\log\!\left(1 - \frac{t}{s}\right) - \frac{t}{s}\log\!\left(\frac{s}{t} - 1\right), \quad\text{by Def. 18 and (H3)}$$
$$= -\log\!\left(1 - \frac{t}{s}\right) > 0.$$

(H5): H(p) is increasing for $0 \le p \le \frac12$ and decreasing for $\frac12 \le p \le 1$. If $\frac{1}{x} + \frac{1}{n} \le \frac12$, then the claim is trivially true (since then the difference is negative). Assume therefore that $\frac{1}{x} + \frac{1}{n} > \frac12$. Under this assumption, $H(\frac{1}{x})$ increases and $H(\frac{1}{x} + \frac{1}{n})$ decreases as x tends to 2. Thus, $H(\frac{1}{x}) - H(\frac{1}{x} + \frac{1}{n})$ increases as x tends to 2 and, hence,
$$H\!\left(\frac{1}{x}\right) - H\!\left(\frac{1}{x} + \frac{1}{n}\right) \le H\!\left(\frac12\right) - H\!\left(\frac12 + \frac{1}{n}\right). \tag{7.3}$$
Inserting into the definition of H gives
$$H\!\left(\frac12\right) - H\!\left(\frac12 + \frac{1}{n}\right) = 1 - \left(-\left(\frac12 + \frac{1}{n}\right)\log\!\left(\frac12 + \frac{1}{n}\right) - \left(\frac12 - \frac{1}{n}\right)\log\!\left(\frac12 - \frac{1}{n}\right)\right)$$
$$= \frac{1}{n}\log\!\left(\frac{\frac12 + \frac{1}{n}}{\frac12 - \frac{1}{n}}\right) + \frac12\log\!\left(\left(\frac12 + \frac{1}{n}\right)\left(\frac12 - \frac{1}{n}\right)\right) + 1$$
$$= \frac{1}{n}\log\!\left(\frac{n+2}{n-2}\right) + \frac12\log\!\left(\frac{n^2 - 4}{4n^2}\right) + 1.$$
Since $(n+2)/(n-2)$ is decreasing for $n \ge 3$, it follows that $\log((n+2)/(n-2)) \le \log(5)$. Furthermore, $(n^2-4)/(4n^2) \le \frac14$ for all $n \ge 3$, and so $\frac12\log\!\left((n^2-4)/(4n^2)\right) + 1 \le 0$. We conclude that, for all $n \ge 3$,
$$H\!\left(\frac12\right) - H\!\left(\frac12 + \frac{1}{n}\right) \le \frac{\log(5)}{n} < \frac{3}{n}. \tag{7.4}$$
Combining (7.3) and (7.4) proves (H5).

7.6.3 Binomial Coefficients

The following proposition is a collection of simple facts about the binomial coefficient that will be used in Sections 7.6.4 and 7.6.5.

Proposition 2. Let $a, b, c \in \mathbb{N}$.

(B1) $\binom{a}{b} = \frac{a}{a-b}\binom{a-1}{b}$, where $b < a$.

(B2) For fixed b, $\binom{a}{b}$ is increasing in a.

(B3) If $c \le b \le a$, then
$$\frac{\binom{a}{c}}{\binom{b}{c}} = \frac{\binom{a}{b}}{\binom{a-c}{a-b}}.$$

Proof. First, we prove (B1):
$$\binom{a}{b} = \frac{a!}{b!\,(a-b)!} = \frac{a}{a-b}\cdot\frac{(a-1)!}{b!\,(a-1-b)!} = \frac{a}{a-b}\binom{a-1}{b}.$$
(B2) follows directly from (B1).

To prove (B3), we calculate the two fractions separately:
$$\frac{\binom{a}{c}}{\binom{b}{c}} = \frac{a!}{c!\,(a-c)!}\cdot\frac{c!\,(b-c)!}{b!} = \frac{a!}{(a-c)!}\cdot\frac{(b-c)!}{b!},$$
$$\frac{\binom{a}{b}}{\binom{a-c}{a-b}} = \frac{a!}{b!\,(a-b)!}\cdot\frac{(a-b)!\,(b-c)!}{(a-c)!} = \frac{a!}{b!}\cdot\frac{(b-c)!}{(a-c)!} = \frac{\binom{a}{c}}{\binom{b}{c}}.$$
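These identities are easy to spot-check numerically; for instance (Python; our own illustration):

```python
from math import comb

# Spot-check (B1) and (B3) from Proposition 2 for one choice of a, b, c:
a, b, c = 12, 7, 4
assert comb(a, b) * (a - b) == a * comb(a - 1, b)                  # (B1)
assert comb(a, c) * comb(a - c, a - b) == comb(b, c) * comb(a, b)  # (B3)
```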

7.6.4 Approximating the Advice Complexity Bounds for minASG

The following lemma is used for proving Theorems 22–24.

Lemma 21. For c > 1 and $n \ge 3$,
$$\log\left(\max_{t\,:\,\lfloor ct\rfloor < n} C(n, \lfloor ct\rfloor, t)\right) \ge \log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right) \tag{7.5}$$
$$\ge \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n - 2\log(n+1) - 5 \tag{7.6}$$
and
$$\log\left(\max_{t\,:\,\lfloor ct\rfloor < n} C(n, \lfloor ct\rfloor, t)\right) \le \log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\, n\right) \tag{7.7}$$
$$\le \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n + 3\log(n+1). \tag{7.8}$$

Proof. We prove the upper and lower bounds separately.

Upper bound: Fix n, c. By Lemma 5,
$$C(n, \lfloor ct\rfloor, t) \le \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\left(1 + \ln\binom{\lfloor ct\rfloor}{t}\right).$$
Note that $1 + \ln\binom{\lfloor ct\rfloor}{t} \le n$ since we consider only $\lfloor ct\rfloor < n$. This proves (7.7). Now, taking the logarithm on both sides gives
$$\log(C(n, \lfloor ct\rfloor, t)) \le \log\left(\frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right) + \log n \le \log\left(\frac{\binom{n}{t}}{\binom{\lceil ct\rceil - 1}{t}}\right) + \log n$$
$$\le \log\left(\frac{\binom{n}{t}}{\frac{\lceil ct\rceil - t}{\lceil ct\rceil}\binom{\lceil ct\rceil}{t}}\right) + \log n, \quad\text{by (B1)}$$
$$\le \log\left(\frac{\binom{n}{t}}{\binom{\lceil ct\rceil}{t}}\right) + \log\left(\frac{\lceil ct\rceil}{\lceil ct\rceil - t}\right) + \log n \le \log\left(\frac{\binom{n}{t}}{\binom{\lceil ct\rceil}{t}}\right) + 2\log n. \tag{7.9}$$
Above, we have increased $\lfloor ct\rfloor$ to $\lceil ct\rceil$ in the binomial coefficient (at the price of an additive term of log n). This is done since it will later be convenient to use that $ct \le \lceil ct\rceil$. Using Lemma 20, we get that
$$\frac{\binom{n}{t}}{\binom{\lceil ct\rceil}{t}} \le \frac{2^{nH(t/n)}}{2^{\lceil ct\rceil H(t/\lceil ct\rceil)}}\left(\lceil ct\rceil + 1\right),$$
and therefore
$$\log\left(\frac{\binom{n}{t}}{\binom{\lceil ct\rceil}{t}}\right) \le nH\!\left(\frac{t}{n}\right) - \lceil ct\rceil H\!\left(\frac{t}{\lceil ct\rceil}\right) + \log(\lceil ct\rceil + 1) \le nH\!\left(\frac{t}{n}\right) - ctH\!\left(\frac{1}{c}\right) + \log(n+1), \quad\text{by (H4)}. \tag{7.10}$$
Define
$$M(n, t) = nH\!\left(\frac{t}{n}\right) - ctH\!\left(\frac{1}{c}\right).$$
Combining (7.9) and (7.10) shows that
$$\log(C(n, \lfloor ct\rfloor, t)) \le M(n, t) + 3\log(n+1). \tag{7.11}$$
The function M is smooth. For any given input length n, we can determine the value of t maximizing M(n, t) using calculus. In order to simplify the notation for these calculations, define
$$x = \left(\frac{c}{c-1}\right)^{c}(c-1) + 1,$$
and note that
$$\log(x-1) = c\left(\log c + \frac{1-c}{c}\log(c-1)\right) = cH\!\left(\frac{1}{c}\right), \quad\text{by (H1)}. \tag{7.12}$$
We want to determine those values of t for which $\frac{d}{dt}M(n, t) = 0$:
$$\frac{d}{dt}M(n, t) = \frac{d}{dt}\left(nH\!\left(\frac{t}{n}\right) - ctH\!\left(\frac{1}{c}\right)\right) = 0 \iff nH'\!\left(\frac{t}{n}\right)\cdot\frac{1}{n} - cH\!\left(\frac{1}{c}\right) = 0$$
$$\iff \log\left(\frac{n}{t} - 1\right) = cH\!\left(\frac{1}{c}\right), \quad\text{by (H3)}$$
$$\iff \frac{n}{t} = 2^{cH(1/c)} + 1 \iff t = \frac{n}{2^{cH(1/c)} + 1} \iff t = \frac{n}{2^{\log(x-1)} + 1}, \quad\text{by (7.12)} \iff t = \frac{n}{x}.$$
Note that $\frac{d^2}{dt^2}M(n, t) = H''(t/n)/n < 0$ for all values of t, by (H3). Thus,
$$M(n, t) \le M\!\left(n, \frac{n}{x}\right), \quad\text{for all values of } t. \tag{7.13}$$
The value of $M(n, \frac{n}{x})$ can be calculated as follows:
$$M\!\left(n, \frac{n}{x}\right) = nH\!\left(\frac{1}{x}\right) - c\,\frac{n}{x}\,H\!\left(\frac{1}{c}\right)$$
$$= n\left(\log(x) + \frac{1-x}{x}\log(x-1) - \frac{c}{x}H(1/c)\right), \quad\text{by (H1)}$$
$$= n\left(\log(x) + \frac{1-x}{x}\log(x-1) - \frac{1}{x}\log(x-1)\right), \quad\text{by (7.12)}$$
$$= n\left(\log(x) - \log(x-1)\right) = n\log\left(\frac{x}{x-1}\right) = n\log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right). \tag{7.14}$$
Combining (7.11), (7.13), and (7.14), we conclude that
$$\log(C(n, \lfloor ct\rfloor, t)) \le n\log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) + 3\log(n+1).$$

Lower Bound: By Lemma 5,
$$\log\left(\max_{t\,:\,\lfloor ct\rfloor < n} C(n, \lfloor ct\rfloor, t)\right) \ge \log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right).$$
This proves (7.5). In order to prove (7.6), first note that by Lemma 19,
$$\log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n \le \frac{n}{c}.$$
Thus, for $c \ge \frac{n}{2}$, the righthand side of (7.6) is negative, and hence, the inequality is trivially true.

Assume now that $c < \frac{n}{2}$. We will determine an integer value of t such that $\binom{n}{t}/\binom{\lfloor ct\rfloor}{t}$ becomes sufficiently large. First, we use Lemma 20:
$$\frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}} \ge \frac{2^{nH(t/n)}}{(n+1)\cdot 2^{\lfloor ct\rfloor H(t/\lfloor ct\rfloor)}} = \frac{2^{nH(t/n) - \lfloor ct\rfloor H(t/\lfloor ct\rfloor)}}{n+1}.$$
It is possible that $t = \lfloor ct\rfloor$, but this is fine since H(1) = 0. Using (H4), we see that
$$\lfloor ct\rfloor H\!\left(\frac{t}{\lfloor ct\rfloor}\right) \le ctH\!\left(\frac{t}{ct}\right) = ctH\!\left(\frac{1}{c}\right).$$
Thus,
$$\log\left(\frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right) \ge nH\!\left(\frac{t}{n}\right) - ctH\!\left(\frac{1}{c}\right) - \log(n+1) = M(n, t) - \log(n+1). \tag{7.15}$$
Let $t' = \frac{n}{x}$. We know that M(n, t) attains its maximum value when $t = t'$. Since c > 1, it is clear that x > c and hence $t' < \frac{n}{c}$. It follows that $\lfloor ct'\rfloor < n$. However, t′ might not be an integer. In what follows, we will first argue that $\lfloor c\lceil t'\rceil\rfloor < n$ and then that $M(n, \lceil t'\rceil)$ is close to $M(n, t')$. The desired lower bound will then follow by setting $t = \lceil t'\rceil$.

Using calculus, it can be verified that, for c > 1, x/c is increasing in c. Hence,
$$\frac{x}{c} = \left(\frac{c}{c-1}\right)^{c-1} + \frac{1}{c} \ge \lim_{c\to1^+}\left(\left(\frac{c}{c-1}\right)^{c-1} + \frac{1}{c}\right) = \lim_{c\to1^+}\left(1 + \frac{1}{c-1}\right)^{c-1} + \lim_{c\to1^+}\frac{1}{c} = \lim_{a\to0^+}\left(1 + \frac{1}{a}\right)^{a} + 1 = 2.$$
Thus, $c \le x/2$, and hence,
$$\lfloor c\lceil t'\rceil\rfloor \le c\left\lceil\frac{n}{x}\right\rceil < \frac{cn}{x} + c \le \frac{n}{2} + c < n.$$
Note that $\frac{d}{dt}M(n, t) < 0$ for $t > t'$, so $M(n, \lceil t'\rceil) \ge M(n, t'+1)$. Combining this observation with (H2) and (H5), we get that
$$M(n, \lceil t'\rceil) \ge M(n, t'+1) = nH\!\left(\frac{t'+1}{n}\right) - c(t'+1)H\!\left(\frac{1}{c}\right)$$
$$= nH\!\left(\frac{1}{x} + \frac{1}{n}\right) - \frac{cn}{x}H\!\left(\frac{1}{c}\right) - cH\!\left(\frac{1}{c}\right)$$
$$\ge nH\!\left(\frac{1}{x} + \frac{1}{n}\right) - \frac{cn}{x}H\!\left(\frac{1}{c}\right) - \log n - 2, \quad\text{by (H2)}$$
$$\ge nH\!\left(\frac{1}{x}\right) - \frac{cn}{x}H\!\left(\frac{1}{c}\right) - \log n - 5, \quad\text{by (H5)}$$
$$= M(n, t') - \log n - 5.$$
By choosing $t = \lceil t'\rceil$ in the max, we conclude that
$$\log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right) \ge M(n, \lceil t'\rceil) - \log(n+1), \quad\text{by (7.15)}$$
$$\ge M(n, t') - \log(n+1) - \log n - 5$$
$$\ge n\log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) - 2\log(n+1) - 5, \quad\text{by (7.14)}.$$

The following lemma is used for proving Theorem 29.

Lemma 22. If c is an integer-valued function of n and c > 1, it holds that
$$\log\left(\max_{t\,:\,ct < n} \frac{\binom{n}{t}}{\binom{ct}{t}}\right) = \Omega\!\left(\frac{n}{c}\right).$$

Proof. Assume that c is an integer-valued function of n, that c > 1, and that ct < n. It follows that
$$\frac{\binom{n}{t}}{\binom{ct}{t}} = \frac{n!\,(ct-t)!}{(n-t)!\,(ct)!} \ge \frac{n(n-1)\cdots(n-t+1)}{(ct)(ct-1)\cdots(ct-t+1)}.$$
Let $t = \lfloor\frac{n}{ec}\rfloor$. Then
$$\frac{\binom{n}{t}}{\binom{ct}{t}} \ge \frac{n(n-1)\cdots(n-t+1)}{(ct)(ct-1)\cdots(ct-t+1)} \ge \frac{n(n-1)\cdots(n-t+1)}{\frac{n}{e}\left(\frac{n}{e}-1\right)\cdots\left(\frac{n}{e}-t+1\right)} = \frac{n}{\frac{n}{e}}\cdot\frac{n-1}{\frac{n}{e}-1}\cdots\frac{n-t+1}{\frac{n}{e}-t+1} \ge e^t.$$
Since
$$\log(e^t) = t\log(e) \ge \left(\frac{n}{ec} - 1\right)\log e = \frac{n}{e\ln(2)\,c} - \log(e) = \Omega\!\left(\frac{n}{c}\right),$$
this proves the lemma by choosing $t = \lfloor\frac{n}{ec}\rfloor$.

7.6.5 Approximating the Advice Complexity Bounds for maxASG

Lemma 24 of this section is used for Theorems 25–27. In proving Lemma 24, the following lemma will be useful.

Lemma 23. For all n, c, it holds that
$$\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}} \le n\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right).$$
On the other hand, it also holds that
$$\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}} \ge \frac{1}{n}\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right).$$

Proof. Let
$$f_{n,c}(t) = \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}} \quad\text{and}\quad g_{n,c}(u) = \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}.$$
In order to prove the upper bound, we show that $f_{n,c}(\lfloor u/c\rfloor) \ge g_{n,c}(u)/n$ for any integer u, 0 < u < n. Note that $\lfloor u/c\rfloor < u$, since c > 1.
$$f_{n,c}(\lfloor u/c\rfloor) = \frac{\binom{n}{\lfloor u/c\rfloor}}{\binom{\lfloor c\lfloor u/c\rfloor\rfloor}{\lfloor u/c\rfloor}} \ge \frac{\binom{n}{\lfloor u/c\rfloor}}{\binom{u}{\lfloor u/c\rfloor}}, \quad\text{by (B2)}$$
$$= \frac{\binom{n}{u}}{\binom{n-\lfloor u/c\rfloor}{n-u}}, \quad\text{by (B3)}$$
$$\ge \frac{u - \lfloor u/c\rfloor}{n - \lfloor u/c\rfloor}\cdot\frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}, \quad\text{by (B1)}$$
$$= \frac{u - \lfloor u/c\rfloor}{n - \lfloor u/c\rfloor}\,g_{n,c}(u) \ge \frac{1}{n}\,g_{n,c}(u), \quad\text{since } u - \lfloor u/c\rfloor \ge 1.$$
By (B1), the second last inequality is actually an equality, unless u/c is an integer.

In order to prove the lower bound, we will show that $g_{n,c}(\lceil ct\rceil) \ge f_{n,c}(t)/n$ for any integer t with $\lfloor ct\rfloor < n$. Note that $t < \lceil ct\rceil$, since c > 1.
$$g_{n,c}(\lceil ct\rceil) = \frac{\binom{n}{\lceil ct\rceil}}{\binom{n-\lceil\lceil ct\rceil/c\rceil}{n-\lceil ct\rceil}} \ge \frac{\binom{n}{\lceil ct\rceil}}{\binom{n-t}{n-\lceil ct\rceil}}, \quad\text{by (B2)}$$
$$= \frac{\binom{n}{n-\lceil ct\rceil}}{\binom{n-t}{n-\lceil ct\rceil}} = \frac{\binom{n}{n-t}}{\binom{\lceil ct\rceil}{t}}, \quad\text{by (B3)}$$
$$= \frac{\binom{n}{n-t}}{\frac{\lceil ct\rceil}{\lceil ct\rceil - t}\binom{\lfloor ct\rfloor}{t}}, \quad\text{by (B1)}$$
$$= \frac{\lceil ct\rceil - t}{\lceil ct\rceil}\,f_{n,c}(t) \ge \frac{1}{n}\,f_{n,c}(t), \quad\text{since } \lceil ct\rceil - t \ge 1 \text{ and } \lceil ct\rceil \le n.$$

Lemma 24. Let c > 1 and $n \ge 3$. It holds that
$$\log\left(\max_{u\,:\,0<u<n} C\!\left(n,\ n - \left\lceil\frac{u}{c}\right\rceil,\ n-u\right)\right) \ge \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\right)$$
$$\ge \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n - 3\log n - 6.$$
Furthermore,
$$\log\left(\max_{u\,:\,0<u<n} C\!\left(n,\ n - \left\lceil\frac{u}{c}\right\rceil,\ n-u\right)\right) \le \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\, n\right)$$
$$\le \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n + 4\log(n+1).$$

Proof. We prove the lower bound first.
$$\log\left(\max_{u\,:\,0<u<n} C\!\left(n,\ n - \left\lceil\frac{u}{c}\right\rceil,\ n-u\right)\right) \ge \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\right), \quad\text{by Lemma 5}$$
$$\ge \log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\right) - \log n, \quad\text{by Lemma 23}$$
$$\ge \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n - 2\log(n+1) - 5 - \log n, \quad\text{by (7.6)}$$
$$\ge \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n - 3\log n - 6, \quad\text{since } n \ge 3.$$

We now prove the upper bound.
$$\log\left(\max_{u\,:\,0<u<n} C\!\left(n,\ n - \left\lceil\frac{u}{c}\right\rceil,\ n-u\right)\right) \le \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\left(1 + \ln\binom{n-\lceil u/c\rceil}{n-u}\right)\right), \quad\text{by Lemma 5}$$
$$\le \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\, n\right) = \log\left(\max_{u\,:\,0<u<n} \frac{\binom{n}{u}}{\binom{n-\lceil u/c\rceil}{n-u}}\right) + \log n$$
$$\le \log\left(\max_{t\,:\,\lfloor ct\rfloor < n} \frac{\binom{n}{t}}{\binom{\lfloor ct\rfloor}{t}}\, n\right) + \log n, \quad\text{by Lemma 23}$$
$$\le \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right)n + 4\log(n+1), \quad\text{by (7.8)}.$$

CHAPTER 8

Advice Complexity of the Online Induced Subgraph Problem

Dennis Komm, ETH Zürich, [email protected]
Rastislav Královič, Comenius University, [email protected]
Richard Královič, Google Inc., [email protected]
Christian Kudahl, University of Southern Denmark, [email protected] 1

Abstract

Several well-studied graph problems aim to select a largest (or smallest) induced subgraph with a given property of the input graph. Examples include maximum independent set, maximum planar graph, maximum clique, minimum feedback vertex set, and many others. In online versions of these problems, the vertices of the graph are presented in an adversarial order, and with each vertex, the online algorithm must irreversibly decide whether to include it into the constructed subgraph, based only on the subgraph induced by the vertices presented so far. We study the properties that are common to all these problems by investigating a generalized problem: for an arbitrary but fixed hereditary property π, find some maximal induced subgraph having π. We investigate this problem from the point of view of advice complexity, i. e., we ask how some additional information about the yet unrevealed parts of the input can influence the solution quality. We evaluate the information in a quantitative way by considering the best possible advice of given size that describes the unknown input. Using a result from Boyar et al. [STACS 2015, LIPIcs 30], we give a tight trade-off relationship stating that, for inputs of length n, roughly n/c bits of advice are both needed and sufficient to obtain a solution with competitive ratio c, regardless of the choice of π, for any c (possibly a function of n). This complements the results from Bartal et al. [SIAM Journal on Computing 36(2), 2006] stating that, without any advice, even a randomized algorithm cannot achieve a competitive ratio better than $\Omega(n^{1-\log_4 3 - o(1)})$. Surprisingly, for a given cohereditary property π and the objective to find a minimum subgraph having π, the advice complexity varies significantly with the choice of π. We also consider a preemptive online model, inspired by some applications mainly in networking and scheduling, where the decision of the algorithm is not completely irreversible.

1 Supported in part by the Villum Foundation and the Stibo-Foundation and SNF grant 200021-146372.

In particular, the algorithm may discard some vertices previously assigned to the constructed set, but discarded vertices cannot be reinserted into the set. We show that, for the maximum induced subgraph problem, preemption does not significantly help by giving a lower bound of $\Omega(n/(c^2\log c))$ on the bits of advice that are needed to obtain competitive ratio c, where c is any increasing function bounded from above by $\sqrt{n/\log n}$. We also give a linear lower bound for c close to 1.

8.1 Introduction

Online algorithms get their input gradually, and this way have to produce parts of the output without full knowledge of the instance at hand, which is a large disadvantage compared to classical offline computation, yet a realistic model of many real-world scenarios [35]. Most of the offline problems have their online counterpart. Instead of asking about the time and space complexity of algorithms to solve a computational problem, competitive analysis is commonly used as a tool to study how well online algorithms perform [34, 139] without any time or space restrictions; the analogous offline measurement is the analysis of the approximation ratio. A large class of computational problems for both online and offline computation are formulated on graphs; we call such problems (online) graph problems.

In this paper, we deal with problems on unweighted undirected graphs that are given to an online algorithm vertex by vertex in consecutive discrete time steps. Formally, we are given a graph G = (V, E), where |V| = n, with an ordering ≺ on V. Without loss of generality, assume $V = \{v_1, \dots, v_n\}$, and $v_1 \prec \dots \prec v_n$ specifies the order in which the vertices of G are presented to an online algorithm; this way, the vertex $v_i$ is given in the ith time step. Together with $v_i$, all edges $\{v_j, v_i\} \in E$ are revealed for all $v_j \prec v_i$. If $v_i$ is revealed, an online algorithm must decide whether to accept $v_i$ or discard it. Neither G nor n are known to the online algorithm. We study two versions of online problems; with and without preemption. In the former case, the decision whether $v_i$ is accepted or not is definite. In the latter case, in every time step, the online algorithm may preempt (discard) some of the vertices it previously accepted; however, a vertex that was once discarded cannot be part of the solution anymore.

For an instance $I = (v_1, \dots, v_n)$ of some graph problem, we denote by Alg(I) the solution computed by some online algorithm Alg; Opt(I) denotes an optimal solution for I, which can generally only be computed with the full knowledge of I. We assume that I is constructed in an adversarial manner to give worst-case bounds on the solution quality of any online algorithm. This means that we explicitly think of I as being given by an adversary that knows Alg and wants to make it perform as poorly as possible; for more details, we refer to the standard literature [34].

For maximization problems with an associated profit function called profit, an online algorithm Alg is called c-competitive if, for every instance I of the given problem, it holds that
$$\mathrm{profit}(\mathrm{Alg}(I)) \ge 1/c \cdot \mathrm{profit}(\mathrm{Opt}(I)); \tag{8.1}$$
likewise, for minimization problems with a cost function called cost, we require
$$\mathrm{cost}(\mathrm{Alg}(I)) \le c \cdot \mathrm{cost}(\mathrm{Opt}(I)) \tag{8.2}$$
for every instance I. In this context, c > 1 may be a constant or a function that increases with the input length n. We will use c and c(n) interchangeably to refer to the competitive ratio; the latter is simply used to emphasize that c may depend on n.

Throughout this paper, log denotes the binary logarithm $\log_2$.

Instead of studying specific graph problems, in this paper, we investigate a large class of such problems, which are defined by hereditary properties. This class includes many well-known problems such as maximum independent set, maximum planar graph, maximum induced clique, and maximum acyclic subgraph. The cohereditary problems we consider are online versions of the offline problem of searching for a specific structure within a graph. An example is to find the shortest cycle; this defines the girth of the graph. Online cycle finding was considered by Boyar et al. [40].

We call any collection of graphs a graph property π. A graph has (or satisfies) property π if it is in the collection. Examples include the property of being planar (the collection contains all planar graphs), or being an independent set (the collection contains all graphs with no edges). We only consider properties that are non-trivial, i. e., they are both true for infinitely many graphs and false for infinitely many graphs. A property is called hereditary if it holds that, if a graph G satisfies π, then also any induced subgraph G′ of G satisfies π; conversely, it is called cohereditary if it holds that, if a graph G satisfies π, and G is an induced subgraph of G′, then also G′ satisfies π. For a graph G = (V, E) and a subset of vertices $S = \{v_1, \dots, v_i\} \subseteq V$, let G[S] (or $G[v_1, \dots, v_i]$) denote the subgraph of G induced by the vertices from S. For a graph G = (V, E), let $\bar{G} = (V, \bar{E})$ be the complement of G, i. e., $\{u, v\} \in \bar{E}$ if and only if $\{u, v\} \notin E$. Let $K_n$ denote the complete graph on n vertices, and let $\overline{K_n}$ denote the independent set on n vertices. We consider the online version of the problem of finding maximal (minimal, respectively) induced subgraphs satisfying a hereditary (cohereditary, respectively) property π, denoted by Max-π (Min-π, respectively). For the ease of presentation, we will call such problems hereditary (cohereditary, respectively) problems. Let $S_{\mathrm{Alg}} := \mathrm{Alg}(I)$ denote the set of vertices accepted by some online algorithm Alg for some instance I of a hereditary problem. Then, for Max-π, the profit of Alg is $|S_{\mathrm{Alg}}| := \mathrm{profit}(\mathrm{Alg}(I))$ if $G[S_{\mathrm{Alg}}]$ has the property π and −∞ otherwise; the goal is to maximize the profit. Conversely, for Min-π, the cost of Alg is $|S_{\mathrm{Alg}}| := \mathrm{cost}(\mathrm{Alg}(I))$ if $G[S_{\mathrm{Alg}}]$ has the property π and ∞ otherwise; the goal is to minimize the cost. As an example, consider the online maximum independent set problem; the set of all independent sets is clearly a hereditary property (every independent set is a feasible solution, and every induced subset of an independent set is again an independent set). When a vertex is revealed, an online algorithm needs to decide whether it becomes part of the solution or not. The goal is to compute an independent set that is as large as possible; the profit of the solution is thus equal to $|S_{\mathrm{Alg}}|$. It is straightforward to define the problem without or with preemption.

In this paper, we study online algorithms with advice for hereditary and cohereditary problems. In this setup, an online algorithm is equipped with an additional resource that contains information about the instance it is dealing with. A related model was originally introduced by Dobrev et al. [60]. Revised versions were defined by Emek et al. [68], Böckenhauer et al. [29], and Hromkovič et al. [92]. Here, we use the model of the latter two papers. Consider an input $I = (v_1, \dots, v_n)$ of a hereditary problem. An online algorithm Alg with advice computes the output sequence $\mathrm{Alg}^{\phi}(I) = (y_1, \dots, y_n)$ such that $y_i$ is computed from $\phi, v_1, \dots, v_i$, where φ is the content of the advice tape, i. e., an infinite binary sequence. We denote the cost (profit, respectively) of the computed output by $\mathrm{cost}(\mathrm{Alg}^{\phi}(I))$ ($\mathrm{profit}(\mathrm{Alg}^{\phi}(I))$, respectively). The algorithm Alg is c-competitive with advice complexity b(n) if, for every n and for each I of length at most n, there exists some φ such that $\mathrm{cost}(\mathrm{Alg}^{\phi}(I)) \le c \cdot \mathrm{cost}(\mathrm{Opt}(I))$ ($\mathrm{profit}(\mathrm{Alg}^{\phi}(I)) \ge 1/c \cdot \mathrm{profit}(\mathrm{Opt}(I))$, respectively) and at most the first b(n) bits of φ have been accessed by Alg.2 We sometimes simply write b instead of b(n) to increase readability.

The motivation for online algorithms with advice is mostly of a theoretical nature, as we may think of the information necessary and sufficient to compute an optimal solution as the information content of the given problem [92]. Moreover, there is a non-trivial connection to randomized online algorithms [28, 105]. Lower bounds on the advice complexity often translate to lower bounds for semi-online algorithms. Essentially, here one studies whether knowing some small parameter of an online problem (such as the length of the input or the number of requests of a certain type) results in a much better competitive ratio. Lower bound results using advice can often help to answer this question. Similarly, lookahead can be seen as a special kind of advice that is supplied to an algorithm. This way, online algorithms with advice generalize a number of concepts introduced to give online algorithms more power. However, the main question posed is how much any kind of (computable) information could help; and maybe even more importantly, which amount of information will never help to overcome some certain threshold, no matter what this information actually is.

Organization, Related Work, and Results

We are mainly concerned with proving lower bounds of the form that a particular number of advice bits is necessary in order to obtain some certain output quality for a given hereditary property. We make heavy use of online reductions between generic problems and the studied ones that allow us to bound the number of advice bits necessary from below. Emek et al. [68] used this technique in order to prove lower bounds for metrical task systems. The foundations of the reductions as we perform them here are due to Böckenhauer et al. [27], who introduced the string guessing problem, and Boyar et al. [40], who studied a problem called asymmetric string guessing. Mikkelsen [128] introduced a problem, which we call the anti-string guessing problem, and which is a variant of string guessing with a more "friendly" cost function. Our reductions rely on some results from Bartal et al. [17] that characterize hereditary properties by forbidden subgraphs together with some insights from Ramsey theory (see, e. g., Diestel [56]).

2 Note that usually an additive constant is included in the definition of c-competitiveness, i. e., in (8.1) and (8.2). However, for the problems we consider, this changes the advice complexity by at most O(log n); see Remark 9 in Boyar et al. [40].

In Section 8.2, we recall some basic results from Ramsey theory and define the generic online problems that we use as a basis of our reductions. In Section 8.3, we study both Max-π and Min-π in the case that no preemption is allowed; using a reduction from the asymmetric string guessing problem, we show that any c-competitive online algorithm for Max-π needs roughly n/c advice bits, and this is essentially tight. This complements results from Bartal et al. [17], which state that, without any advice, even a randomized algorithm cannot achieve a competitive ratio better than $\Omega(n^{1-\log_4 3 - o(1)})$. The advice complexity of the maximum independent set problem on bipartite and sparse graphs was studied by Dobrev et al. [58]. In the subsequent sections, we allow the online algorithm to use preemption. In Section 8.4, we use a reduction from the string guessing problem to show a lower bound of $\Omega(n/(c^2\log c))$ on the number of advice bits that are needed to obtain competitive ratio c, where c is any increasing function bounded from above by $\sqrt{n/\log n}$. In Section 8.5, using a reduction from the anti-string guessing problem, we also give a linear lower bound for c being close to 1.

Due to space constraints, some of the proofs are omitted.

8.2 Preliminaries

Hereditary properties can be characterized by forbidden induced subgraphs as follows: if a graph G does not satisfy a hereditary property π, then any graph H such that G is an induced subgraph of H does not satisfy π either. Hence, there is a (potentially infinite) set of minimal forbidden graphs (w.r.t. being an induced subgraph) $S_\pi$ such that G satisfies π if and only if no graph from $S_\pi$ is an induced subgraph of G. Conversely, any set of graphs S defines a hereditary property $\pi_S$ of not having a graph from S as an induced subgraph.

Furthermore, there is the following bijection between hereditary and cohereditary properties: for a hereditary property π we can define a property $\bar{\pi}$ such that a graph G satisfies $\bar{\pi}$ if and only if it does not satisfy π (it is easy to see that $\bar{\pi}$ is cohereditary), and vice versa. Hence, a cohereditary property π can be characterized by a set of minimal (w.r.t. being an induced subgraph) obligatory subgraphs $S_\pi$ such that a graph G has the property π if and only if at least one graph from $S_\pi$ is an induced subgraph of G.

To each property π we can define the complementary property $\pi^c$ such that a graph G satisfies $\pi^c$ if and only if the complement of G satisfies π. Clearly, if π is (co)hereditary, so is $\pi^c$. Moreover, if H is forbidden (obligatory, respectively) for π, then the complement $\bar{H}$ is forbidden (obligatory, respectively) for $\pi^c$. The following statement is due to Lewis and Yannakakis.

Lemma 25 (Lewis and Yannakakis [119], proof of Theorem 4). Every non-trivial hereditary property π is satisfied either by all cliques or by all independent sets.

Proof. Assume, for the sake of contradiction, that there is a hereditary property π and two numbers m, n such that $K_m$ and $\overline{K_n}$ do not satisfy π. Let r(m, n) be the Ramsey number [131], such that every graph with at least r(m, n) vertices contains $K_m$ or $\overline{K_n}$ as an induced subgraph. Since π is non-trivial, there is a graph G with more than r(m, n) vertices that satisfies π. G contains either $K_m$ or $\overline{K_n}$ as an induced subgraph, and since π is hereditary, either $K_m$ or $\overline{K_n}$ satisfies π.

Bartal et al. proved the following theorem. It is formulated in the known supergraph model, where a graph G = (V, E) with n vertices is a priori known to the algorithm, and the input is a sequence of vertices $v_1, \dots, v_k$. The task is to select in an online manner the subgraph of the induced graph $G[v_1, \dots, v_k]$ having property π.

Theorem 38 (Bartal et al. [17] and references therein). In the known supergraph model, any randomized algorithm for the Max-π problem has competitive ratio $\Omega(n^{1-\log_4 3 - o(1)})$, even if preemption is allowed.

Note that n in the previous theorem thus refers to the size of the known supergraph, and not to the length of the input sequence. However, in the proof a graph with $n = 4^i$ vertices is considered, from which subgraphs of size $3^i$ are presented. Each of these instances has an optimal solution of size at least $2^i$, and it is shown that any deterministic algorithm can have a profit of at most $\alpha(3/2)^i\log n$ on average, for some constant α. From that, using Yao's principle [154] as stated in [34], the result follows. The same set of instances thus yields the following result.

Theorem 39 (Bartal et al. [17]). Any randomized algorithm for the Max-π problem has competitive ratio $\Omega(n^{2/\log 3 - 1 - o(1)})$, even if preemption is allowed.

Next, we describe some specific online problems that allow us to give lower bounds on the advice complexity using a special kind of reduction. Böckenhauer et al. [27] introduced a very generic online problem called string guessing with known history over alphabets of size σ (σ-SGKH). The input is a sequence of requests $(x_0, \dots, x_n)$ where $x_0 = n$ and, for $i \ge 1$, $x_i \in \{1, \dots, \sigma\}$. The algorithm has to produce a sequence of answers $(y_1, \dots, y_n, y_{n+1})$, where $y_i \in \{1, \dots, \sigma\}$ and $y_{n+1} = \bot$, and where $y_i$ is allowed to depend on $x_0, \dots, x_{i-1}$ (and of course any advice bits the algorithm reads). The cost is the number of positions i for which $y_i \ne x_i$.

Theorem 40 (Böckenhauer et al. [27]). Let σ ≥ 2. Any online algorithm with advice for σ-SGKH that guesses γn characters of the input correctly must read at least
$$\left(1 + (1-\gamma)\log_\sigma\left(\frac{1-\gamma}{\sigma-1}\right) + \gamma\log_\sigma\gamma\right)n\log\sigma$$
bits of advice.

Mikkelsen [128] introduced the problem anti-string guessing with known history over alphabets of size σ (Anti-σ-SGKH). It is defined exactly as σ-SGKH except that the cost is the number of positions i for which $y_i = x_i$.

Theorem 41 (Mikkelsen [128, Theorem 11]). Let σ ≥ 2 and let $1 \le c < \sigma/(\sigma-1)$. Any c-competitive Anti-σ-SGKH algorithm must read at least
$$\left(1 - h_\sigma\!\left(\frac{1}{c}\right)\right)n\log\sigma$$
bits of advice, where n is the input length. This holds even if n is known in advance. Here, $h_\sigma$ is the σ-ary entropy function given by $h_\sigma(x) = x\log_\sigma(\sigma-1) - x\log_\sigma x - (1-x)\log_\sigma(1-x)$.

Boyar et al. [40] investigated a problem called maximum asymmetric string guessing (maxASGk). The input is a sequence of requests $(x_0, \dots, x_n)$ where $x_0 = \bot$ and, for $i \ge 1$, $x_i \in \{0,1\}$. The algorithm has to produce a sequence of answers $(y_1, \dots, y_n, y_{n+1})$. The output is feasible if $x_i \le y_i$ for all $1 \le i \le n$. The profit of the algorithm is the number of zeros in $y_1, \dots, y_n$ for feasible outputs, and −∞ otherwise. The "blind" version of the problem, where the algorithm has to produce the output without actually seeing the requests (i. e., in each step, the algorithm receives some dummy request ⊥), is denoted maxASGu. In what follows, let
$$B_c := \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) \approx \frac{1}{c}\cdot\frac{1}{e\ln 2}.$$

Theorem 42 (Boyar et al. [40]). For any function c(n) such that 1 ≤ c(n) ≤ n, there is a c-competitive algorithm for maxASGk (maxASGu, respectively) with advice of size $B_c \cdot n + O(\log n)$. Moreover, any c-competitive algorithm for maxASGk (maxASGu, respectively) must read at least

Bc · n−O(log n)

bits of advice.

Note that, in general, it does not make much difference if the length of the input is initially known to the algorithm or not. More specifically, it changes the advice complexity by at most O(log n).

8.3 Max-π and Min-π without Preemption

First, we show that for any non-trivial hereditary property π, the Max-π problem is equivalent to asymmetric string guessing in the following sense.

Theorem 43. If there is a c-competitive algorithm for maxASGu, then there is a c-competitive algorithm for Max-π using the same advice.

Theorem 44. If there is a c-competitive algorithm for Max-π that reads b(n) bits of advice, then there is a c-competitive algorithm for maxASGk using $b(n) + O(\log^2 n)$ bits of advice.

The proof of Theorem 43 is omitted due to space constraints. Before proving Theorem 44, let us recall Lemma 3 from Bartal et al. [17].

Lemma 26 (Bartal et al. [17]). Given any graph H, there exist constants $n_0$ and α such that for all $n > n_0$ there exists a graph G on n vertices such that any induced subgraph of G on at least $\alpha\log n$ vertices contains H as an induced subgraph.

This is a variant of Lemma 9 from Lund and Yannakakis3 [122].

Lemma 27 (Lund and Yannakakis [122]). Let H be a graph on k vertices. For sufficiently large N, for any graph G on N vertices and for all $\ell = \Omega(\log N)$, a random subgraph G′ of G does not, with probability 1/2, contain a subset S of $\ell$ vertices that is a clique in G but H is not an induced subgraph of G′[S].

Proof of Theorem 44. According to Lemma 25, π is satisfied either by all cliques or by all independent sets. Without loss of generality, suppose the latter (otherwise, swap the edges and non-edges in the following arguments).

Consider a binary string $\nu = x_1, \dots, x_n$ (for large enough n). Let us consider the graph $G_\nu = (V, E)$ defined as follows. Let H be an arbitrary but fixed forbidden subgraph of π. Let G′ be the n-vertex graph from Lemma 26 with vertices $V = \{v_1, \dots, v_n\}$. If $x_i = 0$ for some i, delete from G′ all edges $\{v_i, v_j\}$ for j > i. In the graph $G_\nu$ defined this way, the vertices $v_i$ for which the corresponding $x_i$ satisfies $x_i = 0$ (denoted by $I_\nu \subseteq V$ in the sequel) form an independent set, and hence $G_\nu[I_\nu]$ has property π. On the other hand, any induced subgraph $G_\nu[S]$ with property π can contain at most $\alpha\log n$ vertices from $V \setminus I_\nu$ (otherwise it would contain the forbidden graph H as induced subgraph). Note that, with O(log n) bits of advice to encode n, the graph $G_\nu$ can be constructed from the string ν in an online manner: the base graph G′ is fixed for a fixed n, and the subgraph $G_\nu[v_1, \dots, v_i]$ depends only on the values of $x_1, \dots, x_{i-1}$.

Now consider a c-competitive algorithm Algπ for Max-π that uses b bits of advice. Let us describe how to derive an algorithm Alg for maxASGk from Algπ. For a given string ν = x1, . . . , xn, where ⊥, x1, . . . , xn is the input for maxASGk, the advice for Alg consists of three parts: first, there is a self-delimiting encoding of n using O(log n) bits, followed by a (self-delimiting) correction string eν of length O(log² n) bits described later, and the rest is the advice for Algπ on the input Gν. Let S be the solution (set of vertices) returned by Algπ on Gν (with the proper advice). As argued before, S can contain at most α log n vertices from V \ Iν. The indices of these vertices from Sout := S ∩ (V \ Iν) are part of the string eν. Apart from that, eν contains the indices of at most α log n vertices Sin ⊆ Iν such that |(S \ Sout) ∪ Sin| = min{|S|, |Iν|}.

The algorithm Alg works as follows: at the beginning, it constructs the graph G′. When a request xi arrives, Alg sends the new vertex vi of Gν to Algπ and finds out whether vi ∈ S. If vi ∈ Sin, Alg answers 0 regardless of the answer of Algπ. Similarly, if vi ∈ Sout, Alg answers 1. Otherwise, Alg answers 0 if and only if vi ∈ S.

3Note that the original lemma speaks about pseudo-random subgraphs, which is a stronger assumption that we do not need here.


First, note that Alg always produces a feasible solution: if the input xi = 1, then either vi ∉ S and Alg returns yi = 1, or else vi is included in Sout. Moreover, the number of zeros (the profit) in the output of Alg is min{|S|, |Iν|}, where |Iν| is the profit of the optimal solution. Since Algπ is c-competitive, |S| ≥ 1/c · profit(Opt(Gν)) ≥ 1/c · |Iν|.

Corollary 12. Let π be any non-trivial hereditary property. Let A_{c,n} be the minimum advice needed for a c-competitive Max-π algorithm. Then

B_c \cdot n - O(\log^2 n) \leq A_{c,n} \leq B_c \cdot n + O(\log n).

We have shown that the advice complexity of Max-π essentially does not depend on the choice of the property π. Interestingly, this is not the case for cohereditary properties and Min-π. On the one hand, there are cohereditary properties where little advice is sufficient for optimality, as the following theorem shows.

Theorem 45. If a cohereditary property π can be characterized by finitely many obligatory subgraphs, there is an optimal algorithm for Min-π with advice O(log n).

Proof. Since each obligatory subgraph has constant size, O(log n) bits can be used to encode the indices of the vertices (forming the smallest obligatory subgraph) that are included in an optimal solution.

On the other hand, there are properties for which Min-π requires large advice, as stated by the following theorem, which was proven by Boyar et al. [40]. The minimum cycle finding problem requires identifying a smallest possible set of vertices S such that G[S] contains a cycle. Hence, it is the Min-π problem for the non-trivial cohereditary property “contains a cycle.”

Theorem 46 (Boyar et al. [40]). Any c-competitive algorithm for the minimum cycle finding problem must read at least

B_c \cdot n - O(\log n)

bits of advice.

An upper bound analogous to Theorem 43 also follows from the results of Boyar et al. [40]. Note that, for the minimum cycle finding problem, this bound is tight up to an additive term of O(log n).

Theorem 47. Let π be any non-trivial cohereditary property. There is a c-competitive algorithm for Min-π which reads

B_c \cdot n + O(\log n)

bits of advice.

8.4 Max-π with Preemption – Large Competitive Ratios

In this and the subsequent section, we consider the problem Max-π with preemption, where π is a non-trivial hereditary property. In every time step, an online algorithm can either accept or reject the currently given vertex and preempt any number of vertices that it accepted in previous time steps. However, vertices that were once rejected or preempted cannot be accepted in later time steps. The goal is to accept as many vertices as possible. After each request, the current solution is required to have the property π.4 Using a string guessing reduction, we can prove the following theorem; due to space constraints, we only give the idea.

Theorem 48. Consider the Max-π problem with preemption for a hereditary property π with a forbidden subgraph H, such that π holds for all independent sets. Let c(n) be an increasing function such that c(n) log c(n) = o(\sqrt{n}/\log n). Any c(n)-competitive Max-π algorithm must read at least

\Omega\left(\frac{n}{c(n)^2 \log c(n)}\right)

bits of advice.

Proof Sketch. First, for some given n and σ, let us define the graph G_{n,σ} that will be used in the reduction. To ease the presentation, assume that n′ = n/σ is an integer. Let G1 be a graph with σ vertices, the existence of which is asserted by Lemma 26, such that any subgraph of G1 with at least κ1 log σ vertices contains H as an induced subgraph. Let G_B be the complement of a union of n′ cliques of size σ each (i. e., G_B consists of n′ independent sets V1, . . . , V_{n′} of size σ each, and all remaining pairs of vertices are connected by edges). Applying Lemma 27 to G_B proves the existence of a graph G2 ⊆ G_B such that any subset of G2 with at least κ2 log n vertices contains H as an induced subgraph. The graph G_{n,σ} is obtained from G2 by replacing each independent set Vi with a copy of G1 (each such copy is called a “layer” in what follows).

Let us suppose that a c(n)-competitive Max-π algorithm Alg is given that uses b(n) advice bits on instances of size n. Now fix an arbitrary n, and choose σ := 4cκ1 log(4cκ1). We show how to solve instances of σ-SGKH of length n′ − 1 using Alg. Let q1, . . . , q_{n′−1} be the instance of σ-SGKH, where qi ∈ {1, . . . , σ}. The corresponding instance G for the Max-π problem is as follows: take the graph G_{n,σ}, and denote by v_{i,1}, . . . , v_{i,σ} the vertices of the set Vi. Let v_{i,qi} be the distinguished vertex in set Vi. Delete from G_{n,σ} all edges of the form {v_{i,qi}, v_{i′,q_{i′}}} where i′ > i. The resulting graph G is presented to Alg in the order v_{1,1}, . . . , v_{1,σ}, v_{2,1}, . . . , v_{2,σ}, . . . .

Note that G can be constructed online based on the instance q1, . . . , q_{n′−1}. The distinguished vertices form an independent set of size n′, and thus a feasible solution. On the other hand, apart from the distinguished vertices, any solution can have at most κ1 log σ vertices in one layer (otherwise, there would be a forbidden subgraph in that layer), and at most κ2 log n layers with vertices other than the distinguished ones (if there are more than κ2 log n nonempty layers, choose one vertex from each nonempty layer; these form a clique in G_B and, due to Lemma 27, induce H in G2, and thus also in G). Hence, n′ ≤ profit(Opt(G)) ≤ n′ + K, where K := κ1κ2 log σ log n.

4Note that without preemption, the condition to maintain π in every time step is implicit. Indeed, if π is violated in some step, the algorithm has accepted a forbidden subgraph, which means that, no matter how the sequence continues, the solution will ultimately be invalid. Let us emphasize that any algorithm that works for the case without preemption also works with preemption.

Since Alg is c-competitive, it produces a solution of size at least profit(Opt(G))/c. Since any solution can have at most K non-distinguished vertices, the solution of Alg contains at least g := profit(Opt(G))/c − K distinguished vertices.

Consider an algorithm Alg′ for σ-SGKH on an instance of length n′ − 1, which simulates Alg. For the ith request, it presents Alg the layer of vertices Vi. Let Cand(i) ⊆ Vi (the candidate set) be the set of vertices selected by Alg from Vi. As stated before, |Cand(i)| ≤ κ1 log σ. A set Cand(i) is good if it contains the distinguished vertex v_{i,qi}. It follows from the definition of the problem that there are at least g good candidate sets.

Alg′ uses an additional O(log log σ) bits of advice to describe a number j with 1 ≤ j ≤ κ1 log σ, and selects the jth vertex from any set Cand(i) as its answer (if |Cand(i)| is smaller than j, the set is extended in an arbitrary fixed way). The number j is selected in such a way that Alg′ gives the correct answer for a fraction of 1/(κ1 log σ) of the good sets. As a result, the fraction of correctly guessed numbers by Alg′ is at least

\alpha := \frac{n' - cK}{c \kappa_1 \log \sigma \, (n' - 1)}.

Note that 1/(cκ1 log σ) ≥ α ≥ 1/(2cκ1 log σ) holds for large enough n, provided that n′ ≥ 2cK − 1. To see that this inequality holds, note that

n' \geq 2cK - 1 \iff \frac{n}{4c\kappa_1 \log(4c\kappa_1)} \geq 2cK - 1 \iff (2cK - 1) \cdot 4c\kappa_1 \log(4c\kappa_1) \leq n.

The last inequality holds for large enough n by the choice of c(·), due to the fact that

(2cK - 1) \cdot 4c\kappa_1 \log(4c\kappa_1) \in O(c(n)^2 K \log c(n)) = O((c(n) \log c(n))^2 \log n) = o(n).

Due to Theorem 40, any algorithm for σ-SGKH that correctly guesses a fraction of α numbers (for 1/σ ≤ α ≤ 1) on an input of length n′ − 1 requires at least b := F(σ, α) · (n′ − 1) · log σ bits of advice, where

F(\sigma, \alpha) := 1 + (1 - \alpha) \log_\sigma \left(\frac{1 - \alpha}{\sigma - 1}\right) + \alpha \log_\sigma \alpha.

It can be shown that F(σ, α) log σ ∈ Ω(1/c). Finally, the theorem follows by noting that n′ − 1 ∈ Ω(n/(c log c)).

Using a similar approach, we can get a stronger bound for the independent set problem; the proof is omitted due to space constraints.

Theorem 49. Let c(n) be any function such that

8 \leq c(n) \leq \frac{1 + \sqrt{1 + 4n}}{4}.


Any c(n)-competitive independent set algorithm that can use preemption must read at least

0.01 \cdot \frac{\log(2c)}{2c^2} (n - 2c)

bits of advice.

8.5 Max-π with Preemption – Small Competitive Ratios

In this section, we use Theorem 41 to give bounds for small constant values of the competitive ratio for Max-π algorithms, complementing the bounds from Theorem 48. In what follows, π is a non-trivial hereditary property and k is the size of a smallest forbidden subgraph with respect to π.

Theorem 50. If there is a c-competitive algorithm for Max-π with preemption that reads b(kn) bits of advice for inputs of length kn, then there exists a c-competitive algorithm for Anti-k-SGKH, which, for inputs of length n, reads

b(kn) + O(\log^2 n)

bits of advice.

Proof. According to Lemma 25, π is satisfied either by all cliques or by all independent sets. As in the proof of Theorem 44, we assume in the following that π is satisfied by all independent sets (if it is not, we can use the same argument by swapping edges and non-edges between layers). We describe how to transform an instance of Anti-k-SGKH into an instance of Max-π with preemption. The length of the instance for Max-π with preemption will be k times as long as the length n of the Anti-k-SGKH instance. We proceed to show that a c-competitive algorithm for the latter implies a c-competitive algorithm for the former which reads at most O(log² n) additional advice bits.

Let ν = x1, . . . , xn with xi ∈ {1, . . . , k} be an instance of Anti-k-SGKH. Consider the n-vertex graph G = (V(G), E(G)) given by Lemma 26 for a size-k smallest minimal forbidden subgraph H = (V(H), E(H)) for π. Recall that any induced subgraph of G with at least α log n vertices contains H as an induced subgraph. Let us denote V(G) = {v1, . . . , vn} and V(H) = {h1, . . . , hk}. We now describe the construction of a graph Gν = (V(Gν), E(Gν)), which will be the input for the given algorithm for Max-π. To this end, let

V(G_\nu) := \bigcup_{i=1}^{n} \bigcup_{j=1}^{k} \{v_j^i\},

E(G_\nu) := \left\{ \{v_j^i, v_{j'}^i\} \mid \{h_j, h_{j'}\} \in E(H) \right\} \cup \left\{ \{v_j^i, v_{j'}^{i'}\} \mid i < i',\ \{v_i, v_{i'}\} \in E(G),\ j \neq x_i \right\},

where we assume the ordering v_1^1, . . . , v_k^1, v_1^2, . . . , v_k^2, . . . , v_1^n, . . . , v_k^n on the vertices. Moreover, we denote the requests v_1^i, . . . , v_k^i as layer i. Let X denote the set of vertices v_{x_i}^i for i ∈ {1, . . . , n}.

We start with a few observations about Gν that are straightforward.


Observation 4. Gν[X] is an independent set of size n. In particular, it has property π.

Observation 5. Gν[{v_1^i, . . . , v_k^i}] = H for an arbitrary but fixed i. Thus, any induced subgraph of Gν that contains Gν[{v_1^i, . . . , v_k^i}] does not have property π.

Observation 6. Consider a set of vertices, V, in Gν which is disjoint from X. If |V| ≥ kα log n, then Gν[V] does not have property π. Note that V must in this case contain vertices from at least α log n different layers. These have H as an induced subgraph since none of them are in X.

Now consider a c-competitive algorithm Algπ for Max-π with preemption reading b(kn) bits of advice (recall that kn is the length of its input Gν). We start by describing an algorithm Alg′ for Anti-k-SGKH, which uses b(kn) bits of advice (n is the length of its input). Afterwards, we use Alg′ to define another algorithm Alg for Anti-k-SGKH, which uses O(log² n) additional advice bits and is c-competitive.

For a given string ν = x1, . . . , xn, let ⊥, x1, . . . , xn be the input for Anti-k-SGKH. Let S be the solution (set of vertices) returned by Algπ on Gν (with the proper advice). Note that this is the resulting set of vertices after the unwanted vertices have been preempted. Alg′ works as follows: it constructs the graph Gν online and simulates Algπ on it. When a request i arrives, the goal of Alg′ is to guess a number in {1, . . . , k} different from xi. It does this by presenting all vertices in layer i to Algπ. It is important to note that the vertices in layer i can be presented without knowledge of xi, . . . , xn. Let S_i denote the set of those layer-i vertices which are accepted by Algπ and have not been preempted after request v_k^i. In layer i, Alg′ outputs yi = w, where w is the smallest number in {1, . . . , k} such that v_w^i ∉ S_i. Note that such a number always exists due to Observation 5.
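The answer selection of Alg′ in layer i fits in one line; a sketch under our own naming, where S_i is the set of indices j such that v_j^i is currently accepted and not preempted:

def layer_answer(S_i, k):
    # Alg' answers the smallest w in {1, ..., k} with v_w^i not in S_i.
    # By Observation 5, Algpi cannot hold the entire layer (it would
    # induce H), so such a w always exists.
    return min(w for w in range(1, k + 1) if w not in S_i)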

We now describe Alg, which uses O(log² n) additional advice bits. The advice for Alg consists of three parts (similar to the proof of Theorem 44). First, it contains a self-delimiting encoding of n (this requires O(log n) bits). This is followed by a list of up to kα log n indices i where Alg′ outputs yi = xi. Let S_error denote the set of these indices. A self-delimiting encoding of this list requires O(log² n) bits (recall that α and k are constants). Finally, the advice which Alg′ received is included; this is b(kn) bits.

Alg works as follows for each request. If the request is not in S_error, it outputs the same as Alg′. Conversely, if the request is in S_error, it outputs a different number in {1, . . . , k}.

We now argue that Alg is c-competitive. Note that the optimal offline solution for Gν contains at most kα log n vertices not in X. The same of course holds for the solution produced by Algπ. Moreover, it holds that if in layer i the algorithm Algπ accepts a vertex in X, then Alg′ outputs yi ≠ xi. This means that the score of Algπ is at most kα log n more than the score of Alg′. Since the score of Alg is kα log n more than the score of Alg′, we have that Alg is c-competitive.

Combining Theorems 41 and 50, we get the following corollary.


Corollary 13. Let 1 < c < k/(k − 1). Let π be any non-trivial hereditary property with a minimal forbidden subgraph of size k. Any c-competitive algorithm for Max-π with preemption must read at least

\left(1 - h_k\!\left(\frac{1}{c}\right)\right) \frac{n \log k}{k} - O(\log^2 n)

bits of advice, where n is the input length. Here, h_k is the k-ary entropy function given by h_k(x) = x \log_k(k-1) - x \log_k x - (1-x) \log_k(1-x).

8.6 Closing Remarks

In Corollary 12, we describe lower and upper bounds for the advice complexity of all online hereditary graph problems, which are essentially tight (there is just a gap of O(log² n)). It turns out that, for all of them, roughly the same amount of information about the future is required to achieve a certain competitive ratio.

Intriguingly, we see quite a different picture for cohereditary properties. Theorem 47 gives the same upper bound as we had for hereditary properties, and Theorem 46 shows that this upper bound is essentially tight. However, Theorem 45 shows that there exist cohereditary problems that have an advice complexity as low as O(log n) bits to be optimal. It remains open whether it is only those problems with a finite set of obligatory graphs that have this very low advice complexity, or whether this can also happen for cohereditary problems with an infinite set of obligatory graphs.

For hereditary problems with preemption, we show that to achieve a competitive ratio strictly smaller than k/(k − 1), a linear number of advice bits is needed. This is asymptotically tight, since optimality (even without preemption) can be achieved with n bits. Furthermore, we show a lower bound for non-constant competitive ratios (that are roughly smaller than √n). It remains open whether there is an algorithm for the preemptive case which uses fewer advice bits than the algorithms solving the same problem in the non-preemptive case.


CHAPTER 9

Weighted Online Problems with Advice

Joan Boyar, Lene M. Favrholdt, Christian Kudahl, and Jesper W. Mikkelsen

Department of Mathematics and Computer Science, University of Southern Denmark

{joan,lenem,jesperwm,kudahl}@imada.sdu.dk 1

Abstract

Recently, the first online complexity class, AOC, was introduced. The class consists of many online problems where each request must be either accepted or rejected, and the aim is to either minimize or maximize the number of accepted requests, while maintaining a feasible solution. All AOC-complete problems (including Independent Set, Vertex Cover, Dominating Set, and Set Cover) have essentially the same advice complexity. In this paper, we study weighted versions of problems in AOC, i.e., each request comes with a weight and the aim is to either minimize or maximize the total weight of the accepted requests. In contrast to the unweighted versions, we show that there is a significant difference in the advice complexity of complete minimization and maximization problems. We also show that our algorithmic techniques for dealing with weighted requests can be extended to work for non-complete AOC problems such as maximum matching (giving better results than what follow from the general AOC results) and even non-AOC problems such as scheduling.

9.1 Introduction

An online problem is an optimization problem for which the input is divided into small pieces, usually called requests, arriving sequentially. An online algorithm must serve each request, irrevocably, without any knowledge of possible future requests. The quality of online algorithms is traditionally measured using the competitive ratio [98, 139], which is essentially the worst case ratio of

1This work was partially supported by the Villum Foundation and the Danish Council for Independent Research, Natural Sciences, grant DFF-1323-00247.


the online performance to the performance of an optimal offline algorithm, i.e., an algorithm that knows the whole input sequence from the beginning and has unlimited computational power.

For some online problems such as Independent Set or Vertex Cover, the best possible competitive ratio is linear in the sequence length. This gives rise to the question of what would happen if the algorithm knew something about future requests. Sometimes a semi-online setting is studied, where it is assumed that the algorithm has some specific knowledge, such as the value of an optimal solution. The extra knowledge may also be more problem specific, such as an access graph for paging. In contrast to problem specific approaches, advice complexity [29, 60, 68] is a quantitative and standardized way of relaxing the online constraint. The main idea of advice complexity is to provide an online algorithm, Alg, with some partial knowledge of the future in the form of advice bits provided by a trusted oracle which has unlimited computational power and knows the entire request sequence. Informally, the advice complexity of an algorithm is the maximum number of advice bits read for input sequences of a given length, and the advice complexity of a problem is the advice complexity of the best possible algorithm for the problem. Upper bounds on the advice complexity for a problem can sometimes lead to (or come from) semi-online algorithms, and lower bounds can show that such algorithms do not exist. Since its introduction, advice complexity has been a very active area of research. Lower and upper bounds on the advice complexity have been obtained for a large number of online problems; a recent list can be found in [140].

Recently in [40], the first complexity class for online problems, AOC, was introduced. The class consists of online problems that can be described in the following way: The input is a sequence of requests, and each request must either be accepted or rejected. The set of accepted requests is called the solution. For each request sequence, there is at least one feasible solution. The class contains minimization as well as maximization problems. For a minimization problem, the goal is to accept as few requests as possible while maintaining a feasible solution, and for maximization problems, the aim is to accept as many requests as possible. For minimization problems, any superset of a feasible solution is also a solution, and for maximization problems, any subset of a feasible solution is also a feasible solution.

In this paper, we consider a generalization of the problems in the class AOC in which each request comes with a weight. The goal is now to either minimize or maximize the total weight of the accepted requests. We separately consider the classes of maximization and minimization problems. For AOC-complete maximization problems, we get advice complexity results quite similar to those for the unweighted versions of the problems, but for AOC-complete minimization problems, the results are a lot more negative, so this gives a complexity class containing harder problems than AOC. This is in contrast to unweighted AOC-complete problems, where minimization and maximization problems are equally hard in terms of advice complexity. Recently, differences between (unweighted) AOC minimization and maximization problems were found with respect to online bounded analysis [37] and min- and max-induced subgraph problems [103].

Our upper bound techniques are also useful for non-complete AOC problems such as maximum matching, as well as for non-AOC problems such as scheduling.


Previous results. For any AOC-complete problem, Θ(n/c) advice bits are necessary and sufficient to obtain a competitive ratio of c. More specifically, for competitive ratio c, the advice complexity is B(n, c) ± O(log n), where

B(n, c) = \log\left(1 + \frac{(c-1)^{c-1}}{c^c}\right) n, \qquad (9.1)

and an/c ≤ B(n, c) ≤ n/c for a = 1/(e ln 2) ≈ 0.53. This is an upper bound on the advice complexity of all problems in AOC. In [40], a list of problems including Independent Set, Vertex Cover, Dominating Set, and Set Cover was proven AOC-complete.

The paper [4] studies a semi-online version of scheduling where it is allowed to keep several parallel schedules and choose the best schedule in the end. The scheduling problem considered is makespan minimization on m identical machines. Using (1/ε)^{O(log(1/ε))} parallel schedules, a (4/3 + ε)-competitive algorithm is obtained. Moreover, a (1 + ε)-competitive algorithm using (m/ε)^{O(log(1/ε)/ε)} parallel schedules is given, along with an almost matching lower bound. Note that keeping s different schedules until the end corresponds to working with s different online algorithms. Thus, this particular semi-online model easily translates to the advice model, the advice being which of the s algorithms to run. In this way, the results of [4] correspond to a (4/3 + ε)-competitive algorithm using O(log²(1/ε)) advice bits and a (1 + ε)-competitive algorithm using O(log(m/ε) · log(1/ε)/ε) advice bits. In particular, note that this algorithm uses constant advice in the size of the input and only logarithmic advice in the number of machines.

In [135], scheduling on identical machines with a more general type of objective function (including makespan, minimizing the ℓp-norm, and machine covering) was studied. The paper considers the advice-with-request model, where a fixed number of advice bits is provided along with each request. The main result is a (1 + ε)-competitive algorithm that uses O((1/ε) · log(1/ε)) advice bits per request, totaling O((n/ε) · log(1/ε)) bits of advice for the entire sequence.

Our results. We prove that, when arbitrary weights are added, AOC-complete minimization problems become a lot harder than AOC-complete maximization problems:

• For AOC-complete maximization problems, the weighted version is not significantly harder than the unweighted version: For any maximization problem in AOC (this includes, e.g., Independent Set), the c-competitive algorithm given in [40] for the unweighted version of the problem can be converted into a (1 + ε)c-competitive algorithm for the weighted version using only O((log² n)/ε) additional advice bits. Thus, a (1 + ε)c-competitive algorithm using at most B(n, c) + O((log² n)/ε) bits of advice is obtained.

For non-complete AOC problems, better trade-offs between the competitive ratio and the number of advice bits can be obtained. We show that any c-competitive algorithm for an AOC problem, P, using b advice bits can be converted into an O(c · log n)-competitive algorithm for the weighted version of P using b + O(log n) advice bits. For maximum weight matching, this implies an O(log n)-competitive algorithm reading O(log n) bits


of advice. We show that this is best possible in the following sense: For a set of weighted AOC problems including Matching, Independent Set, Clique, and Set Cover, no algorithm reading o(log n) bits of advice can have a competitive ratio bounded by any function of n. Furthermore, any O(1)-competitive algorithm for weighted matching must read Ω(n) advice bits.

• For all minimization problems known to be AOC-complete (this includes, e.g., Vertex Cover, Dominating Set, and Set Cover), n − O(log n) bits of advice are required to obtain a competitive ratio bounded by a function of n. This should be contrasted with the fact that n bits of advice trivially yield a 1-competitive algorithm.

If the largest weight wmax cannot be arbitrarily larger than the smallest weight wmin, the c-competitive algorithm given in [40] for the unweighted version can be converted into a c(1 + ε)-competitive algorithm for the weighted version using B(n, c) + O(log² n + log(log(wmax/wmin)/ε)) advice bits in total.

Our main upper bound technique is a simple exponential classification scheme that can be used to sparsify the set of possible weights. This technique can also be used for problems outside of AOC. For example, for scheduling on related machines, we show that for many important objective functions (including makespan minimization and minimizing the ℓp-norm), there exist (1 + ε)-competitive algorithms reading O((log² n)/ε) bits of advice. For scheduling on m unrelated machines where m is constant, we get a similar result, but with O((log n)^{m+1}/ε^m) advice bits. Finally, for unrelated machines, where the goal is to maximize an objective function, we show that under some mild assumptions on the objective function (satisfied, for example, for machine covering), there is a (1 + ε)-competitive algorithm reading O((log n)^{m+1}/ε^m) bits of advice.

For scheduling on related and unrelated machines, our results are the first non-trivial upper bounds on the advice complexity. For the case of makespan minimization on identical machines, the algorithm of [4] is strictly better than ours. However, for minimizing the ℓp-norm or maximizing the minimum load on identical machines, we exponentially improve the previous best upper bound [135] (which was linear in n).

9.2 Preliminaries

Notation. Throughout the paper, we let n denote the number of requests in the input. We let R+ denote the set containing 0 and all positive real numbers. We let log denote the binary logarithm log₂. For k ≥ 1, [k] = {1, 2, . . . , k}. For any bit string y, let |y|₀ and |y|₁ denote the number of zeros and the number of ones, respectively, in y. We write x ⊑ y if, for all indices i, xi = 1 ⇒ yi = 1.

Advice complexity and competitive ratio. In this paper, we use the “advice-on-tape” model [29]. Before the first request arrives, the oracle, which may know the entire request sequence, prepares an advice tape, an infinite binary string. The algorithm Alg may, at any point, read some bits from the advice tape. The advice complexity of Alg is the maximum number of bits read by Alg for any input sequence of at most a given length. Opt is an optimal offline algorithm.

Advice complexity is combined with competitive analysis to determine how many bits of advice are necessary and sufficient to achieve a given competitive ratio.

Definition 19 (Competitive ratio [98, 139] and advice complexity [29]). The input to an online problem, P, is a request sequence σ = 〈r1, . . . , rn〉. An online algorithm with advice, Alg, computes the output y = 〈y1, . . . , yn〉, where yi is computed from ϕ, r1, . . . , ri, where ϕ is the content of the advice tape. Each possible output for P is associated with a cost/profit. For a request sequence σ, Alg(σ) (Opt(σ)) denotes the cost/profit of the output computed by Alg (Opt) when serving σ.

If P is a maximization (minimization) problem, then Alg is c(n)-competitive if there exists a constant, α, such that, for all n ∈ N, Opt(σ) ≤ c(n) · Alg(σ) + α (Alg(σ) ≤ c(n) · Opt(σ) + α) for all request sequences, σ, of length at most n. If the relevant inequality holds with α = 0, we say that Alg is strictly c(n)-competitive.

The advice complexity, b(n), of an algorithm, Alg, is the largest number of bits of ϕ read by Alg over all possible request sequences of length at most n. The advice complexity of a problem, P, is a function, f(n, c), c ≥ 1, such that the smallest possible advice complexity of a strictly c-competitive online algorithm for P is f(n, c).

We only consider deterministic online algorithms (with advice). Note that both the advice read and the competitive ratio may depend on n, but, for ease of notation, we often write b and c instead of b(n) and c(n). Also, with this definition, c ≥ 1 for both minimization and maximization problems.

In this paper, we consider the complexity class AOC from [40].

Definition 20 ( [40]). A problem, P, is in AOC (Asymmetric Online Covering) if it can be defined as follows: The input to an instance of P consists of a sequence of n requests, σ = 〈r1, . . . , rn〉, and possibly one final dummy request. An algorithm for P computes a binary output string, y = y1 . . . yn ∈ {0, 1}^n, where yi = f(r1, . . . , ri) for some function f.

For minimization (maximization) problems, the score function, s, maps a pair, (σ, y), of input and output to a cost (profit) in N ∪ {∞} (N ∪ {−∞}). For an input, σ, and an output, y, y is feasible if s(σ, y) ∈ N. Otherwise, y is infeasible. There must exist at least one feasible output. Let Smin(σ) (Smax(σ)) be the set of those outputs that minimize (maximize) s for a given input σ.

If P is a minimization problem, then for every input, σ, the following must hold:

1. For a feasible output, y, s(σ, y) = |y|₁.

2. An output, y, is feasible if there exists a y′ ∈ Smin(σ) such that y′ ⊑ y. If there is no such y′, the output may or may not be feasible.


If P is a maximization problem, then for every input, σ, the following must hold:

1. For a feasible output, y, s(σ, y) = |y|₀.

2. An output, y, is feasible if there exists a y′ ∈ Smax(σ) such that y′ ⊑ y. If there is no such y′, the output may or may not be feasible.

Recall that no problem in AOC requires more than B(n, c) + O(log n) bits of advice (see Eq. (9.1) for the definition of B(n, c)). The problems in AOC requiring the most advice are AOC-complete [40]:

Definition 21 ( [40]). A problem P ∈ AOC is AOC-complete if, for all c > 1, any c-competitive algorithm for P must read at least B(n, c) − O(log n) bits of advice.

An AOC-Complete Problem. In [40], an abstract guessing game, minASGk, was introduced and shown to be AOC-complete. The minASGk problem itself is very artificial, but it is well-suited as the starting point of reductions. All minimization problems known to be AOC-complete have been shown to be so via reductions from minASGk.

The input for minASGk is a secret string x = x1x2 . . . xn ∈ {0, 1}^n given in n rounds. In round i ∈ [n], the online algorithm must answer yi ∈ {0, 1}. Immediately after answering, the correct answer xi for round i is revealed to the algorithm. If the algorithm answers yi = 1, it incurs a cost of 1. If the algorithm answers yi = 0, then it incurs no cost if xi = 0, but if xi = 1, then the output of the algorithm is declared to be infeasible (and the algorithm incurs a cost of ∞). The objective is to minimize the total cost incurred. Note that the optimal solution has cost |x|₁. See the appendix for a formal definition of minASGk and for definitions of other AOC-complete problems.

The problem minASGk is based on the binary string guessing problem [27, 68]. Binary string guessing is similar to asymmetric string guessing, except that any wrong guess (0 instead of 1 or 1 instead of 0) gives a cost of 1.

In Theorem 51, we show a very strong lower bound for a weighted version of minASGk. In Theorem 52, via reductions, we show that this lower bound implies similarly strong lower bounds for other weighted AOC-complete minimization problems.

Weighted AOC. We now formally define weighted versions of the problems in AOC.

Definition 22. Let P be a problem in AOC. We define the weighted version of P, denoted Pw, as follows: A Pw-input σ = 〈r1, w1, r2, w2, . . . , rn, wn〉 consists of n P-requests, r1, . . . , rn, each of which has a weight wi ∈ R+. The P-request ri and its weight wi are revealed simultaneously. An output y = y1 . . . yn ∈ {0, 1}^n is feasible for the input σ if and only if y is feasible for the P-input 〈r1, . . . , rn〉. The cost (profit) of an infeasible solution is ∞ (−∞).

If P is a minimization problem, then the cost of a feasible Pw-output y for an input σ is

s(\sigma, y) = \sum_{i=1}^{n} w_i y_i .

If P is a maximization problem, then the profit of a feasible Pw-output y for an input σ is

s(\sigma, y) = \sum_{i=1}^{n} w_i (1 - y_i) .
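Operationally, the score of a feasible output is just a weighted count of the accepted requests; recall that yi = 1 means “accept” for minimization problems, while yi = 0 means “accept” for maximization problems. A small Python sketch of Definition 22 (our own naming):

def weighted_score(weights, y, minimization):
    # Cost (profit) of a feasible P_w-output y under Definition 22.
    if minimization:
        return sum(w for w, yi in zip(weights, y) if yi == 1)
    else:
        return sum(w for w, yi in zip(weights, y) if yi == 0)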

9.3 Weighted Versions of AOC-Complete Minimization Problems

In the weighted version of minASGk, minASGkw, each request consists of the weight of the current request and the value (0 or 1) of the previous request. Producing a feasible solution requires accepting (answering 1 to) all requests with value 1, and the cost of a feasible solution is the sum of all weights for requests which are accepted.

We start with a negative result for minASGkw and then use it to obtain similar results for the weighted online versions of Vertex Cover, Set Cover, Dominating Set, and Cycle Finding.

Theorem 51. For minASGkw, the competitive ratio of any algorithm with less than n bits of advice is not bounded by any function of n.

Proof. Let Alg be any algorithm for minASGkw reading at most n − 1 bits of advice. We show how an adversary can construct input sequences where the cost of Alg is arbitrarily larger than that of Opt. We only consider sequences with at least one 1. It is easy to see that for the unweighted version of the binary string guessing problem, n bits of advice are necessary in order to guess correctly each time: If there are fewer than n bits, there are only 2^{n−1} possible advice strings, so, even if we only consider the 2^n − 1 possible inputs with at least one 1, there are at least two different request strings, x and y, which get the same advice string. Alg will make an error on one of the strings when guessing the first bit where x and y differ, since up until that point Alg has the same information about both strings.

We describe a way to assign weights to the requests in minASGkw such that if Alg makes a single mistake (either guessing 0 when the correct answer is 1 or vice versa), its competitive ratio is unbounded. We use a large number a > 1, which we allow to depend on n. All weights are from the interval [1, a] (note that they are not necessarily integers). We let x = x1, . . . , xn be the input string and set w1 = a^{1/2}. For i > 1, wi is given by:

w_i = \begin{cases} w_{i-1} \cdot a^{-2^{-i}}, & \text{if } x_{i-1} = 0 \\ w_{i-1} \cdot a^{2^{-i}}, & \text{if } x_{i-1} = 1 \end{cases}

Since the weights are only a function of previous requests, they do not reveal any information to Alg about future requests.
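The recurrence is straightforward to generate; the following sketch (our code, not from the paper) computes the weight sequence from the input string x and the large constant a:

def adversarial_weights(x, a):
    # w_1 = a^(1/2); for i > 1, w_i = w_{i-1} * a^(-2^-i) if
    # x_{i-1} = 0 and w_i = w_{i-1} * a^(2^-i) if x_{i-1} = 1.
    # The exponent of a is 1/2 +/- 1/4 +/- 1/8 ..., which always
    # stays strictly between 0 and 1, so all weights lie in [1, a].
    w = [a ** 0.5]
    for i in range(2, len(x) + 1):
        sign = 1 if x[i - 2] == 1 else -1  # x_{i-1}, 0-indexed
        w.append(w[-1] * a ** (sign * 2.0 ** -i))
    return w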


Observation 7. For each i, the following hold:

• If xi = 0, then w_j ≤ w_i · a^{−2^{−n}} for all j > i.

• If xi = 1, then w_j ≥ w_i · a^{2^{−n}} for all j > i.

We claim that if Alg makes a single mistake, its competitive ratio is not bounded by any function of n. Indeed, if Alg guesses 0 for a request, but the correct answer is 1, the solution is infeasible and Alg gets a cost of ∞.

We now consider the case where Alg guesses 1 for a request j, but the correct answer is 0. This request gives a contribution of w_j = a^b, for some 0 < b < 1, to the cost of the solution produced by Alg. Define j′ such that w_{j′} = max{w_i | x_i = 1}. Since Opt only answers 1 if x_i = 1, this is the largest contribution to the cost of Opt from a single request.

If j′ > j, Observation 7 gives that w_{j′} ≤ w_j · a^{−2^{−n}} = a^b · a^{−2^{−n}} = a^{b−2^{−n}}. The cost of Opt is at most n · w_{j′} ≤ n · a^{b−2^{−n}}. Thus,

\frac{\textsc{Alg}(x)}{\textsc{Opt}(x)} \geq \frac{a^b}{n \cdot a^{b - 2^{-n}}} = \frac{a^{2^{-n}}}{n}.

Since a can be arbitrarily large (recall that it can be a function of n), we see that the competitive ratio cannot be bounded by any specific function of n.

If j′ < j, Observation 7 gives us that w_j ≥ w_{j′} · a^{2^{−n}}. Using w_j = a^b, we get a^{b−2^{−n}} ≥ w_{j′}. We can repeat the argument from the case where j′ > j to see that the competitive ratio of Alg is not bounded by any function of n.

In order to show that similar lower bounds apply to all minimization problems known to be complete for AOC, we define a simple type of advice-preserving reduction for online problems. These are much less general than those defined by Sprock in his PhD dissertation [141], mainly because we do not allow the amount of advice needed to change by a multiplicative factor.

Let Opt_P(σ) denote the value of the optimal solution for request sequence σ for problem P, and let |σ| denote the number of requests in σ.

Definition 23. Let P1 and P2 be two online minimization problems, and let I1 be the set of request sequences for P1 and I2 be the set of request sequences for P2. For a given function g : N → R+, we say that there is a length preserving g-reduction from P1 to P2 if there is a transformation function f : I1 → I2 such that

• for all σ ∈ I1, |σ| = |f(σ)|, and

• for every algorithm Alg2 for P2, there is an algorithm Alg1 for P1 such that for all σ1 ∈ I1, the following holds: if Alg2 produces a feasible solution for σ2 = f(σ1) with advice φ(σ2), then Alg1, using at most |φ(σ2)| + g(|σ2|) advice bits, produces a feasible solution for σ1 such that

– Alg1(σ1) ≤ Alg2(σ2) + Opt_{P1}(σ1) and Opt_{P1}(σ1) ≥ Opt_{P2}(σ2), or

– Alg1(σ1) = Opt_{P1}(σ1).

Note that the transformation function f is length-preserving in that the lengths of the request sequences for the two problems are identical. This avoids the potential problem that the advice for the two problems would be functions of two different sequence lengths. The amount of advice for the problem being reduced to is allowed to be an additive function, g(n), longer than for the original problem, because this seems to be necessary for some of the reductions showing that problems are AOC-complete. Since the reductions are only used here to show that some competitive ratios are unbounded, the increase in the competitive ratio that occurs with these reductions is insignificant.

The following lemma shows how length-preserving reductions can be used.

Lemma 28. Let P1 and P2 be online minimization problems. Suppose that at least b1(n, c) advice bits are required to obtain a competitive ratio of c + 1 for P1, and suppose there is a length preserving g(n)-reduction from P1 to P2. Then, at least b1(n, c) − g(n) advice bits are needed for an algorithm for P2 to be c-competitive.

Proof. Let f be the transformation function associated with g. Suppose for the sake of contradiction that there is a (strictly) c-competitive algorithm Alg2 for P2 with advice complexity b2(n, c) < b1(n, c) − g(n). Then there exists a constant α such that for any request sequence σ1 ∈ I1, either Alg1(σ1) = Opt_{P1}(σ1) or

\textsc{Alg}_1(\sigma_1) \leq \textsc{Alg}_2(\sigma_2) + \textsc{Opt}_{P_1}(\sigma_1) \leq c \cdot \textsc{Opt}_{P_2}(\sigma_2) + \alpha + \textsc{Opt}_{P_1}(\sigma_1) \leq (c+1) \cdot \textsc{Opt}_{P_1}(\sigma_1) + \alpha,

where σ2 = f(σ1). Thus, Alg1 is (strictly) (c + 1)-competitive with less than b1(n, c) bits of advice, a contradiction.

All known AOC-complete problems were proven complete using length-preserving reductions from minASGk, so the following holds for the weighted versions of all such problems:

Theorem 52. For the weighted online versions of Vertex Cover, Cycle Finding, Dominating Set, and Set Cover, an algorithm reading less than n − O(log n) bits of advice cannot have a competitive ratio bounded by any function of n.

Proof. It is easily checked that the reductions in [40] showing that these problems are AOC-complete are length preserving O(log n)-reductions from minASGk, and hence, the theorem follows from Lemma 28. For example, for Vertex Cover, the following O(log n)-reduction is used:

Each input σ = 〈x1, x2, . . . , xn〉 to the problem minASGk is transformed to f(σ) = 〈v1, v2, . . . , vn〉, where V = {v1, v2, . . . , vn} is the vertex set of a graph with edge set E = {(vi, vj) : xi = 1 and i < j}.

The advice used by the minASGk algorithm Alg1 consists of the advice used by the Vertex Cover algorithm Alg2 in combination with 2 bits distinguishing three cases and possibly (an encoding of) one or two indices to positions in the input sequence.


Let V_{Alg2} ⊆ V be the vertex cover constructed by Alg2. Let X_{Alg1} be the set of requests on which Alg1 returns a 1. Then either

X_{Alg1} = {x_i | v_i ∈ V_{Alg2}},

or there exists a v_k ∉ V_{Alg2} such that x_k = 1 and

X_{Alg1} = {x_i | v_i ∈ V_{Alg2}} ∪ {x_k}

(which is the optimal solution), or there exists a v_j ∈ V_{Alg2} and a v_k ∉ V_{Alg2} such that x_k = 1 and

X_{Alg1} = ({x_i | v_i ∈ V_{Alg2}} ∪ {x_k}) \ {x_j}.

The weight of x_j is at least 0 and the weight of x_k is at most Opt_{P1}. Thus, in all cases, Alg1(σ) ≤ Alg2(f(σ)) + Opt_{P1}.

9.4 Exponential Sparsification

Assume that we are faced with an online problem which we know how to efficiently solve, possibly using advice, in the unweighted version (or when there are only few possible different weights). We use exponential sparsification, a simple technique which can help when designing algorithms with advice for weighted online problems by reducing the number of different possible weights the algorithm has to handle. The first step is to partition the set of possible weights into intervals of exponentially increasing length, i.e., for some small ε, 0 < ε < 1,

\mathbb{R}_+ = \bigcup_{k=-\infty}^{\infty} \left[(1+\varepsilon)^k, (1+\varepsilon)^{k+1}\right).

How to proceed depends on the problem at hand. We now informally explain the meta-algorithm that we repeatedly use in this paper. Note that if w1, w2 ∈ [(1 + ε)^k, (1 + ε)^{k+1}) and w1 ≤ w2, then w1 ≤ w2 ≤ (1 + ε)w1. For many online problems, this means that an algorithm can treat all requests whose weights belong to this interval as if they all had weight (1 + ε)^{k+1}, with only a small loss in the competitive ratio.

Consider now a set of weights and let wmax denote the largest weight in the set. Let kmax be the integer for which wmax ∈ [(1 + ε)^{kmax}, (1 + ε)^{kmax+1}). We say that a request with weight w ∈ [(1 + ε)^k, (1 + ε)^{k+1}) is unimportant if k < kmax − ⌈log_{1+ε}(n²)⌉. Furthermore, we will often categorize the request as important if kmax − ⌈log_{1+ε}(n²)⌉ ≤ k < kmax + 1 and as huge if k ≥ kmax + 1. Each unimportant request has weight w ≤ (1 + ε)^{k+1} ≤ (1 + ε)^{kmax − ⌈log_{1+ε}(n²)⌉} ≤ wmax/n², so the total sum of the unimportant weights is O(wmax/n). For many weighted online problems, this means that an algorithm can easily serve the requests with unimportant weights. In maximization problems, this is done by rejecting them. In minimization problems, it is done by accepting them. Thus, exponential sparsification (when applicable) essentially reduces the problem of computing a good approximate solution for a problem with n distinct weights to that of computing a good approximate solution with only O(log_{1+ε} n) distinct weights.
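A minimal sketch of the classification step (our naming; it presupposes that k_max is already known, e.g., learned from advice as described below):

import math

def classify(w, eps, n, k_max):
    # Bucket weight w > 0 by the interval [(1+eps)^k, (1+eps)^(k+1))
    # containing it, relative to k_max.
    k = math.floor(math.log(w, 1 + eps))
    cutoff = k_max - math.ceil(math.log(n * n, 1 + eps))
    if k < cutoff:
        return "unimportant"  # all such weights sum to O(w_max / n)
    elif k <= k_max:
        return "important"    # one of O(log_{1+eps} n) weight classes
    else:
        return "huge"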

For a concrete problem, several modifications of this meta-algorithm might be necessary. Often, the most tricky part is how the algorithm can learn kmax without using too much advice. One approach that we often use is the following: The oracle encodes the index i of the first request whose weight is close enough to (1 + ε)^{kmax} that the algorithm only needs a little bit of advice to deduce kmax from the weight of this request. If it is somehow possible for the algorithm to serve all requests prior to i reasonably well, then this approach works well.

Our main application of exponential sparsification is to weighted AOC problems. We begin by considering maximization problems. Note that no assumptions are made about the weights of Pw in Theorem 53.

Theorem 53. If P ∈ AOC is a maximization problem, then for any c > 1 and 0 < ε ≤ 1, Pw has a strictly (1 + ε)c-competitive algorithm using B(n, c) + O(ε^{−1} log² n) advice bits.

Proof. Fix ε > 0. Let σ = 〈r1, w1, . . . , rn, wn〉 be the input and let x = x1 . . . xn ∈ {0, 1}^n specify an optimal solution for σ, with zeros indicating membership in the optimal solution. Define s = 1 + ε/2. Let V_Opt = {i : xi = 0}. Note that V_Opt contains exactly those rounds in which Opt answers 0 and thus accepts. Furthermore, for k ∈ Z, let V^k = {i : s^k ≤ w(i) < s^{k+1}} and let V^k_Opt = V_Opt ∩ V^k. Finally, let imax ∈ V_Opt be such that w(imax) ≥ w(i) for every i ∈ V_Opt.

The oracle computes the unique m ∈ Z such that imax ∈ V^m_Opt. We say that a request ri is unimportant if w(i) < s^{m−⌈log_s(n²)⌉}, important if s^{m−⌈log_s(n²)⌉} ≤ w(i) < s^{m+1}, and huge if w(i) ≥ s^{m+1}. The oracle computes the index i′ of the first important request in the input sequence. Assume that i′ ∈ V^{m′}. The oracle writes the length n of the input onto the advice tape using a self-delimiting encoding2, and then writes the index i′ and the integer m − m′ (which is at most ⌈log_s(n²)⌉) onto the tape, using a total of O(log n) bits. This advice allows the algorithm to learn m as soon as the first important request arrives. From there on, the algorithm will know whether a request is important, unimportant, or huge. Whenever an unimportant or a huge request arrives, the algorithm answers 1 (rejects the request). We now describe how the algorithm and oracle work for the important requests.

For each 0 ≤ j ≤ ⌈log_s(n²)⌉, let n_{m−j} = |V^{m−j}|. For the requests (whose indices are) in V^{m−j}, we use the covering design based c-competitive algorithm for unweighted maxASG. This requires B(n_{m−j}, c) + O(log n_{m−j}) bits of advice. Since B(n, c) is linear in n, this means that we use a total of

b = \sum_{j=0}^{\lceil \log_s(n^2) \rceil} \left( B(n_{m-j}, c) + O(\log n_{m-j}) \right) \leq B(n, c) + O(\log_s n \cdot \log n)

bits of advice. Note that log_s(n) ≤ 2ε^{−1} log n for ε/2 ≤ 1, giving the bound on the advice in the statement of the theorem.

2For example, ⌈log n⌉ could be written in unary (⌈log n⌉ ones, followed by a zero) before writing n itself in binary.
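The footnote's encoding is easy to make explicit; the following sketch (ours, one standard variant of self-delimiting encoding) encodes a positive integer so that a decoder can read it off a longer advice tape:

def self_delim_encode(n):
    # Unary prefix: as many ones as the binary representation of n
    # has bits, then a zero, then n in binary. This uses
    # 2 * len(bin(n)) - 3 ones/zeros, i.e., O(log n) bits in total.
    body = bin(n)[2:]
    return "1" * len(body) + "0" + body

def self_delim_decode(tape):
    # Returns (n, number of tape bits consumed).
    ell = tape.index("0")          # length of the binary part
    return int(tape[ell + 1 : 2 * ell + 1], 2), 2 * ell + 1

n, used = self_delim_decode(self_delim_encode(13) + "0110")
assert (n, used) == (13, 9)        # remaining advice bits untouched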


We now prove that the algorithm achieves the desired competitive ratio. Let V_Alg be those rounds in which Alg answers 0 and let V^k_Alg = V_Alg ∩ V^k. We consider the important requests first. Fix 0 ≤ j ≤ ⌈log_s(n²)⌉. Let n^Opt_{m−j} = |V^{m−j}_Opt|, i.e., n^Opt_{m−j} is the number of requests in V^{m−j} which are also in the optimal solution V_Opt. By construction, we have n^Opt_{m−j} ≤ c|V^{m−j}_Alg|. Since the largest possible weight of a request in V^{m−j} is at most s times larger than the smallest possible weight of a request in V^{m−j}, this implies that w(V^{m−j}_Opt) ≤ s · c · w(V^{m−j}_Alg). Thus, we get that

\sum_{j=0}^{\lceil \log_s(n^2) \rceil} w(V^{m-j}_{\textsc{Opt}}) \leq \sum_{j=0}^{\lceil \log_s(n^2) \rceil} s \cdot c \cdot w(V^{m-j}_{\textsc{Alg}}) = s \cdot c \cdot \textsc{Alg}(\sigma). \qquad (9.2)

We now consider the unimportant requests. If ri is unimportant, then w(i) < s^{m−⌈log_s(n²)⌉} ≤ s^m/n² ≤ w(imax)/n² ≤ Opt(σ)/n². This implies that

\sum_{j=\lceil \log_s(n^2) \rceil + 1}^{\infty} w(V^{m-j}_{\textsc{Opt}}) \leq n \frac{\textsc{Opt}(\sigma)}{n^2} = \frac{\textsc{Opt}(\sigma)}{n}. \qquad (9.3)

We conclude that

\textsc{Opt}(\sigma) = w(V_{\textsc{Opt}}) = \sum_{j=0}^{\lceil \log_s(n^2) \rceil} w(V^{m-j}_{\textsc{Opt}}) + \sum_{j=\lceil \log_s(n^2) \rceil + 1}^{\infty} w(V^{m-j}_{\textsc{Opt}}).

By Eq. (9.3),

\left(1 - \frac{1}{n}\right) \textsc{Opt}(\sigma) \leq \sum_{j=0}^{\lceil \log_s(n^2) \rceil} w(V^{m-j}_{\textsc{Opt}}),

so by Eq. (9.2), Opt(σ) ≤ n/(n − 1) · s · c · Alg(σ).

Note that for n ≥ n0 = (2 + 2ε)/ε, (n/(n − 1))(1 + ε/2) ≤ (1 + ε). For inputs of length less than n0, the oracle writes an optimal solution onto the advice tape, using at most n0 bits. Since n0 ≤ 4/ε, we get b ∈ O(ε^{−1} log² n) as required. For inputs of length at least n0, we use the algorithm described above. Thus, for every input σ, it holds that Opt(σ) ≤ (1 + ε)c · Alg(σ). Since ε was arbitrary, this proves the theorem.

It may be surprising that adding weights to AOC-complete maximization problems has almost no effect, while adding weights to AOC-complete minimization problems drastically changes the advice complexity. In particular, one might wonder why the technique used in Theorem 53 does not work for minimization problems. The key difference lies in the beginning of the sequence. Let wmax be the largest weight of a request accepted by Opt.

For maximization problems, the algorithm can safely reject all requests before the first important one. For minimization problems, this approach does not work, since the algorithm must accept a superset of what Opt accepts in order to ensure that its output is feasible. Thus, rejecting an unimportant request that Opt accepts may result in an infeasible solution. This essentially means that the algorithm is forced to accept all requests before the first important request arrives. Accepting all unimportant requests is no problem, since they will not contribute significantly to the total cost. However, accepting even a single huge request can give an unbounded contribution to the algorithm’s cost. As shown in Theorem 51, it is not possible in general for the algorithm to tell whether a request in the beginning of the sequence is unimportant or huge without using a lot of advice.

However, if the ratio of the largest to the smallest weight is not too large, exponential sparsification is also useful for minimization problems in AOC. Essentially, when this ratio is bounded, it is possible for the algorithm to learn a good approximation of wmax when the first request arrives. This is formalized in Theorem 54, the proof of which is very similar to the proof of Theorem 53.

Theorem 54. If P ∈ AOC is a minimization problem and 0 < ε ≤ 1, then Pw with all weights in [wmin, wmax] has a (1 + ε)c-competitive algorithm with advice complexity at most

B(n, c) + O\left(\varepsilon^{-1} \log^2 n + \log\left(\varepsilon^{-1} \log \frac{w_{\max}}{w_{\min}}\right)\right).

Proof. Fix ε > 0. Let σ = 〈r1, w1, . . . , rn, wn〉 be the input and let x = x1 . . . xn ∈ {0, 1}^n specify an optimal solution for σ, with ones indicating membership in the optimal solution. Define s = 1 + ε/2. Let V_Opt = {i : xi = 1}. Note that V_Opt contains exactly those rounds in which Opt answers 1 and thus accepts. Furthermore, for k ∈ Z, let V^k = {i : s^k ≤ w(i) < s^{k+1}} and let V^k_Opt = V_Opt ∩ V^k. Finally, let imax ∈ V_Opt be such that w(imax) ≥ w(i) for every i ∈ V_Opt.

The oracle computes the unique m ∈ Z such that imax ∈ V^m_Opt. We say that a request ri is unimportant if w(i) < s^{m−⌈log_s(n²)⌉}, important if s^{m−⌈log_s(n²)⌉} ≤ w(i) < s^{m+1}, and huge if w(i) ≥ s^{m+1}.

When the first request arrives, the algorithm learns its weight, w1. Assume s^{m′} ≤ w1 < s^{m′+1}. The oracle writes the values n and m − m′ on the tape in a self-delimiting encoding. Using this, the algorithm can now calculate m. The number of advice bits needed to write m − m′ is O(log(m − m′)):

\log(m - m') \leq \log(\log_s w(i_{\max}) - \log_s w_1) + 1 \leq \log \log_s \frac{w_{\max}}{w_{\min}} + 1 \leq \log\left(2\varepsilon^{-1} \log \frac{w_{\max}}{w_{\min}}\right) + 1,

since log_s n ≤ 2ε^{−1} log n for ε/2 ≤ 1.

Note that since the length of m − m′ is not known, we need to use a self-delimiting encoding, which means that we use O(log n + log(ε^{−1} log(wmax/wmin))) advice bits at the beginning. This advice allows the algorithm to learn m as soon as the first important request arrives. From there on, the algorithm will know whether a request is important, unimportant, or huge. Whenever a huge request arrives, the algorithm answers 0 (rejects the request). When an unimportant request arrives, the algorithm answers 1 (accepts the request). We now describe how the algorithm and oracle work for the important requests.


For the important requests (whose indices are) in V^{m−j}, we use the covering design based c-competitive algorithm for unweighted minASG. This is similar to what we do in the proof of Theorem 53. The same calculations yield an upper bound on this advice of B(n, c) + O(log_s n · log n). Note that log_s(n) ≤ 2ε^{−1} log n for ε/2 ≤ 1, giving the bound on the advice in the statement of the theorem.

First, we note that the solution produced is feasible, since it is a superset of the solution of Opt.

We now argue that the cost of the solution is at most (1 + ε)c times the cost of Opt. Following the proof of Theorem 53 and switching the roles of Opt and Alg, we have by construction that the cost of the important requests for the algorithm is at most sc times larger than the cost for Opt on the important requests. For the huge requests, both this algorithm and Opt incur a cost of zero.

We now consider the unimportant requests. If ri is unimportant, then

w(i) < s^{m - \lceil \log_s(n^2) \rceil} \leq s^m/n^2 \leq w(i_{\max})/n^2 \leq \textsc{Opt}(\sigma)/n^2.

This implies that

\sum_{j=\lceil \log_s(n^2) \rceil + 1}^{\infty} w(V^{m-j}) \leq n \frac{\textsc{Opt}(\sigma)}{n^2} = \frac{\textsc{Opt}(\sigma)}{n}. \qquad (9.4)

Thus, even if the algorithm accepts all unimportant requests and Opt accepts none of them, it only accepts an additional Opt(σ)/n. In total, the algorithm incurs a cost of at most (1 + 1/n)(1 + ε/2)c · Opt(σ). For n ≥ n0 = (2 + ε)/ε, this is at most (1 + ε)c · Opt(σ). For inputs of length less than n0, the oracle will write an optimal solution onto the advice tape, using at most n0 bits. Since n0 ≤ 3/ε, we get b ∈ O(ε^{−1} log² n) as required. For inputs of length at least n0, we use the algorithm described above. Thus, for every input σ, it holds that Alg(σ) ≤ (1 + ε)c · Opt(σ).

9.5 Matching and Other Non-Complete AOC Problems

We first provide a general theorem that works for all maximization problems in AOC, giving better results in some cases than that in Theorem 53.

Theorem 55. Let P ∈ AOC be a maximization problem. If there exists a c-competitive P-algorithm reading b bits of advice, then there exists an O(c · log n)-competitive Pw-algorithm reading O(b + log n) bits of advice.

Proof. Use exponential sparsification on the weights with an arbitrary ε, say ε = 1/2, and let s = 1 + ε. For a given request sequence, σ, let wmax be the maximum weight that Opt_{Pw} accepts. The oracle computes the unique m ∈ Z such that wmax ∈ [s^m, s^{m+1}). The important requests are those with weight w, where s^{m−⌈log_s(n²)⌉} ≤ w < s^{m+1}.


We consider only the ⌈log_s(n²)⌉ + 1 important intervals, i.e., the intervals [s^i, s^{i+1}), m − ⌈log_s(n²)⌉ ≤ i ≤ m, and index them by i. Let k be the index of the interval of weights contributing the most weight to Opt_{Pw}(σ). The advice is a self-delimiting encoding of the index, j, of the first request with weight w ∈ [s^k, s^{k+1}), plus the advice used by the given c-competitive P-algorithm. This requires at most b + O(log n) bits of advice.

The algorithm rejects all requests before the jth. From the jth request, the algorithm calculates the index k. The algorithm accepts those requests which would be accepted by the P-algorithm when presented with the subsequence of σ consisting of the requests with weights in [s^k, s^{k+1}). Since, by exponential sparsification, Opt_{Pw} accepts total weight at most (1/n) · Opt_{Pw}(σ) from requests with unimportant weights, and it accepts at least as much from interval k as from any of the other ⌈log_s(n²)⌉ + 1 intervals considered, Opt_{Pw} accepts weight at least

\frac{\left(1 - \frac{1}{n}\right) \textsc{Opt}_{P_w}(\sigma)}{\lceil \log_s(n^2) \rceil + 1}

from interval k. The algorithm, Alg, described here accepts at least 1/c as many requests as Opt_{Pw} does in this interval, and each of the requests it accepts is at least a fraction 1/s as large as the largest weight in this interval. Thus,

c(1 + \varepsilon) \textsc{Alg}(\sigma) \geq \frac{1 - \frac{1}{n}}{\lceil \log_s(n^2) \rceil + 1} \textsc{Opt}_{P_w}(\sigma),

so Alg is O(c log n)-competitive.
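The algorithm just described is simple enough to state as code. In the sketch below (ours), unweighted_alg stands for the assumed c-competitive P-algorithm, which is consulted only on the selected subsequence, and j is the advised (0-indexed) position of the first request with weight in [s^k, s^{k+1}):

import math

def single_interval_alg(requests, j, s, unweighted_alg):
    # requests yields (request, weight) pairs; we yield 0 (accept)
    # or 1 (reject) for each of them.
    k = None
    for i, (r, w) in enumerate(requests):
        if i < j:
            yield 1                         # reject everything before j
            continue
        if k is None:
            k = math.floor(math.log(w, s))  # learn k from request j
        if s ** k <= w < s ** (k + 1):
            yield unweighted_alg.answer(r)  # delegate to P-algorithm
        else:
            yield 1                         # reject all other intervals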

In the online matching problem, edges arrive online, the algorithm must irrevocably accept or reject them as they arrive, and the goal is to maximize the number of edges accepted. The natural greedy algorithm for this problem is well known to be 2-competitive. In terms of advice, the problem is known to be in AOC, but is not AOC-complete [40]. We remark that a version of unweighted online matching with vertex arrivals (incomparable to our weighted matching with edge arrivals) has been studied with advice in [64].

Corollary 14. There exists an O(log n)-competitive algorithm for maximum weight matching reading O(log n) bits of advice.

Proof. The result follows from Theorem 55, since there exists a 2-competitive algorithm without advice for unweighted matching.
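For completeness, the greedy algorithm referred to simply accepts an arriving edge if and only if both endpoints are still unmatched; every rejected edge shares an endpoint with an accepted one, which is why no advice is needed for 2-competitiveness. A minimal sketch:

def greedy_matching(edges):
    # Accept edge (u, v) iff neither endpoint is matched yet.
    matched = set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            yield (u, v)  # accepted edges form the matching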

9.5.1 Lower bounds

First, we present a result which holds for the weighted versions of many maximization problems in AOC. It also holds for the weighted versions of AOC-complete minimization problems, but Theorem 52 gives a much stronger result.

Theorem 56. For the weighted online versions of Independent Set, Clique, Disjoint Path Allocation, and Matching, an algorithm reading o(log n) bits of advice cannot have a competitive ratio bounded by any function of n.

To prove Theorem 56, we start by proving the following lemma, from which the theorem easily follows.

Lemma 29. Let P ∈ AOC and suppose there exists a family (σn)n∈N of P-inputs with the following properties:

1. σ_n = ⟨r_1, r_2, ..., r_n⟩ consists of n requests.

2. σ_{n+1} is obtained by adding a single request to the end of σ_n.


3. If P is a maximization problem, the feasible solutions are those in which at most one request is accepted.

If P is a minimization problem, the feasible solutions are those in which at least one request is accepted.

Then, the competitive ratio of an algorithm for the weighted problem Pw reading o(log n) bits of advice is not bounded by any function of n.

Proof. Let Alg be a Pw-algorithm reading at most b = o(log n) bits of advice. Let f(n) > 0 be an arbitrary non-decreasing function of n. We will show that for all sufficiently large n, there exists an input of length n such that the profit obtained by Opt is at least f(n) times as large as the profit obtained by Alg. Since f(n) was arbitrary, it follows that the (non-strict) competitive ratio of Alg is not bounded by any function of n.

Since b = o(log n), there exists an N ∈ Z such that for any n ≥ N, Alg reads less than log(n) - 1 bits of advice on inputs of length at most n. Fix an n ≥ N. For 1 ≤ i ≤ n, define the Pw-input σ_i = ⟨(r_1, f(n)^1), (r_2, f(n)^2), ..., (r_i, f(n)^i)⟩, where the second component of each pair is the weight of the request. Consider the set of inputs σ_1, ..., σ_n. For every 1 ≤ i ≤ n, the number of advice bits read by Alg on the input σ_i is at most log(n) - 1 (since the length of the input σ_i is i ≤ n). Thus, by the pigeonhole principle, there must exist two integers n_1, n_2 with n_1 < n_2 such that Alg reads the same advice on σ_{n_1} and σ_{n_2}. If Alg rejects all requests in σ_{n_1}, then it achieves a profit of 0 while Opt obtains a profit of f(n)^{n_1}. If Alg accepts a request in σ_{n_1}, then it obtains a profit of at most f(n)^{n_1}. Since Alg reads the same advice on σ_{n_1} and σ_{n_2}, and since the two inputs are indistinguishable for the first n_1 requests, this means that Alg also obtains a profit of at most f(n)^{n_1} on the input σ_{n_2}. But Opt(σ_{n_2}) = f(n)^{n_2}, and hence Opt(σ_{n_2})/Alg(σ_{n_2}) ≥ f(n)^{n_2-n_1} ≥ f(n).

For minimization problems, we can use the same arguments and the input sequence σ_i = ⟨(r_1, f(n)^{-1}), (r_2, f(n)^{-2}), ..., (r_i, f(n)^{-i})⟩.

Proof of Theorem 56. For Independent Set, we can use the above lemma with a family of cliques (K_n)_{n∈N}, and for Clique, we can use a family of independent sets. For Matching, we can use a family of stars (K_{1,n})_{n∈N}. For Disjoint Path Allocation, we use a path P_{2n} = ⟨v_1, v_2, ..., v_{2n}⟩ and r_i = ⟨v_i, v_{i+1}, ..., v_{i+n}⟩.

Returning to the example of maximum weight matching, we now know that O(log n) bits suffice to be O(log n)-competitive, and that o(log n) bits of advice leads to a competitive ratio unbounded by any function of n. Furthermore, we have the following result:

Theorem 57. An O(1)-competitive algorithm for weighted matching must read at least Ω(n) bits of advice.

Proof. We prove the lower bound using a direct product theorem [128]. According to [128], it suffices to show that: (i) weighted matching is Σ-repeatable³; (ii) for every c, there exists a probability distribution p_c with finite support such that for every deterministic algorithm Det without advice, it holds that E_{p_c}[Opt(σ)] ≥ c · E_{p_c}[Det(σ)]. Also, there must be a finite upper bound on the profit an algorithm can obtain on an input in the support of p_c.

³See [128] for a definition of Σ-repeatable.



It is trivial to see that weighted matching is Σ-repeatable. Fix c ≥ 1 and let k = 2c. We define the probability distribution p_c by specifying a probabilistic adversary: The input graph will be a star K_{1,i} consisting of i edges, for some 1 ≤ i ≤ k. In round i, the adversary reveals the edge e_i = (v, v_i), where v_i is a new vertex and v is the center vertex of the star. The edge e_i has weight 2^i. If i < k, then with probability 1/2 the adversary will proceed to round i + 1, and with probability 1/2 the input sequence will end. If the adversary reaches round k, it always stops after revealing the edge e_k of that round. Note that the support of p_c and the largest profit an algorithm can obtain on any input in the support of p_c are both finite.

Let X be the random variable which denotes the number of edges revealed by the adversary. Note that Pr(X = j) = 2^{-j} if 1 ≤ j < k. Consequently,

Pr(X = k) = 1 - Pr(X < k) = 1 - Σ_{i=1}^{k-1} 2^{-i} = 2^{-(k-1)}.    (9.5)

Let Det be a deterministic algorithm without advice. We may assume that Det decides in advance on some 1 ≤ j ≤ k and accepts the edge e_j (the only other possible deterministic strategy is to never accept an edge, but this is always strictly worse than following any of the k strategies that accept an edge). If X < j, then the profit obtained by Det is zero. If X ≥ j, then Det obtains a profit of 2^j. It follows that

E[Det(σ)] = Pr(X ≥ j) · 2^j = (1 - Pr(X < j)) · 2^j = 2^{-(j-1)} · 2^j = 2.

The optimal algorithm Opt always accepts the last edge of the input. Thus, if X = j, then the profit of Opt is 2^j. It follows that

E[Opt(σ)] = Σ_{j=1}^{k} Pr(X = j) · 2^j = Σ_{j=1}^{k-1} (2^{-j} · 2^j) + 2^{-(k-1)} · 2^k = (k - 1) + 2 = k + 1.

Thus, we conclude that E[Opt(σ)] ≥ ((k + 1)/2) · E[Det(σ)] ≥ c · E[Det(σ)].
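As a sanity check on these two expectations, the following snippet (our own illustration, not part of the proof) evaluates them exactly with rational arithmetic:

```python
from fractions import Fraction

def expectations(c):
    """E[Det] for every fixed strategy j, and E[Opt], under p_c (k = 2c)."""
    k = 2 * c
    pr = {j: Fraction(1, 2**j) for j in range(1, k)}  # Pr(X = j), j < k
    pr[k] = Fraction(1, 2**(k - 1))                   # Pr(X = k)
    e_det = {j: sum(pr[x] for x in pr if x >= j) * 2**j for j in pr}
    e_opt = sum(pr[j] * 2**j for j in pr)
    return e_det, e_opt

e_det, e_opt = expectations(c=3)
assert all(v == 2 for v in e_det.values())  # every strategy earns 2 in expectation
assert e_opt == 2 * 3 + 1                   # E[Opt] = k + 1
```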

In particular, we cannot achieve a constant competitive ratio using O(log n) bits of advice for weighted matching. We leave it as an open problem to close the gap between ω(1) and O(log n) on the competitive ratio of matching algorithms with advice complexity O(log n).

9.6 Scheduling with Sublinear Advice

For the scheduling problems studied, the requests are jobs, each characterized by its size. Each job must be assigned to one of m available machines. If the machines are identical, the load of a job on any machine is simply its size. If the machines are related, each machine has a speed, and the load of a job, J, assigned to a machine with speed s is the size of J divided by s. If the machines are unrelated, each job arrives with a vector specifying its load on each machine.


Consider a sequence σ = ⟨r_1, ..., r_n⟩ of n jobs that arrive online. Each job r_i ∈ σ has an associated weight-function w_i : [m] → R₊. Upon arrival, a job must irrevocably be assigned to one of the m machines. The load L_j of a machine j ∈ [m] is defined as L_j = Σ_{i∈M_j} w_i(j), where M_j is the set of jobs scheduled on machine j. The total load of a schedule for σ is the vector L = (L_1, ..., L_m). We say that (L_1, ..., L_m) ≤ (L′_1, ..., L′_m) if and only if L_i ≤ L′_i for 1 ≤ i ≤ m. A scheduling problem of the above type is specified by an objective function f : R₊^m → R₊ and by specifying if the goal is to minimize or maximize f(L) = f(L_1, ..., L_m) ∈ R₊. We assume that f is non-decreasing, i.e., f(L) ≤ f(L′) for all L ≤ L′. Some of the classical choices of objective function include the following (a short code sketch illustrating them is given after the list):

• Minimizing the ℓ_p-norm f_p(L) = f_p(L_1, ..., L_m) = ‖(L_1, ..., L_m)‖_p for some 1 ≤ p ≤ ∞. That is, for 1 ≤ p < ∞, the goal is to minimize (Σ_{j∈[m]} L_j^p)^{1/p}, and for p = ∞, the goal is to minimize the makespan max_{j∈[m]} L_j.

• Maximizing the minimum load f(L) = min_{j∈[m]} L_j. This is also known as machine covering. Note that this objective function is not a norm⁴, but it does satisfy f(αL) = αf(L) for every α ≥ 0 and L ∈ R₊^m.

⁴f is a norm if f(αv) = |α|f(v), f(u + v) ≤ f(u) + f(v), and f(v) = 0 ⇒ v = 0.
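To fix the notation, here is a minimal sketch (ours) of the load vector and the two objectives above:

```python
def loads(schedule, weights, m):
    """Load vector L = (L_1, ..., L_m); schedule[i] is the machine of
    job i, and weights[i][j] is its load on machine j."""
    L = [0.0] * m
    for i, j in enumerate(schedule):
        L[j] += weights[i][j]
    return L

def lp_norm(L, p):
    """l_p-norm objective; p = inf gives the makespan."""
    return max(L) if p == float("inf") else sum(x**p for x in L) ** (1 / p)

def machine_cover(L):
    """Machine covering objective: the minimum load."""
    return min(L)
```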

We begin with a result for unrelated machines.

Theorem 58. Let P be a scheduling problem on m unrelated machines where the goal is to minimize an objective function f. Assume that f is a norm. Then, for 0 < ε ≤ 1, there exists a (1 + ε)-competitive P-algorithm reading O((4ε^{-1} log(n) + 2)^m log(n)) bits of advice. In particular, if m = O(1) and ε = Ω(1), then there exists a (1 + ε)-competitive algorithm reading O(polylog(n)) bits of advice.

Proof. Since the objective function f is a norm on R^m, we will denote it by ‖·‖. Fix an input sequence σ. The oracle starts by computing an optimal schedule for σ. The oracle uses O(log n) bits to encode n using a self-delimiting encoding.
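One standard way to realize such a self-delimiting encoding (a sketch of our own; the proof does not depend on this particular choice) is to prefix the binary representation of n with its length in unary:

```python
def self_delimiting(n):
    """Encode n as unary(len(bin(n))) + '0' + bin(n): O(log n) bits,
    and a reader can tell where the encoding ends."""
    b = bin(n)[2:]
    return "1" * len(b) + "0" + b

def decode(tape):
    """Read one self-delimited integer; return it and the unread rest."""
    length = tape.index("0")
    return int(tape[length + 1 : 2 * length + 1], 2), tape[2 * length + 1:]

tape = self_delimiting(42) + "0110"
assert decode(tape) == (42, "0110")
```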

Let s = 1 + ε/2, let 1_j be the j'th unit vector (the vector with a 1 in the j'th coordinate and 0 elsewhere), and let L_Opt be the load-vector of the fixed optimal schedule. Thus, Opt(σ) = ‖L_Opt‖. Let k be the unique integer such that s^k ≤ ‖L_Opt‖ < s^{k+1}. A job r_i ∈ σ is said to be unimportant if there exists a machine j ∈ [m] such that ‖w_i(j)1_j‖ < s^{k-⌈log_s(n²)⌉}. A job which is not unimportant is important. The oracle writes the index i′ of the first important job r_{i′} onto the advice tape (or indicates that σ contains no important jobs) using ⌈log n⌉ + 1 bits. Let j′ be the machine minimizing ‖w_{i′}(j′)1_{j′}‖, where ties are broken arbitrarily. The oracle also writes ∆_{i′} = k - k_{i′}, where k_{i′} is the unique integer such that s^{k_{i′}} ≤ ‖w_{i′}(j′)1_{j′}‖ < s^{k_{i′}+1}, onto the advice tape using O(log log_s n) bits.

Scheduling unimportant jobs. If a job r_i ∈ σ is unimportant, then the algorithm schedules the job on the machine j minimizing ‖w_i(j)1_j‖, where ties are broken arbitrarily. We now explain how the algorithm knows if a job is unimportant or not. If r_i ∈ σ is a job that arrives before the first important job, i.e., if i < i′, then r_i is unimportant by definition. When job r_{i′} arrives, the algorithm can deduce k, since it knows ∆_{i′} from the advice and since it can compute min_j ‖w_{i′}(j)1_j‖ without help. Knowing k (and the number of jobs n), the algorithm is able to tell if a job is unimportant or not.




Scheduling important jobs. We now describe how the algorithm schedules the important jobs. To this end, we define the type of an important job. For an important job r_i, let ∆_i(1), ..., ∆_i(m) be defined as follows: For 1 ≤ j ≤ m, if there exists an integer k_i(j) ≤ k such that s^{k_i(j)} ≤ ‖w_i(j)1_j‖ < s^{k_i(j)+1}, then ∆_i(j) = k - k_i(j) (since r_i is important, ∆_i(j) ≤ ⌈log_s(n²)⌉). If no such integer exists, then it must be the case that ‖w_i(j)1_j‖ ≥ s^{k+1} > ‖L_Opt‖. In this case, we let ∆_i(j) = ⊥ be a dummy symbol. The type of r_i is the vector ∆_i = (∆_i(1), ..., ∆_i(m)). Note that there are only (⌈log_s(n²)⌉ + 2)^m different types. For each possible type ∆ = (∆(1), ..., ∆(m)), the oracle writes the number, a_∆, of jobs of type ∆ onto the advice tape. This requires at most (⌈log_s(n²)⌉ + 2)^m ⌈log(n)⌉ bits of advice.
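The following sketch (ours, with w[j] standing in for ‖w_i(j)1_j‖) makes the type computation concrete:

```python
import math

BOT = None  # the dummy symbol ⊥

def job_type(w, k, s, n):
    """Type vector of an important job, as defined above."""
    delta = []
    for wj in w:
        kj = math.floor(math.log(wj, s))  # s^kj <= wj < s^(kj+1)
        if kj <= k:
            # important jobs never fall below the cut-off
            assert k - kj <= math.ceil(math.log(n**2, s))
            delta.append(k - kj)
        else:
            delta.append(BOT)             # wj >= s^(k+1) > ||L_Opt||
    return tuple(delta)
```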

Note that since ‖·‖ is a norm, if r_i ∈ σ is of type ∆_i = (∆_i(1), ..., ∆_i(m)), then s^{k-∆_i(j)}‖1_j‖^{-1} ≤ w_i(j) ≤ s^{k-∆_i(j)+1}‖1_j‖^{-1} if ∆_i(j) ≠ ⊥, and w_i(j) > ‖L_Opt‖‖1_j‖^{-1} if ∆_i(j) = ⊥. The algorithm computes an optimal schedule S_imp for the input σ̄ which for each possible type ∆ contains a_∆ jobs with weight-function w̄_∆, where w̄_∆(j) = s^{k-∆(j)+1}‖1_j‖^{-1} if ∆(j) ≠ ⊥ and w̄_∆(j) = ∞ otherwise. This choice of weight-function ensures that if r_i ∈ σ is a job of type ∆_i, then for each j with ∆_i(j) ≠ ⊥,

w_i(j) < w̄_{∆_i}(j) ≤ s · w_i(j).    (9.6)

When an important job of σ arrives, the algorithm computes the type of the job. Based solely on this type, the algorithm schedules the important jobs in σ by following the schedule S_imp for σ̄. Let L_imp be the load-vector of the important jobs of σ scheduled by Alg. Note that by Eq. (9.6), the weight-function of an important job of σ is strictly smaller (for all machines) than the weight-function of the corresponding job of σ̄. Thus, ‖L_imp‖ is bounded from above by the cost of the schedule S_imp for σ̄.

Putting it all together. The fixed optimal schedule for σ induces a scheduling of σ̄. Let L̄ be the load-vector of this schedule. By Eq. (9.6), we get that ‖L̄‖ ≤ s‖L_Opt‖. Thus, the cost of S_imp (which was an optimal scheduling of σ̄) is at most ‖L̄‖ ≤ s‖L_Opt‖.

Let L_unimp be the load-vector of the unimportant jobs scheduled by Alg. Furthermore, let M_j be the set of indices of the unimportant jobs scheduled by Alg on machine j. By subadditivity,

‖L_unimp‖ = ‖Σ_{j=1}^{m} Σ_{i∈M_j} w_i(j)1_j‖ ≤ Σ_{j=1}^{m} Σ_{i∈M_j} ‖w_i(j)1_j‖ < Σ_{j=1}^{m} Σ_{i∈M_j} s^{k-⌈log_s(n²)⌉} ≤ n · ‖L_Opt‖/n² ≤ ‖L_Opt‖/n.

We are finally able to bound the cost of the entire schedule S_imp ∪ S_unimp created by Alg:

Alg(σ) = ‖L_imp + L_unimp‖ ≤ ‖L_imp‖ + ‖L_unimp‖ ≤ (s + 1/n)‖L_Opt‖.


Recall that s = 1 + ε/2. Thus, if n ≥ 2/ε, then Alg(σ) ≤ (1 + ε)Opt(σ). For inputs of length less than 2/ε, the oracle can simply encode the optimal solution using at most (2/ε)⌈log m⌉ bits of advice. The total amount of advice used by our algorithm is at most

(⌈log_s(n²)⌉ + 2)^m ⌈log(n)⌉ + O(log n + log log_s n) = O((4ε^{-1} log(n) + 2)^m log(n)).

For the following discussion, assume that ε = Θ(1). We remark that the (1 + ε)-competitive algorithm in Theorem 58 is only of interest if the number of machines m is small compared to the number of jobs n. As already noted, the most interesting aspect of Theorem 58 is that our algorithm uses only polylog(n) bits of advice if m is a constant. More generally, if m = o(log n / log log n), then our algorithm will use o(n) bits of advice. On the other hand, if m = Θ(log n), then our algorithm uses Ω(log(n)^{log n}) bits of advice, which is worse than the trivial 1-competitive algorithm that uses n⌈log m⌉ = O(n log log n) bits of advice when m = Θ(log n).

The advice complexity of the algorithm in Theorem 58 depends on the number of machines m because we want the result to hold even when the machines are unrelated. We now show that when restricting to related machines, we can obtain a (1 + ε)-competitive algorithm using O(ε^{-1} log² n) bits of advice, independent of the number of machines. The proof resembles that of Theorem 58. The main difference is that we are able to reduce the number of types to O(log² n).

Theorem 59. Let P be a scheduling problem on m related machines where the goal is to minimize an objective function f. Assume that f is a norm. Then, for 0 < ε ≤ 1, there exists a (1 + ε)-competitive P-algorithm with advice complexity O(ε^{-1} log² n).

Proof. Since the objective function f is a norm on R^m, we will denote it by ‖·‖. Fix an input sequence σ. The oracle starts by computing an optimal schedule for σ. Let s = 1 + ε/2. The oracle uses O(log n) bits to encode n using a self-delimiting encoding.

Let C_1, ..., C_m be the speeds of the m machines. Assume without loss of generality that ‖1_j‖/C_j attains its minimum value when j = 1. Define B = ‖1_1‖/C_1. Let L_Opt be the load-vector of the fixed optimal schedule. Thus, Opt(σ) = ‖L_Opt‖. Let k be the unique integer such that s^k ≤ ‖L_Opt‖ < s^{k+1}. A job r_i ∈ σ is said to be unimportant if w_i B < s^{k-⌈log_s(n²)⌉}. A job which is not unimportant is important. Note that w_i B is always bounded from above by ‖L_Opt‖, since r_i must be placed on some machine. The oracle writes the index i′ of the first important job r_{i′} onto the advice tape (or indicates that σ contains no important jobs) using ⌈log n⌉ + 1 bits. The oracle also writes the unique integer k′ such that s^{k-k′} ≤ w_{i′} B < s^{k-k′+1} onto the advice tape, using O(log log_s(n)) bits.

We now explain how the algorithm knows if a job is unimportant or not. If r_i ∈ σ is a job that arrives before the first important job, i.e., if i < i′, then r_i is unimportant by definition. When job r_{i′} arrives, the algorithm can deduce k, since it knows k′ from the advice and since it can compute w_{i′} B without help.


Knowing k (and the number of jobs n), the algorithm is able to tell if a job is unimportant or not.

If a job r_i ∈ σ is unimportant, then the algorithm schedules the job on machine 1.

Scheduling important jobs. We now describe how the algorithm schedules the important jobs. To this end, we define the type of an important job. The type of an important job r_i is the non-negative integer t_i such that s^{k-t_i} ≤ w_i B < s^{k-t_i+1}. Note that there are only ⌈log_s(n²)⌉ + 1 different types. For each possible type 0 ≤ t ≤ ⌈log_s(n²)⌉, the oracle writes the number of jobs a_t of that type onto the advice tape. This requires at most O(log_s(n²) · log(n)) bits of advice.

Note that since ‖·‖ is a norm, if r_i ∈ σ is of type t_i, then s^{k-t_i} B^{-1} ≤ w_i ≤ s^{k-t_i+1} B^{-1}. The algorithm computes an optimal schedule S_imp for the input σ̄ which for each possible type 0 ≤ t ≤ ⌈log_s(n²)⌉ contains a_t jobs with weight w̄_t = s^{k-t+1} B^{-1}. This choice of weight ensures that if r_i ∈ σ is a job of type t_i, then

w_i < w̄_{t_i} ≤ s · w_i.    (9.7)

When an important job of σ arrives, the algorithm computes the type of the job. Based solely on this type, the algorithm schedules the important jobs in σ by following the schedule S_imp for σ̄. Let L_imp be the load-vector of the important jobs of σ scheduled by Alg. Note that by Eq. (9.7), the weight of an important job of σ is strictly smaller than the weight of the corresponding job of σ̄. Thus, ‖L_imp‖ is bounded from above by the cost of the schedule S_imp for σ̄.

Putting it all together. The fixed optimal schedule for σ induces a scheduling of σ̄. Let L̄ be the load-vector of this schedule. By Eq. (9.7), we get that L̄ ≤ s·L_Opt. Thus, the cost of S_imp (which was an optimal scheduling of σ̄) is at most ‖L̄‖ ≤ s‖L_Opt‖.

Let W_u be the total weight of the unimportant jobs scheduled on machine 1 by Alg. We have that

‖(W_u/C_1)1_1‖ = W_u B ≤ n · s^{k-⌈log_s(n²)⌉} ≤ n‖L_Opt‖/n² ≤ ‖L_Opt‖/n.

We are finally able to bound the cost of the entire schedule S_imp ∪ S_unimp created by Alg:

Alg(σ) = ‖L_imp + (W_u/C_1)1_1‖ ≤ ‖L_imp‖ + ‖(W_u/C_1)1_1‖ ≤ (s + 1/n)‖L_Opt‖.

Recall that s = 1 + ε/2. Thus, if n > 2/ε, then Alg(σ) ≤ (1 + ε)Opt(σ). For inputs of length less than 2/ε, the oracle can simply encode the optimal solution using at most (2/ε)⌈log m⌉ bits of advice. The total amount of advice used by our algorithm is O(ε^{-1} log² n).

We now consider scheduling problems where the goal is to maximize an objective function f. Recall that we assume that the objective function is non-decreasing. The most notable example is when f is the minimum load. In the following theorem, we show how to schedule almost optimally on unrelated machines with only a rather weak constraint on f (weaker than f being a norm).


Theorem 60. Let P be a scheduling problem on m unrelated machines where the goal is to maximize an objective function f. Assume that f(αL) ≤ αf(L) for every α ≥ 0 and L ∈ R₊^m. Then, for every 0 < ε ≤ 1, there exists a (1 + ε)-competitive P-algorithm with advice complexity O((4ε^{-1} log(n) + 2)^m m² log n). In particular, if m = O(1) and ε = Ω(1), the advice complexity is O(polylog(n)).

Proof. Fix an input sequence σ and an optimal schedule. Let s = 1 + ε/2. The oracle uses O(log n) bits to encode n using a self-delimiting encoding.

For 1 ≤ j ≤ m, let L_j be the load on machine j in the optimal schedule. Furthermore, let k_j be the unique integer such that s^{k_j} ≤ L_j < s^{k_j+1}. We say that a job r_i is unimportant to machine j if w_i(j) < s^{k_j-⌈log_s(n²)⌉}, important to machine j if s^{k_j-⌈log_s(n²)⌉} ≤ w_i(j) < s^{k_j+1}, and huge to machine j if w_i(j) ≥ s^{k_j+1}. Note that if r_i is huge to machine j, Opt does not schedule r_i on machine j. A job which is important to at least one machine is called important. All other jobs are called unimportant. Note that any unimportant job is unimportant to the machine where it is scheduled by Opt. We number the machines such that the first job which is important to machine j arrives no later than the first job which is important to machine j′, for every j < j′.

The algorithm works in m + 1 phases (some of which might be empty). Phase 0 begins when the first request arrives. For 1 ≤ j ≤ m, phase j − 1 ends and phase j begins when the first important job for machine j arrives. Note that the same job could be the first important job for more than one machine. Phase m ends with the last request of σ. For each phase, j, the oracle writes the index, i_j, of the request starting the phase and the unique integer ∆_{i_j}(j) such that s^{k_j-∆_{i_j}(j)} ≤ w_{i_j}(j) < s^{k_j-∆_{i_j}(j)+1}.

Each unimportant job is scheduled on a machine where it has highest load. Such a job may be huge for the chosen machine, but this is no problem. Thus, in phase 0, all jobs are scheduled where they have highest load. We now describe how the algorithm schedules the important jobs in phase j, for 1 ≤ j ≤ m. By definition, at any point in phase j, we have received an important job for machines 1, 2, ..., j and no important job for machine j + 1 has yet arrived.

The type of a job r_i in phase j is a vector ∆_i = (∆_i(1), ..., ∆_i(j)), where ∆_i(j′) is the interval of r_i on machine j′ (so s^{k_{j′}-∆_i(j′)} ≤ w_i(j′) < s^{k_{j′}-∆_i(j′)+1}), or ⊥ if the job is not important to machine j′. Note that there are only (⌈log_s(n²)⌉ + 2)^j possible job types in phase j. The oracle considers how the jobs in phase j are scheduled in the fixed optimal schedule. For each job type ∆ and each machine 1 ≤ j′ ≤ j, the oracle encodes the number of jobs of that type which are scheduled on machine j′ during phase j. This can be done using O((⌈log_s(n²)⌉ + 2)^m m log n) bits of advice for a single phase, and O((⌈log_s(n²)⌉ + 2)^m m² log n) bits of advice for all m phases.

Equipped with the advice described above, the algorithm simply schedules the important jobs in the current phase based on their types. This ensures that, for each machine j, the total load of important jobs that Opt schedules on machine j is at most s times as large as the total load of important jobs scheduled by Alg on machine j (since if r_i and r_{i′} are important to machine j and of the same type, then w_i(j) < s · w_{i′}(j)).


In order to finish the proof, we need to show that the contribution of unimportant jobs to Opt(σ) is negligible. To this end, let L_j^unimp (resp. L_j^imp) be the load on machine j of the unimportant (resp. important) jobs scheduled on that machine in the optimal schedule. Note that L_j = L_j^unimp + L_j^imp. By the definition of an unimportant job (and since there trivially can be no more than n unimportant jobs), we find that for every 1 ≤ j ≤ m,

L_j^unimp < n · s^{-⌈log_s(n²)⌉} · L_j ≤ L_j/n.

Thus, L_j = L_j^unimp + L_j^imp ≤ L_j/n + L_j^imp, from which L_j ≤ (n/(n-1)) · L_j^imp follows, assuming that n > 1. Since this holds for all machines, and since, as previously argued, L_Opt^imp ≤ s · L_Alg^imp ≤ s · L_Alg, we get that

L_Opt ≤ (n/(n-1)) · L_Opt^imp ≤ s · (n/(n-1)) · L_Alg.

By assumption, the objective function f satisfies f(αL) ≤ αf(L) and is non-decreasing. Thus, we conclude that

Opt(σ) = f(L_Opt) ≤ f(s · (n/(n-1)) · L_Alg) ≤ s · (n/(n-1)) · f(L_Alg) = s · (n/(n-1)) · Alg(σ).

For n ≥ 2 + 2/ε, this gives a ratio of at most 1 + ε.


Appendix: AOC-Complete Problems

For completeness, we state the full definition of minASGk from [40]:

Definition 24 ([40]). The minimum asymmetric string guessing problem with known history, minASGk, has input ⟨?, x_1, ..., x_n⟩, where x = x_1...x_n ∈ {0, 1}^n, for some n ∈ N. For 1 ≤ i ≤ n, round i proceeds as follows:

1. If i > 1, the algorithm learns the correct answer, x_{i-1}, to the request in the previous round.

2. The algorithm answers y_i = f(x_1, ..., x_{i-1}) ∈ {0, 1}, where f is a function defined by the algorithm.

The output y = y_1...y_n computed by the algorithm is feasible if x ⊑ y, that is, if y_i = 1 whenever x_i = 1. Otherwise, y is infeasible. The cost of a feasible output is |y|_1, the number of 1s in y, and the cost of an infeasible output is ∞.
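A small sketch (ours) of the cost function makes the asymmetry explicit: guessing 0 on a 1 is fatal, while every 1 that is output costs one unit:

```python
import math

def cost(x, y):
    """minASGk cost of output y on input string x."""
    if any(xi == "1" and yi == "0" for xi, yi in zip(x, y)):
        return math.inf          # infeasible: a 1 was guessed as 0
    return y.count("1")

assert cost("0110", "1111") == 4          # always feasible, but expensive
assert cost("0110", "0110") == 2          # optimal: |x|_1
assert cost("0110", "0100") == math.inf   # missed a 1
```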

In addition to minASGk, the class of AOC-complete problems also contains many graph problems. The following four graph problems are studied in the vertex-arrival model, so the requests are vertices, each presented together with its edges to previous vertices. The first three problems are minimization problems and the last one is a maximization problem. In Vertex Cover, an algorithm must accept a set of vertices which constitute a vertex cover, so for every edge in the requested graph, at least one of its endpoints is accepted. For Dominating Set, the accepted vertices must constitute a dominating set, so every vertex in the requested graph must be accepted, or one of its neighbors must be accepted. In Cycle Finding, an algorithm must accept a set of vertices inducing a cyclic graph. For Independent Set, the accepted vertices must form an independent set, i.e., no two accepted vertices share an edge.

For Disjoint Path Allocation a path P is given, and the requests are subpathsof P . The aim is to accept as many edge disjoint paths as possible.

For Set Cover, the requests are finite subsets from a known universe, and theunion of the accepted subsets must be the entire universe. The aim is to acceptas few subsets as possible.


Bibliography

[1] Dimitris Achlioptas, Marek Chrobak, and John Noga. Competitive analysis of randomized paging algorithms. Theor. Comput. Sci., 234(1–2):203–218, 2000. Preliminary version in ESA'96. doi:10.1016/S0304-3975(98)00116-9.

[2] Anna Adamaszek, Marc P. Renault, Adi Rosén, and Rob van Stee. Reordering buffer management with advice. J. Sched., 2016. Preliminary version in WAOA'13. doi:10.1007/s10951-016-0487-8.

[3] Susanne Albers and Matthias Hellwig. Semi-online scheduling revisited. Theor. Comput. Sci., 443:1–9, 2012. doi:10.1016/j.tcs.2012.03.031.

[4] Susanne Albers and Matthias Hellwig. Online makespan minimization with parallel schedules. In SWAT, volume 8503 of LNCS, pages 13–25, 2014. doi:10.1007/978-3-319-08404-6_2.

[5] Robert B. Allan and Renu Laskar. On domination and independent domination numbers of a graph. Discrete Mathematics, 23(2):73–76, 1978.

[6] Noga Alon, Baruch Awerbuch, Yossi Azar, Niv Buchbinder, and Joseph Naor. The online set cover problem. SIAM J. Comput., 39(2):361–370, 2009.

[7] Spyros Angelopoulos, Reza Dorrigiv, and Alejandro López-Ortiz. On the separation and equivalence of paging strategies. In 18th ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 229–237, 2007.

[8] Spyros Angelopoulos, Christoph Dürr, Shahin Kamali, Marc P. Renault, and Adi Rosén. Online bin packing with advice of small size. In WADS, volume 9214 of LNCS, pages 40–53, 2015. doi:10.1007/978-3-319-21840-3_4.

[9] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, 2009.

[10] Yossi Azar and Leah Epstein. On-line machine covering. J. Sched., 1(2):67–77, 1997. Preliminary version in ESA'97. doi:10.1002/(SICI)1099-1425(199808)1:2<67::AID-JOS6>3.0.CO;2-Y.

[11] Yossi Azar and Oded Regev. On-line bin-stretching. Theor. Comput. Sci., 268:17–41, 2001. Preliminary version in RANDOM'98. doi:10.1016/S0304-3975(00)00258-9.


[12] János Balogh, József Békési, and Gábor Galambos. New lower bounds for certain classes of bin packing algorithms. Theor. Comput. Sci., 440–441:1–13, 2012. Preliminary version in WAOA'10. doi:10.1016/j.tcs.2012.04.017.

[13] Nikhil Bansal, Niv Buchbinder, Aleksander Mądry, and Joseph Naor. A polylogarithmic-competitive algorithm for the k-server problem. J. ACM, 62(5):40:1–40:49, 2015. Preliminary version in FOCS'11. doi:10.1145/2783434.

[14] Amotz Bar-Noy, Rajeev Motwani, and Joseph Naor. The greedy algorithm is optimal for on-line edge coloring. Inform. Process. Lett., 44:251–253, 1992. doi:10.1016/0020-0190(92)90209-E.

[15] Kfir Barhum. Tight bounds for the advice complexity of the online minimum Steiner tree problem. In SOFSEM, volume 8327 of LNCS, pages 77–88, 2014. doi:10.1007/978-3-319-04298-5_8.

[16] Kfir Barhum, Hans-Joachim Böckenhauer, Michal Forišek, Heidi Gebauer, Juraj Hromkovič, Sacha Krug, Jasmin Smula, and Björn Steffen. On the power of advice and randomization for the disjoint path allocation problem. In SOFSEM, volume 8327 of LNCS, pages 89–101, 2014. doi:10.1007/978-3-319-04298-5_9.

[17] Yair Bartal, Amos Fiat, and Stefano Leonardi. Lower bounds for on-line graph problems with application to on-line circuit and optical routing. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, STOC '96, pages 531–540, New York, NY, USA, 1996. ACM. doi:10.1145/237814.238001.

[18] Barun Gorain and Andrzej Pelc. Deterministic graph exploration with advice. ArXiv, 2016. arXiv:1607.01657 [cs.DS]. URL: http://arxiv.org/abs/1607.01657.

[19] Laszlo A. Belady. A study of replacement algorithms for a virtual-storage computer. IBM Systems Journal, 5(2):78–101, 1966. doi:10.1147/sj.52.0078.

[20] Shai Ben-David, Allan Borodin, Richard M. Karp, Gábor Tardos, and Avi Wigderson. On the power of randomization in on-line algorithms. Algorithmica, 11(1):2–14, 1994. Preliminary version in STOC'90. doi:10.1007/BF01294260.

[21] Maria Paola Bianchi, Hans-Joachim Böckenhauer, Tatjana Brülisauer, Dennis Komm, and Beatrice Palano. Online minimum spanning tree with advice. In SOFSEM, volume 9587 of LNCS, pages 195–207, 2016. doi:10.1007/978-3-662-49192-8_16.

[22] Maria Paola Bianchi, Hans-Joachim Böckenhauer, Juraj Hromkovič, and Lucia Keller. Online coloring of bipartite graphs with and without advice. Algorithmica, 70(1):92–111, 2014. Preliminary version in COCOON'12. doi:10.1007/s00453-013-9819-7.


[23] Maria Paola Bianchi, Hans-Joachim Böckenhauer, Juraj Hromkovič, Sacha Krug, and Björn Steffen. On the advice complexity of the on-line L(2, 1)-coloring problem on paths and cycles. Theor. Comput. Sci., 554:22–39, 2014. Preliminary version in COCOON'13. doi:10.1016/j.tcs.2014.06.027.

[24] Avrim Blum and Carl Burch. On-line learning and the metrical task system problem. Machine Learning, 39(1):35–58, 2000. Preliminary version in COLT'97. doi:10.1023/A:1007621832648.

[25] Hans-Joachim Böckenhauer, Richard Dobson, Sacha Krug, and Kathleen Steinhöfel. On energy-efficient computations with advice. In COCOON, volume 9198 of LNCS, pages 747–758, 2015. doi:10.1007/978-3-319-21398-9_58.

[26] Hans-Joachim Böckenhauer, Juraj Hromkovič, and Dennis Komm. A technique to obtain hardness results for randomized online algorithms – A survey. In Computing with New Resources, volume 8808 of LNCS, pages 264–276, 2014. doi:10.1007/978-3-319-13350-8_20.

[27] Hans-Joachim Böckenhauer, Juraj Hromkovič, Dennis Komm, Sacha Krug, Jasmin Smula, and Andreas Sprock. The string guessing problem as a method to prove lower bounds on the advice complexity. Theor. Comput. Sci., 554:95–108, 2014. Preliminary version in COCOON'13. doi:10.1016/j.tcs.2014.06.006.

[28] Hans-Joachim Böckenhauer, Dennis Komm, Rastislav Královič, and Richard Královič. On the advice complexity of the k-server problem. In ICALP (1), volume 6755 of LNCS, pages 207–218, 2011. doi:10.1007/978-3-642-22006-7_18.

[29] Hans-Joachim Böckenhauer, Dennis Komm, Rastislav Královič, Richard Královič, and Tobias Mömke. On the advice complexity of online problems. In ISAAC, volume 5878 of LNCS, pages 331–340, 2009. doi:10.1007/978-3-642-10631-6_35.

[30] Hans-Joachim Böckenhauer, Dennis Komm, Richard Královič, and Peter Rossmanith. The online knapsack problem: Advice and randomization. Theor. Comput. Sci., 527:61–72, 2014. Preliminary version in LATIN'12. doi:10.1016/j.tcs.2014.01.027.

[31] Hans L. Bodlaender. On the complexity of some coloring games. Internat. J. Found. Comput. Sci., 2:133–147, 1989.

[32] Martin Böhm. Lower bounds for online bin stretching with several bins. In Student Research Forum Papers and Posters at SOFSEM, volume 1548 of CEUR Workshop Proceedings, pages 1–12, 2016. URL: http://ceur-ws.org/Vol-1548/001-Bohm.pdf.

[33] Martin Böhm and Pavel Veselý. Online chromatic number is PSPACE-complete. In Combinatorial Algorithms – 27th International Workshop, IWOCA 2016, Helsinki, Finland, August 17–19, 2016, Proceedings, pages 16–28, 2016. doi:10.1007/978-3-319-44543-4_2.


[34] Allan Borodin and Ran El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.

[35] Allan Borodin and Ran El-Yaniv. On randomization in on-line computation. Information and Computation, 150(2):244–267, 1999. doi:10.1006/inco.1998.2775.

[36] Allan Borodin, Nathan Linial, and Michael E. Saks. An optimal on-line algorithm for metrical task system. J. ACM, 39(4):745–763, 1992. Preliminary version in STOC'87. doi:10.1145/146585.146588.

[37] Joan Boyar, Leah Epstein, Lene M. Favrholdt, Kim S. Larsen, and Asaf Levin. Online bounded analysis. In CSR, volume 9691 of LNCS, pages 131–145, 2016. doi:10.1007/978-3-319-34171-2_10.

[38] Joan Boyar and Lene M. Favrholdt. The relative worst order ratio for online algorithms. ACM Transactions on Algorithms, 3(2), 2007.

[39] Joan Boyar, Lene M. Favrholdt, Christian Kudahl, Kim S. Larsen, and Jesper W. Mikkelsen. Online algorithms with advice: A survey. SIGACT News, 47(3):93–129, August 2016. doi:10.1145/2993749.2993766.

[40] Joan Boyar, Lene M. Favrholdt, Christian Kudahl, and Jesper W. Mikkelsen. Advice complexity for a class of online problems. In STACS, volume 30 of LIPIcs, pages 116–129. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2015. Full paper to appear in Theor. Comput. Syst. doi:10.4230/LIPIcs.STACS.2015.116.

[41] Joan Boyar, Lene M. Favrholdt, Christian Kudahl, and Jesper W. Mikkelsen. Weighted online problems with advice. In IWOCA, volume 9843 of LNCS, pages 170–190. Springer, 2016. doi:10.1007/978-3-319-44543-4_14.

[42] Joan Boyar, Lene Monrad Favrholdt, and Paul Medvedev. The relative worst order ratio of online bipartite graph coloring. Unpublished manuscript.

[43] Joan Boyar, Sandy Irani, and Kim S. Larsen. A comparison of performance measures for online algorithms. Algorithmica, 72(4):969–994, 2015. Preliminary version in WADS'09. doi:10.1007/s00453-014-9884-6.

[44] Joan Boyar, Shahin Kamali, Kim S. Larsen, and Alejandro López-Ortiz. On the list update problem with advice. In LATA, volume 8370 of LNCS, pages 210–221, 2014. Full paper to appear in Inform. Comput. doi:10.1007/978-3-319-04921-2_17.

[45] Joan Boyar, Shahin Kamali, Kim S. Larsen, and Alejandro López-Ortiz. Online bin packing with advice. Algorithmica, 74(1):507–527, 2016. Preliminary version in STACS'14. doi:10.1007/s00453-014-9955-8.

[46] Joan Boyar and Christian Kudahl. Adding isolated vertices makes some online algorithms optimal. In Combinatorial Algorithms – 26th International Workshop, IWOCA 2015, Verona, Italy, October 5–7, 2015, Revised Selected Papers, pages 65–76, 2015. doi:10.1007/978-3-319-29516-9_6.

[47] Joan Boyar, Kim S. Larsen, and Abyayananda Maiti. A comparison of performance measures via online search. In Frontiers in Algorithmics and Algorithmic Aspects in Information and Management: Joint International Conference, FAW-AAIM 2012, Beijing, China, May 14–16, 2012, Proceedings, pages 303–314. Springer Berlin Heidelberg, 2012. doi:10.1007/978-3-642-29700-7_28.

[48] Elisabet Burjons, Juraj Hromkovič, Xavier Muñoz, and Walter Unger. Online graph coloring with advice and randomized adversary. In SOFSEM, volume 9587 of LNCS, pages 229–240, 2016. doi:10.1007/978-3-662-49192-8_19.

[49] Joseph Wun-Tat Chan, Francis Y. L. Chin, Deshi Ye, and Yong Zhang. Absolute and asymptotic bounds for online frequency allocation in cellular networks. Algorithmica, 58(2):498–515, 2010. doi:10.1007/s00453-009-9279-2.

[50] Joseph Wun-Tat Chan, Francis Y. L. Chin, Deshi Ye, Yong Zhang, and Hong Zhu. Frequency allocation problems for linear cellular networks. In ISAAC, volume 4288 of LNCS, pages 61–70, 2006. doi:10.1007/11940128_8.

[51] Marie G. Christ, Lene M. Favrholdt, and Kim S. Larsen. Online multi-coloring with advice. Theor. Comput. Sci., 596:79–91, 2015. Preliminary version in WAOA'14. doi:10.1016/j.tcs.2015.06.044.

[52] Jhoirene Clemente, Christian Kudahl, Dennis Komm, and Juraj Hromkovič. Advice complexity of the online search problem. In IWOCA, volume 9843 of LNCS, pages 203–212. Springer, 2016. doi:10.1007/978-3-319-44543-4_16.

[53] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2nd edition, 2006. doi:10.1002/047174882X.

[54] Marc Demange, Xavier Paradon, and Vangelis Th. Paschos. On-line maximum-order induced hereditary subgraph problems. International Transactions in Operational Research, 12(2):185–201, 2005.

[55] Marc Demange and Vangelis Th. Paschos. On-line vertex-covering. Theoretical Computer Science, 332(1–3):83–108, 2005.

[56] Reinhard Diestel. Graph Theory, 4th Edition, volume 173 of Graduate Texts in Mathematics. Springer, 2012.

[57] Jeffrey H. Dinitz and Douglas R. Stinson, editors. Contemporary Design Theory: A Collection of Surveys. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York, 1992. URL: http://opac.inria.fr/record=b1088981.

[58] Stefan Dobrev, Rastislav Královič, and Richard Královič. Advice complexity of maximum independent set in sparse and bipartite graphs. Theor. Comput. Syst., 56(1):197–219, 2015. Preliminary version in WAOA'12. doi:10.1007/s00224-014-9592-2.

[59] Stefan Dobrev, Rastislav Královič, and Euripides Markou. Online graph exploration with advice. In SIROCCO, volume 7355 of LNCS, pages 267–278, 2012. doi:10.1007/978-3-642-31104-8_23.

[60] Stefan Dobrev, Rastislav Královič, and Dana Pardubská. Measuring the problem-relevant information in input. RAIRO - Theor. Inf. Appl., 43(3):585–613, 2009. Preliminary version in SOFSEM'08. doi:10.1051/ita/2009012.

[61] Jérôme Dohrau. Online makespan scheduling with sublinear advice. In SOFSEM, volume 8939 of LNCS, pages 177–188, 2015. doi:10.1007/978-3-662-46078-8_15.

[62] Reza Dorrigiv, Meng He, and Norbert Zeh. On the advice complexity of buffer management. In ISAAC, volume 7676 of LNCS, pages 136–145, 2012. doi:10.1007/978-3-642-35261-4_17.

[63] Reza Dorrigiv and Alejandro López-Ortiz. A survey of performance measures for on-line algorithms. SIGACT News, 36:67–81, 2005.

[64] Christoph Dürr, Christian Konrad, and Marc P. Renault. On the power of advice and randomization for online bipartite matching. In ESA, volume 57 of LIPIcs, pages 37:1–37:16. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016. doi:10.4230/LIPIcs.ESA.2016.37.

[65] Tomás Ebenlendr and Jirí Sgall. Optimal and online preemptive scheduling on uniformly related machines. J. Sched., 12(5):517–527, 2009. Preliminary version in STACS'04. doi:10.1007/s10951-009-0119-7.

[66] Ran El-Yaniv, Amos Fiat, Richard Karp, and G. Turpin. Optimal search and one-way trading online algorithms. Algorithmica, 30:101–139, 2001. doi:10.1007/s00453-001-0003-0.

[67] Peter Elias. Universal codeword sets and representations of the integers. IEEE T. Inform. Theory, 21(2):194–203, 1975. doi:10.1109/TIT.1975.1055349.

[68] Yuval Emek, Pierre Fraigniaud, Amos Korman, and Adi Rosén. Online computation with advice. Theor. Comput. Sci., 412(24):2642–2656, 2011. Preliminary version in ICALP'09. doi:10.1016/j.tcs.2010.08.007.

[69] Paul Erdős and Joel Spencer. Probabilistic Methods in Combinatorics. Academic Press, 1974.

[70] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, 2004. Preliminary version in STOC'03. doi:10.1016/j.jcss.2004.04.011.

[71] Rudolf Fleischer and Michaela Wahl. Online scheduling revisited. J. Sched., 3:343–353, 2000. Preliminary version in ESA'00. The doi is for the conference version. doi:10.1007/3-540-45253-2_19.


[72] Michal Forišek, Lucia Keller, and Monika Steinová. Advice complexity of online coloring for paths. In LATA, volume 7183 of LNCS, pages 228–239, 2012. doi:10.1007/978-3-642-28332-1_20.

[73] Michal Forišek, Lucia Keller, and Monika Steinová. Advice complexity of online graph coloring. Unpublished manuscript, 2012. URL: http://people.ksp.sk/~misof/junk/chwd.pdf.

[74] Pierre Fraigniaud, David Ilcinkas, and Andrzej Pelc. Tree exploration with advice. Inf. Comput., 206(11):1276–1287, 2008. Preliminary version in MFCS'06. doi:10.1016/j.ic.2008.07.005.

[75] Pierre Fraigniaud, David Ilcinkas, and Andrzej Pelc. Communication algorithms with advice. J. Comput. Syst. Sci., 76(3–4):222–232, 2010. Preliminary version in PODC'06. doi:10.1016/j.jcss.2009.07.002.

[76] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979. See Dominating Set, page 190. doi:10.1137/1024022.

[77] Heidi Gebauer, Dennis Komm, Rastislav Královič, Richard Královič, and Jasmin Smula. Disjoint path allocation with sublinear advice. In COCOON, volume 9198 of LNCS, pages 417–429, 2015. doi:10.1007/978-3-319-21398-9_33.

[78] R. L. Graham. Bounds for certain multiprocessing anomalies. Bell Syst. Tech. J., 45(9):1563–1581, 1966. doi:10.1002/j.1538-7305.1966.tb01709.x.

[79] Sushmita Gupta, Shahin Kamali, and Alejandro López-Ortiz. On advice complexity of the k-server problem under sparse metrics. In SIROCCO, volume 8179 of LNCS, pages 55–67, 2013. doi:10.1007/978-3-319-03578-9_5.

[80] Grzegorz Gutowski, Jakub Kozik, Piotr Micek, and Xuding Zhu. Lower bounds for on-line graph colorings. In ISAAC, volume 8889 of LNCS, pages 507–515, 2014. doi:10.1007/978-3-319-13075-0_40.

[81] András Gyárfás, Zoltán Király, and Jenö Lehel. On-line competitive coloring algorithms. Technical Report TR-9703-1, Institute of Mathematics at Eötvös Loránd University, 1997. Available online at http://www.cs.elte.hu/tr97/.

[82] András Gyárfás, Zoltán Király, and Jenö Lehel. On-line 3-chromatic graphs I. Triangle-free graphs. SIAM J. Discrete Math., 12:385–411, 1999.

[83] András Gyárfás and Jenő Lehel. First fit and on-line chromatic number of families of graphs. Ars Combinatoria, 29(C):168–176, 1990.

[84] András Gyárfas and Jenö Lehel. Online and first-fit colorings of graphs. J. Graph Theor., 12(2):217–227, 1988. doi:10.1002/jgt.3190120212.

[85] Magnús M. Halldórsson. Online coloring known graphs. In 10th ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 917–918, 1999.


[86] Magnús M. Halldórsson, Kazuo Iwama, Shuichi Miyazaki, and Shiro Taketomi. Online independent sets. Theoretical Computer Science, 289(2):953–962, 2002. doi:10.1016/S0304-3975(01)00411-X.

[87] Magnús M. Halldórsson, Kazuo Iwama, Shuichi Miyazaki, and Shiro Taketomi. Online independent sets. Theor. Comput. Sci., 289(2):953–962, 2002. Preliminary version in COCOON'00. doi:10.1016/S0304-3975(01)00411-X.

[88] Magnús M. Halldórsson and Mario Szegedy. Lower bounds for on-line graph coloring. Theor. Comput. Sci., 130(1):163–174, 1994. Preliminary version in SODA'92. doi:10.1016/0304-3975(94)90157-0.

[89] Johan Håstad. Clique is hard to approximate within n^{1-ε}. Acta Math., 182(1):105–142, 1999.

[90] Sandy Heydrich and Rob van Stee. Beating the harmonic lower bound for online bin packing. In ICALP, volume 55 of LIPIcs, pages 41:1–41:14. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016. doi:10.4230/LIPIcs.ICALP.2016.41.

[91] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc., 58(301):13–30, 1963. doi:10.2307/2282952.

[92] Juraj Hromkovič, Rastislav Královič, and Richard Královič. Information complexity of online problems. In MFCS, volume 6281 of LNCS, pages 24–36, 2010. doi:10.1007/978-3-642-15155-2_3.

[93] Yikun Huang and Yong Wu. Optimal semi-online algorithm for machine covering with nonsimultaneous machine available times. International Mathematical Forum, 5(4):185–190, 2010. URL: http://www.m-hikari.com/imf-2010/1-4-2010/wuyongIMF1-4-2010.pdf.

[94] Yiwei Jiang and Yong He. Optimal semi-online algorithms for preemptive scheduling problems with inexact partial information. Acta Inform., 44(7–8):571–590, 2007. Preliminary version in ISAAC'05. doi:10.1007/s00236-007-0058-8.

[95] Bala Kalyanasundaram and Kirk R. Pruhs. Constructing competitive tours from local information. Theor. Comput. Sci., 130(1):125–138, 1994. Preliminary version in ICALP'93. doi:10.1016/0304-3975(94)90155-4.

[96] Shahin Kamali and Alejandro López-Ortiz. Almost online square packing. In The Canadian Conference on Computational Geometry, pages 162–168, 2014. URL: http://www.cccg.ca/proceedings/2014/papers/paper24.pdf.

[97] Shahin Kamali and Alejandro López-Ortiz. Better compression through better list update algorithms. In Data Compression Conference, pages 372–381, 2014. doi:10.1109/DCC.2014.86.

[98] Anna R. Karlin, Mark S. Manasse, Larry Rudolph, and Daniel Dominic Sleator. Competitive snoopy caching. Algorithmica, 3:77–119, 1988. Preliminary version in FOCS'86. doi:10.1007/BF01762111.


[99] Lucia Keller. Complexity of optimization problems, advice and approximation. PhD thesis, ETH Zürich, 2014. doi:10.3929/ethz-a-010143463.

[100] Hans Kellerer, Vladimir Kotov, and Michaël Gabay. An efficient algorithm for semi-online multiprocessor scheduling with given total processing time. J. Sched., 18(6):623–630, 2015. doi:10.1007/s10951-015-0430-4.

[101] Hal A. Kierstead. Coloring graphs online. In Online Algorithms – The State of the Art, pages 281–305. Springer, 1998. doi:10.1007/BFb0029574.

[102] Hal A. Kierstead and William T. Trotter. On-line graph coloring. In On-Line Algorithms, volume 7 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 85–92. DIMACS/AMS, 1991.

[103] Dennis Komm, Rastislav Královič, Richard Královič, and Christian Kudahl. Advice complexity of the online induced subgraph problem. In MFCS, volume 58 of LIPIcs, pages 59:1–59:13. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2016. doi:10.4230/LIPIcs.MFCS.2016.59.

[104] Dennis Komm, Rastislav Královič, Richard Královič, and Jasmin Smula. Treasure hunt with advice. In SIROCCO, volume 9439 of LNCS, pages 328–341, 2015. doi:10.1007/978-3-319-25258-2_23.

[105] Dennis Komm and Richard Královič. Advice complexity and barely random algorithms. RAIRO - Theor. Inf. Appl., 45(2):249–267, 2011. Preliminary version in SOFSEM'11. doi:10.1051/ita/2011105.

[106] Dennis Komm, Richard Královič, and Tobias Mömke. On the advice complexity of the set cover problem. In CSR, volume 7353 of LNCS, pages 241–252, 2012. doi:10.1007/978-3-642-30642-6_23.

[107] Elias Koutsoupias. The k-server problem. Computer Science Review, 3(2):105–118, 2009. doi:10.1016/j.cosrev.2009.04.002.

[108] Elias Koutsoupias and Christos H. Papadimitriou. On the k-server conjecture. J. ACM, 42(5):971–983, 1995. Preliminary version in STOC'94. doi:10.1145/210118.210128.

[109] Rastislav Královič. Advice complexity: Quantitative approach to a-priori information. In SOFSEM, volume 8327 of LNCS, pages 21–29, 2014. doi:10.1007/978-3-319-04298-5_3.

[110] J. Kratochvíl. Precoloring extension with fixed color bound. Acta Mathematica Universitatis Comenianae. New Series, 62(2):139–153, 1993. URL: http://eudml.org/doc/118661.

[111] Tomasz Krawczyk and Bartosz Walczak. Coloring relatives of interval overlap graphs via on-line games. In Javier Esparza, Pierre Fraigniaud, Thore Husfeldt, and Elias Koutsoupias, editors, Automata, Languages, and Programming, volume 8572 of Lecture Notes in Computer Science, pages 738–750. Springer Berlin Heidelberg, 2014.

[112] Sacha Krug. Towards using the history in online computation with advice. RAIRO - Theor. Inf. Appl., 49(2):139–152, 2015. doi:10.1051/ita/2015003.


[113] Christian Kudahl. On-line graph coloring. Master's thesis, University of Southern Denmark, 2013.

[115] Christian Kudahl. Deciding the on-line chromatic number of a graph with pre-coloring is PSPACE-complete. CoRR, abs/1406.1623, 2014. URL: http://arxiv.org/abs/1406.1623.

[116] Christian Kudahl. Deciding the on-line chromatic number of a graph with pre-coloring is PSPACE-complete. In Vangelis Th. Paschos and Peter Widmayer, editors, Algorithms and Complexity – 9th International Conference, CIAC, volume 9079 of LNCS, pages 313–324. Springer, 2015. doi:10.1007/978-3-319-18173-8_23.

[117] Troy Lee and Adi Shraibman. Lower bounds in communication complexity. Now Publishers Inc, 2009.

[118] A. Gyárfás, J. Lehel, and Z. Király. On-line graph coloring and finite basis problems. In Combinatorics: Paul Erdős is Eighty, Volume 1, pages 207–214, 1993.

[119] John M. Lewis and Mihalis Yannakakis. The node-deletion problem for hereditary properties is NP-complete. J. Comput. Syst. Sci., 20(2):219–230, 1980. doi:10.1016/0022-0000(80)90060-4.

[120] Julian Lorenz, Konstantinos Panagiotou, and Angelika Steger. Optimal algorithms for k-search with application in option pricing. Algorithmica, 55(2):311–328, 2009. doi:10.1007/s00453-008-9217-8.

[121] László Lovász, Michael Saks, and W. T. Trotter. An on-line graph coloring algorithm with sublinear performance ratio. Discrete Mathematics, 75(1–3):319–325, 1989. doi:10.1016/0012-365X(89)90096-4.

[122] Carsten Lund and Mihalis Yannakakis. The approximation of maximum subgraph problems, pages 40–51. Springer Berlin Heidelberg, 1993. doi:10.1007/3-540-56939-1_60.

[123] Alberto Marchetti-Spaccamela and Carlo Vercellis. Stochastic on-line knapsack problems. Math. Program., 68:73–104, 1995. doi:10.1007/BF01585758.

[124] Dániel Marx. Precoloring extension on unit interval graphs. Discrete Applied Mathematics, 154(6):995–1002, 2006. doi:10.1016/j.dam.2005.10.008.

[125] Lyle A. McGeoch and Daniel D. Sleator. A strongly competitive randomized paging algorithm. Algorithmica, 6:816–825, 1991. doi:10.1007/BF01759073.


[126] Nicole Megow, Kurt Mehlhorn, and Pascal Schweitzer. Online graph exploration: New results on old and new algorithms. Theor. Comput. Sci., 463:62–72, 2012. Preliminary version in ICALP'11. doi:10.1016/j.tcs.2012.06.034.

[127] Jesper W. Mikkelsen. Optimal online edge coloring of planar graphs with advice. In CIAC, volume 9079 of LNCS, pages 352–364, 2015. doi:10.1007/978-3-319-18173-8_26.

[128] Jesper W. Mikkelsen. Randomization can be as helpful as a glimpse of the future in online computation. In ICALP, LIPIcs, 2016. URL: http://arxiv.org/abs/1511.05886.

[129] Michael Mitzenmacher and Eli Upfal. Probability and Computing – Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.

[130] Shuichi Miyazaki. On the advice complexity of online bipartite matching and online stable marriage. Inform. Process. Lett., 114(12):714–717, 2014. doi:10.1016/j.ipl.2014.06.013.

[131] F. P. Ramsey. On a problem of formal logic. Proceedings of the London Mathematical Society, s2-30(1):264–286, 1930. doi:10.1112/plms/s2-30.1.264.

[132] Ran Raz and Shmuel Safra. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In STOC'97, pages 475–484. ACM, 1997.

[133] Marc Renault. Lower and upper bounds for online algorithms with advice. PhD thesis, Université Paris Diderot – Paris 7, 2014. URL: https://www.irif.univ-paris-diderot.fr/~mrenault/papers/renaultPhD.pdf.

[134] Marc P. Renault and Adi Rosén. On online algorithms with advice for the k-server problem. Theor. Comput. Syst., 56(1):3–21, 2015. Preliminary version in WAOA'11. doi:10.1007/s00224-012-9434-z.

[135] Marc P. Renault, Adi Rosén, and Rob van Stee. Online algorithms with advice for bin packing and scheduling problems. Theor. Comput. Sci., 600:155–170, 2015. doi:10.1016/j.tcs.2015.07.050.

[136] John F. Rudin, III. Improved Bounds for the On-Line Scheduling Problem. PhD thesis, University of Texas at Dallas, 2001.

[137] Sebastian Seibert, Andreas Sprock, and Walter Unger. Advice complexity of the online coloring problem. In CIAC, volume 7878 of LNCS, pages 345–357, 2013. doi:10.1007/978-3-642-38233-8_29.

[138] Steven S. Seiden, Jirí Sgall, and Gerhard J. Woeginger. Semi-online scheduling with decreasing job sizes. Oper. Res. Lett., 27(5):215–221, 2000. doi:10.1016/S0167-6377(00)00053-5.


[139] Daniel D. Sleator and Robert E. Tarjan. Amortized efficiency of list update and paging rules. Commun. ACM, 28(2):202–208, 1985. Preliminary version in STOC'84. doi:10.1145/2786.2793.

[140] Jasmin Smula. Information Content of Online Problems, Advice versus Determinism and Randomization. PhD thesis, ETH Zürich, 2015. doi:10.3929/ethz-a-010497710.

[141] Andreas Sprock. Analysis of hard problems in reoptimization and online computation. PhD thesis, ETH Zürich, 2013.

[142] Björn Christian Steffen. Advice complexity of online graph problems. PhD thesis, ETH Zürich, 2014. doi:10.3929/ethz-a-010185054.

[143] Larry J. Stockmeyer. The polynomial-time hierarchy. Theoretical Computer Science, 3(1):1–22, 1976.

[144] Zhiyi Tan and Yong Wu. Optimal semi-online algorithms for machine covering. Theor. Comput. Sci., 372(1):69–80, 2007. doi:10.1016/j.tcs.2006.11.015.

[145] Zsolt Tuza. Graph colorings with local constraints – a survey. Discuss. Math. Graph Theory, 17:161–228, 1997.

[146] Wen-Guey Tzeng. On-line dominating set problems for graphs. In Ding-Zhu Du and Panos M. Pardalos, editors, Handbook of Combinatorial Optimization, pages 1271–1288. Springer, 1999.

[147] Vijay V. Vazirani. Approximation Algorithms. Springer, 2003. doi:10.1007/978-3-662-04565-7.

[148] Sundar Vishwanathan. Randomized online graph coloring. Journal of Algorithms, 13(4):657–669, 1992. doi:10.1016/0196-6774(92)90061-G.

[149] Vadim G. Vizing. Critical graphs with given chromatic class. Metody Diskret. Analiz., 5:9–17, 1965. In Russian.

[150] David Wehner. A new concept in advice complexity of job shop scheduling. In MEMICS, volume 8934 of LNCS, pages 147–158, 2014. doi:10.1007/978-3-319-14896-0_13.

[151] David Wehner. Advice complexity of fine-grained job shop scheduling. In CIAC, volume 9079 of LNCS, pages 416–428, 2015. doi:10.1007/978-3-319-18173-8_31.

[152] Gerhard J. Woeginger. A polynomial-time approximation scheme for maximizing the minimum machine completion time. Oper. Res. Lett., 20(4):149–154, 1997. doi:10.1016/S0167-6377(96)00055-7.

[153] Yinfeng Xu, Wenming Zhang, and Feifeng Zheng. Optimal algorithms for the online time series search problem. Theoretical Computer Science, 412(3):192–197, 2011. doi:10.1016/j.tcs.2009.09.026.

[154] Andrew Chi-Chin Yao. Probabilistic computations: Toward a unified measure of complexity. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science, SFCS '77, pages 222–227, Washington, DC, USA, 1977. IEEE Computer Society. doi:10.1109/SFCS.1977.24.

[155] Jinjiang Yuan, C. T. Ng, and T. C. E. Cheng. Best semi-online algorithms for unbounded parallel batch scheduling. Discrete Appl. Math., 159:838–847, 2011. doi:10.1016/j.dam.2011.01.003.

[156] Xiaofan Zhao and Hong Shen. On the advice complexity of one-dimensional online bin packing. In Frontiers in Algorithmics, volume 8497 of LNCS, pages 320–329, 2014. doi:10.1007/978-3-319-08016-1_29.
