
A Test Score Based Approach to Stochastic Submodular Optimization

Shreyas Sekar, Harvard Business School, Boston, MA, [email protected]
Milan Vojnovic, Department of Statistics, London School of Economics (LSE), London, UK, [email protected]
Se-Young Yun, Department of Industrial and System Engineering, KAIST, South Korea, [email protected]

We study the canonical problem of maximizing a stochastic submodular function subject to a cardinality constraint, where the goal is to select a subset from a ground set of items with uncertain individual performances to maximize their expected group value. Although near-optimal algorithms have been proposed for this problem, practical concerns regarding scalability, compatibility with distributed implementation, and expensive oracle queries persist in large-scale applications. Motivated by online platforms that rely on individual item scores for content recommendation and team selection, we propose a special class of algorithms that select items based solely on individual performance measures known as test scores. The central contribution of this work is a novel and systematic framework for designing test score based algorithms for a broad class of naturally occurring utility functions. We introduce a new scoring mechanism that we refer to as replication test scores and prove that as long as the objective function satisfies a diminishing returns property, one can leverage these scores to compute solutions that are within a constant factor of the optimum. We then extend our results to the more general stochastic submodular welfare maximization problem, where the goal is to select items and assign them to multiple groups to maximize the sum of the expected group values. For this more difficult problem, we show that replication test scores can be used to develop an algorithm that approximates the optimum solution up to a logarithmic factor. The techniques presented in this work bridge the gap between the rigorous theoretical work on submodular optimization and simple, scalable heuristics that are useful in certain domains. In particular, our results establish that in many applications involving the selection and assignment of items, one can design algorithms that are intuitive and practically relevant with only a small loss in performance compared to the state-of-the-art approaches.

Key words: stochastic combinatorial optimization, submodular functions, welfare maximization, test scores

1. Introduction

A common framework for combinatorial optimization that captures problems arising in wide-ranging applications is that of selecting a finite set of items from a larger candidate pool and assigning these items to one or more groups. Such problems form the core basis for the online content recommendation systems encountered in platforms pertaining to knowledge-sharing (e.g., Stack Overflow, Reddit), e-commerce (Li 2011), and digital advertising, as well as team selection problems arising in gaming (Graepel et al. 2007) and traditional hiring. A crucial feature of these environments is the intrinsic uncertainty associated with the underlying items and, consequently, with sets of items. Given this uncertainty, the decision maker's objective in these domains is to maximize the expected group value associated with the set of items and their assignment.

As a concrete application, consider an online gaming platform where the items correspond to players; the platform may seek to assign (a subset of) players to teams in order to ensure competitive matches or to maximize the winning probability for a specific team. Other scenarios relating to team selection—e.g., a company hiring a set of candidates or a school identifying top students for a tournament—can also be modeled in an analogous fashion. Alternatively, these optimization problems arise in online communities such as Stack Overflow or Reddit. Here, the items represent topics or questions, and the platform wishes to present a collection of relevant topics to an incoming user with the goal of maximizing that user's engagement measured via clicks or answers. Finally, in digital advertising, items may refer to ads displayed to a user in a marketing campaign, and the value results from conversion events such as a click or product purchase. Naturally, all of these constitute stochastic environments due to the underlying uncertainty, e.g., the performance of any individual player is not deterministic in the case of a gaming platform, and there is considerable uncertainty regarding a user's propensity to click or respond to a topic on knowledge platforms.

There are several fundamental challenges in the above applications that necessitate innovative algorithmic approaches. First, the value derived from a set of items may not be linear in that of the individual items and may, in fact, reflect a more subtle relationship. For example, agents or topics may complement or supplement each other; the efficiency of a team may grow with team size but exhibit diminishing returns as more members are added due to coordination inefficiencies. Second, the intrinsic uncertainty regarding the value of individual items may affect the group value in surprising ways due to the non-linearity of the objective. As we show later, there are situations where a set of 'high-risk, high-reward' items may outperform a collection of stable-value items even when the latter type provides higher value in expectation. Finally, we also face issues relating to computational complexity, since the number of items and groups can be very large in online platform scenarios and the underlying combinatorial optimization problems are usually NP-Hard.

Despite the above challenges, a litany of sophisticated algorithmic solutions has been developed for the problems mentioned previously. Given the intricacies of the setting, these algorithms tend to be somewhat complex, and questions remain as to whether these methods are suitable for the scenarios outlined earlier owing to issues regarding scalability, interpretability, and the difficulties of function evaluation. On the other hand, it is common practice in many domains to select or assign items by employing algorithms that base their decisions on individual item scores—these represent unique statistics associated with each item that serve as a proxy for the item's quality or relevance to the task at hand. At a high level, these algorithms use only the scores computed for individual items—each item's score is independent of other items—to select items and, as such, avoid the practical issues that plague traditional algorithmic paradigms.

To expand on this thesis, consider a dynamic online portal such as Stack Overflow that hosts over eighteen million questions and wishes to recommend the most relevant subset to each incoming user. The platform may find it impractical to recompute the optimal recommended set of questions every time a new batch of questions is posted; thus, many traditional optimization methods are not scalable. At the same time, content recommendation services typically maintain relevance scores for each question and user-type pair that do not vary as new questions are posted and are utilized in practice to generate recommendation sets. In a similar vein, online gaming platforms estimate skill ratings (scores) for individual players based only on their past performance, which are in turn used as inputs for matchmaking. When it comes to team formation, these score based approaches may be preferable to standard algorithms that require oracle access to the performance of every possible team. Indeed, evaluating the expected value of every subset of players even before the teams are formed seems prohibitively expensive.

Clearly, algorithms for selecting or assigning items based solely on individual item scores are appealing in many domains because of their conceptual and computational simplicity. However, a natural concern is that restricting the algorithmic landscape to these simple score based approaches may result in suboptimal solutions because they may be unable to account for complicated dependencies between individual item performance and the group output. Motivated by this tension, we study the following fundamental question:

Can algorithms that assign items to groups based on individual item scores achieve near-optimal group performance, and if so, under what conditions?

We briefly touch upon our framework for stochastic combinatorial optimization. Let N = {1, 2, ..., n} be a ground set of items and let 2^N denote all possible subsets of N. Given a feasible set F ⊆ 2^N of items, a value function f : 2^N × R^n → R_+, and a distribution P of a random n-dimensional vector X = (X_1, X_2, ..., X_n), our goal is to select a set S* ∈ F that is a solution to

\[
\max_{S \in \mathcal{F}} \; u(S) := \mathbb{E}_{X \sim P}[f(S, X)]. \tag{1}
\]

In later sections, we generalize this formulation to consider problems where the goal is to select multiple subsets of N and assign them to separate groups. The optimization problem (1) is further refined as follows (see Section 2 for formal definitions):


(a) We focus primarily on the canonical problem of maximizing a stochastic monotone submodular function subject to a cardinality constraint. This is a special case of the optimization problem in (1) where F is defined by the cardinality constraint |S| = k for a given parameter k, and the value function f is such that the set function u : 2^N → R_+ is submodular.

(b) We restrict our attention to value functions f where the output of f(S, X) depends only on the elements of X that correspond to S, i.e., (X_i)_{i∈S}. Further, X_i denotes the random performance of item i ∈ N and is distributed independently of all other X_j for j ≠ i. Therefore, P = P_1 × P_2 × ... × P_n, so that X_i ∼ P_i.

The framework outlined above captures a broad class of optimization problems arising in diverse

domains. For example, submodular functions have featured in a variety of applications such as facility location (Ahmed and Atamturk 2011), viral influence maximization, job scheduling (Cohen et al. 2019), content recommendation, and team formation. In particular, submodularity allows us to model positive synergies among items and capture the natural notion of diminishing returns to scale that is prevalent in so many situations—i.e., the marginal value derived by adding an item to a set cannot be greater than that obtained by adding it to any of its subsets. Moreover, in content recommendation as well as team selection, it is natural to expect that the performance of a group of elements S would simply be a function (albeit a non-linear one) of the individual performances of the members in S, namely (X_i)_{i∈S}. This is represented by our assumptions on the value function f.
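To make formulation (1) concrete, the following minimal Python sketch (with hypothetical item distributions; nothing here is taken from the paper) estimates u(S) = E[f(S, X)] by Monte Carlo sampling, for a value function that depends only on (X_i)_{i∈S}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: item i performs X_i ~ Exponential(mean_i),
# independently of all other items, so that P = P_1 x ... x P_n.
means = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
n = len(means)

def f(S, x):
    """Best-shot value function: f(S, x) depends only on (x_i) for i in S."""
    return max(x[i] for i in S)

def estimate_u(S, trials=50_000):
    """Monte Carlo estimate of u(S) = E_{X ~ P}[f(S, X)]."""
    X = rng.exponential(means, size=(trials, n))
    return np.mean([f(S, x) for x in X])

print(estimate_u({0, 3}))  # expected best-shot value of the set {0, 3}
```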

The problem of maximizing a submodular function subject to a cardinality constraint is known to be NP-Hard and consequently, there is a rich literature on approximation algorithms for both the deterministic (Krause and Golovin 2014) and stochastic variants (Asadpour and Nazerzadeh 2016). In a seminal paper, Nemhauser et al. (1978) established that a natural greedy algorithm (sequentially selecting items that yield the largest marginal value) guarantees a 1 − 1/e approximation of the optimum value, which is tight (Feige 1998). Despite the popularity of greedy and other approaches, it is worth noting for our purposes that almost all of the algorithms in this literature are not robust to changes in the input. That is, as the ground set N grows, it is necessary to re-run the entire greedy algorithm to generate an approximately optimal subset. Furthermore, as mentioned earlier, these methods extensively utilize value oracle queries—access to the objective function is through a black-box returning u(S) for any given set S.

Test Score Algorithms. We now formalize the notion of individual item scores, which we refer to henceforth as test scores. Informally, a test score is an item-specific parameter that quantifies the suitability of the item for the desired objective (i.e., f). To ensure scalability, it is crucial that an item's score depends only on the marginal distribution of the item's individual performance and the problem specification. Formally, the test score a_i ∈ [0, ∞) of an item i ∈ N is defined as:

\[
a_i = h(f, \mathcal{F}, P_i), \tag{2}
\]


where h is a mapping from the item's marginal distribution (P_i), the objective value function f, and the constraint set F to a single number. Naturally, there are innumerable ways to devise a meaningful test score mapping h. Obvious examples include: (a) mean test scores, where a_i = E[X_i], and (b) quantile test scores, where a_i is the θ-quantile of distribution P_i for some suitable θ. However, we prove later that algorithms that base their decisions on these natural candidates do not always yield near-optimal solutions.
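As an illustration, here is a hedged Python sketch of these two natural test scores together with the generic selection rule of keeping the k highest-scoring items; the sampling routine and parameter values are hypothetical:

```python
import numpy as np

def mean_test_score(samples):
    """Mean test score: a_i = E[X_i], estimated from samples of P_i."""
    return float(np.mean(samples))

def quantile_test_score(samples, theta=0.9):
    """Quantile test score: a_i is the theta-quantile of P_i."""
    return float(np.quantile(samples, theta))

def top_k_by_score(scores, k):
    """Generic test score algorithm: pick the k items with the highest scores."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

rng = np.random.default_rng(1)
samples = [rng.exponential(m, size=10_000) for m in (1.0, 2.0, 0.5)]
scores = [mean_test_score(s) for s in samples]
print(top_k_by_score(scores, k=2))  # indices of the two highest-mean items
```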

The design question studied in this paper is to identify a suitable test score mapping rule h such that algorithms that leverage these scores can obtain near-optimal guarantees for the problem defined in (1). Formally, a test score algorithm is a procedure that computes the test scores for each item in N according to some mapping h and uses only these scores to determine a feasible solution S for (1), e.g., by selecting the k items with the highest scores. Test score algorithms were first introduced by Kleinberg and Raghu (2015), who developed algorithms for a team formation problem for a single specific function f. In this work, we propose a novel test score mechanism and utilize it to derive improved guarantees for a large class of naturally occurring functions.

Test score algorithms are particularly salient in large-scale applications when compared to more traditional optimization methods such as greedy. First, as the ground set N changes (e.g., posts are added or deleted), this does not alter the scores of items still present in the ground set, since an item's test score depends only on its own performance distribution. This allows us to eliminate significant computational overhead in dynamic environments such as online platforms. Second, test score computations are trivially parallelizable—implementable via distributed computation—since each item's test score can be computed on a separate machine. Designing algorithms that are amenable to distributed implementation (Balkanski et al. 2019) is a major contemporary concern, and it is worth noting that standard greedy or linear programming approaches do not fulfill this criterion. Finally, test score algorithms allow us to make fewer and simpler oracle calls (function evaluations), as we highlight later. We now present a stylized formulation of a stochastic submodular optimization problem in an actual application in order to better illustrate the role of test scores.

Example 1. (Content Recommendation on Stack Overflow or Reddit) The ground set N comprises topics created by users on the website. The platform is interested in selecting a set of k topics from the ground set and presenting them to an arriving user in order to maximize satisfaction or engagement. For simplicity, the topics can be broadly classified into two categories—set A, consisting of useful but not very exciting topics, and set B, which encapsulates topics that are polarizing or exciting.¹ Mathematically, we can capture this selection problem using our framework by taking X_i to denote the utility that a user derives from topic i ∈ N (alternatively, X_i could denote the probability of clicking or responding to a topic). For example, X_i = a with probability one for i ∈ A, as these topics are stable, whereas X_i = b/p with probability p for each risky topic i ∈ B. The selection problem becomes particularly interesting when b < a < b/p. Due to cognitive limitations, one can assume that a user engages with at most r ≤ k topics from the assortment. Therefore, the objective function is defined as follows:

\[
f(S, X) = \sum_{j=1}^{r} X_{(j)}(S),
\]

where X_{(j)}(S) refers to the j-th largest variable X_i for i ∈ S. In the extreme case, r = 1 and each user clicks on at most one topic. We refer to these as the top-r and best-shot functions, respectively, in Section 2.

¹ For instance, Reddit identifies certain posts as controversial based on the ratio of upvotes and downvotes.
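The following sketch simulates Example 1 under illustrative parameter values of our own choosing (satisfying b < a < b/p, and assuming a risky topic yields 0 when it fails to pay off); it estimates the expected top-r value of an assortment with a given mix of stable and risky topics, showing that an all-risky set can dominate an all-stable one:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, p = 1.0, 0.5, 0.1   # illustrative values with b < a < b/p
k, r = 5, 1               # assortment size and engagement budget (r = 1: best-shot)

def expected_top_r(num_risky, trials=50_000):
    """Estimate u(S) for a set of num_risky type-B topics and k - num_risky type-A topics."""
    stable = np.full((trials, k - num_risky), a)      # X_i = a with probability one
    # Risky topic: X_i = b/p with probability p (and 0 otherwise, an assumption).
    risky = (rng.random((trials, num_risky)) < p) * (b / p)
    x = np.concatenate([stable, risky], axis=1)
    top_r = np.sort(x, axis=1)[:, -r:]                # r largest realizations per trial
    return top_r.sum(axis=1).mean()

for m in range(k + 1):
    print(f"{m} risky topics: u(S) ~ {expected_top_r(m):.3f}")
```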

The tradeoff between 'high-risk-high-reward' items and more stable items arises in a large class of selection problems in the presence of uncertainty. For example, in online gaming, as in other team selection scenarios, a natural contention occurs between high performing players who exhibit a large variance (set B) and more consistent players (set A). In applications involving team formation, it is natural to use the CES (Constant Elasticity of Substitution) utility function as the objective, i.e.,

\[
f(S, X) = \Big( \sum_{i \in S} X_i^r \Big)^{1/r},
\]

where the value of r indicates the degree of substitutability of the task performed by the players (Fu et al. 2016). In this work, we design a natural test score based algorithm that allows us to obtain constant factor approximations for stochastic submodular optimization for all of the above objective functions.

1.1. Main Contributions

The primary conceptual contribution of this study is the introduction of a framework for the analysis of test score based algorithms for stochastic combinatorial optimization problems involving selection and assignment. We believe that this paradigm helps bridge the gap between theory and practice, particularly in large-scale applications where quality or relevance scores are prominently used for optimization. For these cases, the mechanisms developed in this work provide a rigorous framework for computing and utilizing these scores.

Our main technical contribution is the design of a test score mapping which gives us good approximation algorithms for two NP-Hard problems, namely: (a) maximizing a stochastic monotone submodular function subject to a cardinality constraint, and (b) maximizing a stochastic submodular welfare function, defined as a sum of stochastic monotone submodular functions subject to individual cardinality constraints. The welfare maximization problem is a strict generalization of the former and is of interest in online platforms, where items are commonly assigned to multiple groups, e.g., the selection of multiple disjoint teams for an online gaming tournament.

We now highlight our results for the first problem. We identify a special type of test scores that we refer to as replication test scores and show that under a sufficient condition on the value function (extended diminishing returns), we achieve a constant factor approximation for the problem of maximizing a stochastic submodular function subject to a cardinality constraint. At a high level, replication test scores can be interpreted as a quantity that measures both an item's individual performance as well as its marginal contribution to a larger team of equally skilled items—see Section 3 for a formal treatment. Additionally, we also show the following:

• We provide an intuitive interpretation of the extended diminishing returns property and prove that it is satisfied by a number of naturally occurring value functions, including but not limited to the functions mentioned in our examples, such as best-shot, top-r, and CES.

• We show that replication scores enjoy a special role in the family of all feasible test scores: in particular, for any given value function, if there exist any test scores that guarantee a constant factor approximation for the submodular maximization problem, then it is possible to obtain a constant factor approximation using replication test scores. This has the important implication that in order to find good approximation factors, it suffices to consider replication test scores.

• We highlight cases where natural test score measures such as mean and quantile test scores do not yield a constant factor approximation. We provide a tight characterization of their efficiency for the CES function—specifically, mean test scores provide only a 1/k^{1−1/r}-approximation to the optimum, and quantile scores do not guarantee a constant-factor approximation when r < Θ(log(k)). Recall that r denotes the degree of substitutability among items.

Finally, for the more general problem of stochastic submodular welfare maximization subject to cardinality constraints, with the value functions satisfying the extended diminishing returns condition, we establish that replication test scores guarantee an Ω(1/log(k))-approximation to the optimum value, where k is the maximum cardinality constraint. This approximation is achieved via a slightly more intricate algorithm that greedily assigns items to groups based on their replication test scores.

Our results are established by a novel framework that can be seen as approximating (sketching) set functions using test scores. In general, a sketch of a set function is defined by two simpler functions that lower and upper bound the original set function within given approximation factors. In our context, we present a novel construction of a sketch that relies only on replication test scores to approximate a submodular function everywhere. By leveraging this sketch, we show that the value of the set containing the k items with the highest test scores is only a constant factor smaller than that of the optimal set. These results may be of independent interest.
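To spell out the reasoning step behind this approach (in our notation, not necessarily the paper's), suppose a test score based function v sketches u within factors c_1 ≤ 1 ≤ c_2:

```latex
% A multiplicative sketch of u by a test-score-based function v (notation ours):
%   c_1 v(S) <= u(S) <= c_2 v(S)  for all S.
% If \hat{S} maximizes v over the feasible sets, then
u(\hat{S}) \;\ge\; c_1\, v(\hat{S}) \;\ge\; c_1\, v(\mathrm{OPT})
\;\ge\; \frac{c_1}{c_2}\, u(\mathrm{OPT}),
% i.e., \hat{S} is a (c_1/c_2)-approximation of the optimum.
```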

1.2. Related Work

The problem of maximizing a stochastic submodular function subject to a cardinality constraint by using test scores was first posed by Kleinberg and Raghu (2015), who developed constant factor approximation algorithms but only for a specific value function, namely the top-r function. They introduced the term 'test scores' in the context of designing algorithms for team hiring to indicate that the relevant score for each candidate can often be measured by means of an actual test. Their work also provides some impossibility results, namely that test score algorithms cannot yield desirable guarantees for certain submodular functions. Our work differs in several respects. First, we show that test scores can guarantee a constant factor approximation for a broad class of stochastic monotone submodular functions, which includes different instances of value functions used in practice. Second, we extend this theoretical framework to the more general problem of stochastic submodular welfare maximization, and obtain novel approximation results by using test scores. Third, we develop a unifying and systematic framework based on approximating set functions by simpler test score based sketches.

As we touched upon earlier, submodular functions are found in a plethora of settings, and there is a rich literature on developing approximation algorithms for different variants of the cardinality-constrained and welfare maximization problems (Lehmann et al. 2006, Vondrak 2008). Commonly used algorithmic paradigms for these problems include greedy, local search, and linear programming (with rounding). Due to their reliance on these sophisticated techniques, most if not all of these algorithms are (a) not scalable in dynamic environments, as the algorithm has to be fully re-executed every time the ground set changes, and (b) hard to implement in a parallel computing model. More importantly, these policies are inextricably tied to the value oracle model and hence tend to query the oracle a large number of times; often these queries are aimed at evaluating the function value for arbitrary subsets of the ground set. As we illustrate in Section 2.4, oracle queries can be expensive in certain cases. On the other hand, the test score algorithm proposed in this work makes use of far fewer oracle queries. Within the realm of submodular maximization, there are three distinct strands of literature that seek to tackle each of the three issues mentioned above.

• Dynamic Environments: A growing body of work has sought to develop online algorithms for submodular and welfare maximization problems in settings where the elements of the ground set arrive sequentially (Feldman and Zenklusen 2018, Korula et al. 2018). In contrast to this work, the decisions made by online algorithms are irrevocable, whereas test score algorithms are only aimed at reducing the computational burden when the ground set changes.

• Distributed Implementation: Following the rise of big data applications and map-reduce models, there has been a renewed focus on developing algorithms for submodular optimization that are suitable for parallel computing. The state-of-the-art (distributed) algorithms for submodular maximization are O(log(n))-adaptive—they run for O(log(n)) sequential rounds with parallel computations in each round (Balkanski et al. 2019, Fahrbach et al. 2019). Since each test score can be computed independently, our results can be interpreted as identifying a well-motivated special class of submodular functions which admit 1-adaptive algorithms.


• Oracle Queries: The design of test score algorithms is well-aligned with the body of work on maximizing submodular set functions using a small number of value oracle queries (Badanidiyuru et al. 2014, Balkanski and Singer 2018, Fahrbach et al. 2019). In fact, our replication test score based algorithms only query the function value for subsets comprising similar or identical items.

Although there is a promising literature pertaining to each of these three challenges, our test score based techniques represent the first attempt at addressing all of them. While many of the above papers propose algorithms for deterministic environments, recently there has been considerable focus on maximizing submodular functions in a stochastic setting (e.g., Hassidim and Singer 2017, Singla et al. 2016, Asadpour and Nazerzadeh 2016, Gotovos et al. 2015, Kempe et al. 2015, Asadpour et al. 2008). However, the methods presented in these works do not address any of the concerns mentioned earlier and, to a large extent, focus explicitly on settings where it is feasible to adaptively probe items of the ground set to uncover the realization of their random variable (X_i).

More generally, a powerful paradigm for solving stochastic optimization problems as defined in (1) is the technique of Sample Average Approximation (SAA) (Kleywegt et al. 2002, Shapiro and Nemirovski 2005, Swamy and Shmoys 2012). These methods are typically employed when the following conditions are applicable, see e.g. Kleywegt et al. (2002): (a) the function u(S) cannot be written in a closed form, (b) the value of the function f(S, x) can be evaluated for every given set S and vector x, and (c) the set F of feasible solutions is large. The fundamental principle underlying this technique is to generate samples (x^(1), ..., x^(T)) independently from the distribution P and use these to compute the set S* that is the optimal solution to

\[
\arg\max_{S \in \mathcal{F}} \; \frac{1}{T} \sum_{i=1}^{T} f(S, x^{(i)}).
\]
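For comparison, here is a minimal sketch of SAA on a toy instance of the cardinality-constrained problem (hypothetical distributions; brute-force search over F, which is exactly what becomes untenable when the feasible set is large):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n, k, T = 6, 2, 1_000                       # items, cardinality, number of samples
means = rng.uniform(0.5, 2.0, size=n)

def f(S, x):
    """Best-shot objective, used here purely as an illustration."""
    return max(x[i] for i in S)

# Draw samples x^(1), ..., x^(T) from P, then maximize the empirical average.
X = rng.exponential(means, size=(T, n))
best_S, best_val = None, -np.inf
for S in combinations(range(n), k):
    val = np.mean([f(S, x) for x in X])
    if val > best_val:
        best_S, best_val = S, val
print(best_S, round(best_val, 3))
```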

In addition to the same drawbacks regarding scalability mentioned above, there are other situations where it may be advantageous to use a test score algorithm over SAA methods: (a) when the function f is accessed via a value oracle, a large number of queries may be required to optimize the sample-approximate objective, and (b) even if oracle access is not a concern and the underlying function is rather simple (e.g., the best-shot function from Example 1), computing the optimal set S* may be NP-Hard (see Appendix H). Finally, by means of numerical simulations in Section 5.1, we highlight well-motivated scenarios where SAA methods may result in a higher error probability compared to test score algorithms under the same number of samples drawn.

The techniques in our work are inspired by the theory on set function sketching (Goemans et al. 2009, Balcan and Harvey 2011, Cohavi and Dobzinski 2017), and their application to optimization problems (Iyer and Bilmes 2013). While the Ω(1/√n) sketch of Goemans et al. (2009) for general submodular functions does apply to our setting, we are able to provide tighter bounds (the logarithmic bound of Lemma 6) for a special class of well-motivated submodular functions that cannot be captured by existing frameworks such as curvature (Sviridenko et al. 2017). Our approach is also similar in spirit to that of Iyer and Bilmes (2013), where upper and lower bounds in terms of so-called surrogate functions were used for submodular optimization; the novelty in the present work stems from our usage of test scores for function approximation, which are conceptually similar to juntas (Feldman and Vondrak 2014). We believe that the intuitive and natural interpretation of test score based algorithms makes them an appealing candidate for other problems as well.

1.3. Organization of the Paper

The paper is structured as follows. Section 2 provides a formal definition of the optimization problems studied in this paper and introduces examples of value functions. Section 3 contains our main result for the problem of maximizing a stochastic monotone submodular function subject to a cardinality constraint. Section 4 contains our main result for the problem of maximizing a stochastic monotone submodular welfare function subject to cardinality constraints. Section 5 presents a numerical evaluation of a test score algorithm for a simple illustrative example, a tight characterization of approximation guarantees achieved by mean and quantile test scores for the CES value function, and some discussion points. Finally, we conclude in Section 6. All proofs of theorems and additional discussions are provided in the Appendix.

2. Model and Problem Formulation

In this section, we introduce basic definitions of submodular functions, more formal definitions of the optimization problems that we study, and examples of various value functions.

2.1. Preliminaries: Submodular Functions

Given a ground set N = {1, 2, ..., n} of items or elements, with 2^N being the set of all possible subsets of N, a set function u : 2^N → R_+ is submodular if u(S ∪ T) + u(S ∩ T) ≤ u(S) + u(T) for all S, T ∈ 2^N. This condition is equivalent to saying that u satisfies the intuitive diminishing returns property: u(T ∪ {i}) − u(T) ≤ u(S ∪ {i}) − u(S) for all i ∈ N and S, T ∈ 2^N such that S ⊆ T. Furthermore, we say that u is monotone if u(S) ≤ u(T) for all S, T ∈ 2^N such that S ⊆ T.

Next, we adapt the definition of a stochastic submodular function, e.g., as used in Asadpour and Nazerzadeh (2016), as the expected value of a submodular value function. Let g : R^n → R_+ be a value function that maps n-dimensional vectors to non-negative reals—g is said to be a submodular value function if for any two vectors x, y belonging to its domain:

\[
g(x \vee y) + g(x \wedge y) \le g(x) + g(y). \tag{3}
\]

In the above definition, x ∨ y denotes the component-wise maximum and x ∧ y the component-wise minimum. Note that when the domain of g is the set of Boolean vectors (all elements taking either value 0 or 1), then (3) reduces to the definition of a submodular set function. Hence, submodular value functions are a strict generalization of submodular set functions. Finally, we say that the value function g is monotone if for any two vectors x and y satisfying y ≥ x (y dominates x component-wise), we have g(y) ≥ g(x).
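As a quick sanity check of definition (3)—a sketch of ours, not part of the paper—one can verify the lattice submodularity inequality numerically for the best-shot function:

```python
import numpy as np

rng = np.random.default_rng(4)

def g(x):
    """Best-shot value function g(x) = max_i x_i."""
    return float(np.max(x))

# Spot-check g(x v y) + g(x ^ y) <= g(x) + g(y) on random non-negative vectors.
for _ in range(10_000):
    x, y = rng.random(5), rng.random(5)
    lhs = g(np.maximum(x, y)) + g(np.minimum(x, y))
    assert lhs <= g(x) + g(y) + 1e-12
print("inequality (3) held on all sampled pairs")
```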

Consider the ground set N and, for every S ∈ 2^N, define x ↦ M_S(x) to be the mapping such that M_S(x)_i = x_i if i ∈ S and M_S(x)_i = φ otherwise. Here, φ is a minimal element which does not change the function value when an item of individual value φ is added. For example, for the mapping g(x) = max{x_1, x_2, ..., x_n}, we may define φ = 0. When it is clear from the context, we sometimes abuse notation by writing g(x) for a vector of dimension d < n, instead of g(x, z) where z is a vector of dimension n − d that has all elements equal to φ. Now, we are ready to define stochastic submodular functions. Suppose that each item i ∈ N is associated with a non-negative random variable X_i that is drawn independently from distribution P_i. We assume that each P_i is a cumulative distribution function, i.e., P_i(x) = Pr[X_i ≤ x]. Given a monotone submodular value function g, a set function u : 2^N → R_+ is said to be a stochastic monotone submodular function if for all S ∈ 2^N:

\[
u(S) = \mathbb{E}[g(M_S(X_1, X_2, \ldots, X_n))]. \tag{4}
\]

For example, if g is the max or best-shot function, then u(S) = E[max_{i∈S} X_i]. The following result, which we borrow from Lemma 3 in Asadpour and Nazerzadeh (2016), provides sufficient reasoning as to why it is accurate to interpret u as submodular.

Lemma 1. Suppose that g is a monotone submodular value function. Then, a set function u defined as in (4) is a monotone submodular set function.

2.2. Problem Definitions

In this work, we study the design of test score algorithms for two combinatorial optimization problems, namely: (a) maximizing a stochastic monotone submodular function subject to a cardinality constraint, and (b) maximizing a stochastic monotone submodular welfare function, defined as the sum of stochastic monotone submodular functions subject to cardinality constraints. We begin with the first problem. Recall the optimization problem presented in (1), and suppose that F = {S ⊆ N : |S| = k} for a given cardinality constraint 0 < k ≤ n, and let X = (X_1, ..., X_n) be a vector of random, independently and not necessarily identically distributed item performances such that for each i ∈ N, X_i ∼ P_i. By recasting problem (1) in terms of the notation developed in Section 2.1, we can now define the problem of maximizing a stochastic monotone submodular function subject to a cardinality constraint k as follows:²

\[
\arg\max_{S \in \mathcal{F}} \; u(S) := \mathbb{E}[f(S, X)] := \mathbb{E}[g(M_S(X))], \tag{5}
\]

where g is a monotone submodular value function. Additionally, we assume the function g to be symmetric, meaning that its value is invariant to permutations of its input arguments, i.e., for every x ∈ R^n, g(x) = g(π(x)) for any permutation π(x) of the elements x_1, x_2, ..., x_n. This is naturally motivated by scenarios where the group value of a set of items depends on the individual performance values rather than on the identity of the members who generate these values. For example, in the case of non-hierarchical team selection, it is reasonable to argue that two mutually exclusive teams S, T whose members yield identical performances on a given day also end up providing the same group value. Similarly, in content recommendation, the probability that a user clicks on at least one topic can be viewed as a function of the user's propensity to click on each individual topic. Finally, by seeking to optimize the expected function value in (5), we implicitly model a risk-neutral decision maker, as is typically the case in online platforms.

² We used the formulation u(S) = E[f(S, X)] in the introduction to maintain consistency with the literature on stochastic optimization, e.g., Kleywegt et al. (2002). For the rest of this paper, we will exclusively write u(S) = E[g(M_S(X))] for convenience and to delineate the interplay between the set S and the submodular value function g.

The stochastic submodular maximization problem specified in (5) is NP-Hard even when the value function g is symmetric (in fact, Goel et al. (2006) show this is true for g(x) = min{x_1, ..., x_n}), and hence, we focus on finding approximation algorithms. Formally, given α ≤ 1, an algorithm is said to provide an α-approximation guarantee for (5) if for any given instance of the problem with optimum solution set OPT, the solution S returned by the algorithm satisfies u(S) ≥ α·u(OPT). Although a variety of approximation algorithms have been proposed for the submodular maximization problem, in this work we focus on a special class of methods that we refer to as test score algorithms. Specifically, these are algorithms that take as input a vector of non-negative test scores a_1, a_2, ..., a_n, and use only these scores to determine a feasible solution S for the problem (5). As defined in (2), the value of each test score a_i can depend only on g, k, and P_i. Furthermore, we are particularly interested in proposing test score algorithms that simply select the k items with the highest test scores in (a_1, a_2, ..., a_n); such an approach is naturally appealing due to its intuitive interpretation. Clearly, the main challenge in this case is to design a suitable test score mapping rule that enables such a trivial algorithm to yield desirable guarantees.

Stochastic Submodular Welfare Maximization. Maximizing a stochastic submodular welfare function is a strict generalization of the problem of maximizing a stochastic monotone submodular function subject to a cardinality constraint as defined in (5). Here, we are given a ground set N = {1, ..., n}, and a collection of stochastic monotone submodular set functions u_j : 2^N → R_+ with corresponding submodular value functions g_j : R^n → R_+ for j ∈ M := {1, 2, ..., m}. The goal is to find disjoint subsets S_1, S_2, ..., S_m of the ground set of given cardinalities |S_1| = k_1, |S_2| = k_2, ..., |S_m| = k_m that maximize the welfare function defined as

\[
u(S_1, S_2, \ldots, S_m) = \sum_{j=1}^{m} u_j(S_j). \tag{6}
\]


We refer to M as the set of partitions. As for the previous problem, we consider symmetric, monotone, submodular value functions g_j for each partition j ∈ M, so that the stochastic submodular set functions can be represented as follows:

\[
u_j(S) = \mathbb{E}[g_j(M_S(X_{1,j}, X_{2,j}, \ldots, X_{n,j}))] \qquad \text{for all } j \in M.
\]

In the above expression, X_{i,j} denotes the individual performance of item i ∈ N with respect to partition j ∈ M. Each X_{i,j} is drawn independently from a marginal distribution P_{i,j} that is the cumulative distribution function P_{i,j}(x) = Pr[X_{i,j} ≤ x]. Our formulation allows for considerable heterogeneity, as items can have different realizations of their individual performances for different partitions. Submodular welfare maximization problems arise naturally in domains such as team formation, where decision makers are faced with the dual problem of selecting agents and assigning them to projects or teams. For example, this could model an online gaming platform seeking to choose a collection of teams to participate in a tournament or an organization partitioning its employees to focus on asymmetric tasks. In these situations, the objective function (6) captures the aggregate value generated by all of the teams.

Once again, we are interested in designing test score algorithms for stochastic submodular welfare maximization. Due to the generality of the problem, we define test score based approaches in a broad sense here and defer the specifics to Section 4. More formally, a test score algorithm for problem (6) is a procedure whose input comprises only vectors of test scores (a_{i,j})_{i∈N, j∈M}, where the elements of each test score vector a_{i,j} are a function of g_j, k_j, and P_{i,j}. Note that in this general formulation, each item i ∈ N and partition j ∈ M is associated with multiple test scores a_{i,j}.

2.3. Examples of Value Functions

Many value functions used in the literature to model production and other systems satisfy the conditions of being symmetric, monotone non-decreasing submodular value functions. In this section, we introduce and discuss several well known examples.

A common value function is defined to be an increasing function of the sum of individual values: g(x) = ḡ(∑_{i=1}^n x_i), where ḡ is a non-negative increasing function. In particular, this value function allows us to model production systems that exhibit a diminishing returns property when ḡ is concave. This value function appears frequently in optimization problems when modeling risk aversion and decreasing marginal preferences, for instance, in risk-averse capital budgeting under uncertainty, competitive facility location, and combinatorial auctions (Ahmed and Atamturk 2011). A popular example of such a function is the threshold or budget-additive function, i.e., g(x) = min{∑_{i=1}^n x_i, B} for some B > 0, which arises in a number of applications.

Another example is the best-shot value function, defined as the maximum individual value, g(x) = max{x_1, x_2, ..., x_n}. This value function allows us to model scenarios where one derives value only from the best individual option. For example, this arises in online crowdsourcing systems in which solutions to a problem are solicited by an open call to the online community, several candidate solutions are received, but eventually only the best submitted solution is used.

A natural generalization of the best-shot value function is the top-r value function, defined as the sum of the r highest individual values for a given parameter r ≥ 1, i.e., g(x) = x_{(1)} + x_{(2)} + ... + x_{(r)}, where x_{(i)} is the i-th largest element of the input vector x. This value function boils down to the best-shot value function for r = 1. It is of interest in a variety of applications such as information retrieval and recommender systems, where the goal is to identify a set of most relevant items, and was used in Kleinberg and Raghu (2015) to evaluate the efficiency of test score based algorithms for maximizing a stochastic monotone submodular function subject to a cardinality constraint.

A well known value function is the constant elasticity of substitution (CES) value function, defined by g(x) = (∑_{i=1}^n x_i^r)^{1/r} for a positive parameter r. This value function has been in common use to model production systems in economics and other areas (Fu et al. 2016, Dixit and Stiglitz 1977, Armington 1969, Solow 1956). The family of CES value functions accommodates different types of production by a suitable choice of the parameter r, including linear production for r = 1 and best-shot production in the limit as the value of the parameter r goes to infinity. The CES value function is a submodular value function for values of the parameter r ≥ 1. For r ≠ 1, the term 1/(1 − r) is referred to as the elasticity of substitution—it is the elasticity of two input values to a production with respect to the ratio of their marginal products.
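The two special cases mentioned above can be written out explicitly (a standard fact about CES aggregates):

```latex
g(\mathbf{x}) = \Big(\sum_{i=1}^{n} x_i^{\,r}\Big)^{1/r}, \qquad
g(\mathbf{x})\Big|_{r=1} = \sum_{i=1}^{n} x_i, \qquad
\lim_{r \to \infty} g(\mathbf{x}) = \max_{1 \le i \le n} x_i .
```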

Finally, we make note of the success probability value function, defined by g(x) = 1 − ∏_{i=1}^n (1 − p(x_i)), where p : R → [0, 1] is an increasing function that satisfies p(0) = 0. This value function is often used as a model of tasks for which input solutions are independent and either good or bad (success or failure), and it suffices to have at least one good solution for the task to be successfully solved, e.g., see Kleinberg and Oren (2011).
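For reference, the value functions of this section can be written down compactly; the following Python definitions are our own hedged sketches (parameter defaults are arbitrary illustrations):

```python
import numpy as np

def additive_concave(x, gbar=np.sqrt):
    """g(x) = gbar(sum_i x_i) for an increasing (here concave) gbar."""
    return gbar(np.sum(x))

def budget_additive(x, B=10.0):
    """Threshold function g(x) = min{sum_i x_i, B}."""
    return min(float(np.sum(x)), B)

def best_shot(x):
    """g(x) = max_i x_i."""
    return float(np.max(x))

def top_r(x, r=3):
    """g(x) = sum of the r largest entries of x."""
    return float(np.sort(x)[-r:].sum())

def ces(x, r=2.0):
    """g(x) = (sum_i x_i^r)^(1/r); submodular for r >= 1."""
    return float(np.sum(np.asarray(x, dtype=float) ** r) ** (1.0 / r))

def success_probability(x, p=lambda v: 1.0 - np.exp(-v)):
    """g(x) = 1 - prod_i (1 - p(x_i)) for increasing p with p(0) = 0."""
    return 1.0 - float(np.prod([1.0 - p(v) for v in x]))
```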

2.4. Computation, Implementation, and the Role of Value Oracles

We conclude this section with a discussion of some practical issues surrounding test score algorithms and function evaluation. Given that submodular set functions have representations that are exponential in size (2^n), a typical modeling concession is to assume access to a value oracle for function evaluation. Informally, a value oracle is a black-box that, when queried with any set S ∈ 2^N, returns the function value u(S) in constant time. Although value oracles are a theoretically convenient abstraction, function evaluation can be rather expensive in applications pertaining to online platforms. This problem is further compounded in the case of stochastic submodular functions when the underlying item performance distributions (P_1, P_2, ..., P_n) are unknown. Naturally, one would expect a non-zero query cost to be associated with evaluating g(x) even for a single realization x of the random vector X. Under these circumstances, there is a critical need for algorithms that achieve desirable guarantees using significantly fewer queries and that eschew traditional approaches (e.g., greedy) requiring polynomially many oracle calls.

To illustrate these challenges, consider the content recommendation application from Example 1 and suppose that both the distributions (P_i)_{i∈N} and the value function g are unknown. In order to (approximately) compute u(S) for any S ⊆ N, it is necessary to present the set S of topics repeatedly to a large number of users and average their responses (e.g., upvotes or click behavior). Clearly, a protracted experimentation phase brought about by too many oracle queries could lead to customer dissatisfaction or even a loss in revenue. Alternatively, in team hiring or online gaming, evaluating the function value for arbitrary subsets S ⊆ N may be prohibitively expensive, as it may not be possible to observe group performance before the team is even formed. The replication test score algorithm proposed in Section 3 addresses these issues by not only making fewer oracle calls but also allowing for easier implementation, since each evaluation of the function g only requires samples from a single item's (or agent's) performance distribution P_i.

A secondary issue concerns the noise in function evaluation or test score computation brought about by sampling the distributions (P_i)_{i∈N}. It may not be possible to precisely compute test scores a_i that represent the expected value of some function under distribution P_i—e.g., mean test scores where a_i = E_{X_i∼P_i}[X_i] or replication test scores as in (7). In applications, test scores are defined as sample estimators with values determined by the observed data, i.e., they utilize a sample mean instead of the population mean. In our analysis, we ignore the issue of estimation noise and assume oracle access that facilitates the precise computation of test scores that denote some expectation taken over (P_i)_{i∈N}. This assumption is justified provided that the estimators are unbiased and the test scores are estimated using a sufficient number of samples. We leave accounting for statistical estimation noise as an item for future research.

Finally, it is worth highlighting that the benefits of test score algorithms do not come without a price. Using a test score based approach severely limits what an algorithm can do, which in turn may affect the achievable quality of approximation. For instance, the aforementioned greedy algorithm is able to leverage its unrestricted access to a value oracle and achieve a (1 − 1/e)-approximation for (5) by carefully querying the function value for many different subsets S ∈ 2^N. Test score algorithms, however, do not have this luxury—instead, they rely on indirectly approximating the answers to value oracle queries using only limited information, namely parameters associated with individual items i ∈ N evaluated separately on the function g.


3. Submodular Function Maximization

In this section, we present our main result on the existence of test scores that guarantee a constant-factor approximation for maximizing a stochastic monotone submodular function subject to a cardinality constraint, for symmetric submodular value functions that satisfy an extended diminishing returns condition. We will show that this is achieved by a special type of test scores.

We begin by introducing some basic terminology required for our sufficient condition. Given a value function g : R^n_+ → R_+ and v ≥ 0, we say that v has a non-empty preimage under g if there exists at least one z ∈ R^n_+ such that g(z) = v.

Definition 1 (Extended Diminishing Returns). A symmetric submodular value function g : R^n_+ → R_+ is said to satisfy the extended diminishing returns property if for every v ≥ 0 that has a non-empty preimage under g, there exists z ∈ R^{n−1}_+ such that:

(a) g(z) = v, and

(b) for all y ∈ R^{n−1}_+ such that g(y) ≤ v, we have g(y, x) − g(y) ≥ g(z, x) − g(z) for all x ∈ R_+.

Informally, the condition states that given a value v such that the function evaluates to this number at one or more points in its domain, for at least one such point, say z, the marginal benefit of adding an element of value x to z cannot be larger than the marginal benefit of adding the same element to any other vector y whose performance is smaller than v. The extended diminishing returns condition holds for a wide range of functions. For example, the condition is satisfied by all value functions defined and discussed in Section 2.3, as is proved in Appendix A.
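As a concrete illustration (our verification; the general proofs are in Appendix A), consider the best-shot function g(x) = max_i x_i. The marginal gain of appending an element of value x is

```latex
g(\mathbf{z}, x) - g(\mathbf{z}) = \max\{x - g(\mathbf{z}),\, 0\},
```

which is non-increasing in g(z); hence any z with g(z) = v witnesses the property, since whenever g(y) ≤ v,

```latex
g(\mathbf{y}, x) - g(\mathbf{y}) = \max\{x - g(\mathbf{y}),\, 0\}
\;\ge\; \max\{x - g(\mathbf{z}),\, 0\} = g(\mathbf{z}, x) - g(\mathbf{z})
\qquad \text{for all } x \ge 0 .
```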

We refer to this property as extended diminishing returns as it is consistent with the spirit of 'decreasing marginal returns' as the function value grows. Indeed, as in the case of traditional submodular functions, adding a new element (x) provides a greater marginal benefit to a vector yielding a smaller performance (y) than to one providing a larger value (z). In other words, we have diminishing marginal returns as the value provided by a vector z grows. Consider, for example, a team application: a new member with some potential would be expected to make a less significant contribution to a high performing team than to a low performing one. Similarly, in content recommendation, the added benefit provided by a new topic would be felt more strongly by a user who derives limited value from the original assortment than by one who was highly satisfied to begin with. The underlying mechanism in both these examples is that a new member or topic would have a greater overlap in skills or content with a high performing group of items.

A subtle point is worth mentioning here. For any given v, if there exist multiple points in the domain at which the function g evaluates to v, then the extended diminishing returns property only guarantees the existence of a single vector z for which g(y, x) − g(y) ≥ g(z, x) − g(z) holds for all y, x such that g(y) ≤ v. Simply put, there may be other vectors which also evaluate to v but do not satisfy the above inequality.³ We remark that this is actually a weaker requirement than imposing that all such vectors satisfy condition (b) in Definition 1; this allows our results to be applicable to a broader set of functions. That being said, most of the value functions that we specify in Section 2.3, except for top-r (r > 1), satisfy a stronger version of extended diminishing returns where the condition g(y, x) − g(y) ≥ g(z, x) − g(z) holds for every two points z, y ∈ R_+^{n−1} such that g(y) ≤ g(z).

³ Suppose that g is the top-r function defined in Section 2.3 for r = 2, x = (1, 1), and v = 4. Consider vectors y_1 = (2, 2) and y_2 = (4) such that g(y_1) = g(y_2) = v = 4. It is not hard to deduce that for any 0 < z ≤ 1, g(x, z) − g(x) = 0 = g(y_1, z) − g(y_1) < g(y_2, z) − g(y_2) = z. That is, y_1 satisfies the conditions in Definition 1 but y_2 does not.

We next introduce a special type of test scores, which we refer to as replication test scores.

Definition 2 (Replication Test Scores). Given a symmetric submodular value function g and cardinality parameter k, for every item i ∈ N, the replication test score a_i is defined by

a_i = E[g(X_i^(1), X_i^(2), ..., X_i^(k), φ, ..., φ)]    (7)

where X_i^(1), X_i^(2), ..., X_i^(k) are independent and identically distributed random variables with distribution F_i.

The replication test score of an item can be interpreted as the expected performance of a virtual group of items that consists of k independent replicas of this item, hence the name replication scores. Note that a replication test score is defined for a given function g and cardinality parameter k; we omit this dependence from the notation a_i for simplicity.

In contrast to mean or quantile test scores that simply provide some measure of an item's performance, replication test scores capture both the item's individual merit as well as its contribution to a larger group. To understand this distinction, consider Example 1 where g(x) = max{x_1, x_2, ..., x_n} and p = 1/k. Clearly, the mean performance of stable type A items (a) is larger than the mean performance of polarizing topics of type B (b). However, the replication score of a type B item is (1 − (1 − p)^k)(b/p) ≥ (1 − 1/e)bk, which for large enough k can be larger than the replication score of a type A item, which still remains a. The larger replication score of type B topics captures the intuition that risky topics can often provide significant marginal benefits to an existing assortment.

Finally, in the case of content recommendation, one can employ a natural mechanism to estimate the replication scores even when the objective function g and the distributions (P_i)_{i∈N} are unknown. Namely, in order to compute the replication score for a topic of type A (or B), it suffices to present k items of this type to a large number of incoming users and compute the average response.
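To make this estimation procedure concrete, here is a minimal sketch of our own (not from the paper; the value function, item samplers, and all parameter values are illustrative assumptions) that estimates replication test scores by Monte Carlo simulation for a safe/risky setting like that of Example 1:

    import random

    def replication_score(g, sample_item, k, num_trials=2000):
        """Monte Carlo estimate of the replication test score in Eq. (7):
        the expected value of g applied to k i.i.d. copies of one item."""
        total = 0.0
        for _ in range(num_trials):
            replicas = [sample_item() for _ in range(k)]  # draws X_i^(1), ..., X_i^(k)
            total += g(replicas)
        return total / num_trials

    # Illustrative instance: best-shot value function with safe and risky items.
    g = max                   # best-shot: g(x) = max{x_1, ..., x_k}; padding by phi is implicit
    k, a, b = 10, 1.0, 0.9    # hypothetical parameters, chosen so that b < a
    p = 1.0 / k
    safe = lambda: a                                        # type A: value a w.p. 1
    risky = lambda: b / p if random.random() < p else 0.0   # type B: value b/p w.p. p

    print(replication_score(g, safe, k))    # roughly a
    print(replication_score(g, risky, k))   # roughly (1 - (1-p)^k) * (b/p)

Here the risky items have the lower mean (b < a) yet the much larger replication score, matching the discussion above.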

We now present the main result of this section.

Theorem 1. Suppose that the utility set function is the expected value of a symmetric, monotone

submodular value function that satisfies the extended diminishing returns property. Then, the greedy


selection of items in decreasing order of replication test scores yields a utility value that is at least (1 − 1/e)/(5 − 1/e) times the optimal value.
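As a minimal illustration of this greedy rule (a sketch reusing the hypothetical replication_score helper and item samplers from the previous code block), the algorithm reduces to sorting items by their replication test scores:

    def select_top_k(samplers, g, k):
        """Greedy selection of Theorem 1: keep the k items with the
        largest (estimated) replication test scores."""
        scores = {name: replication_score(g, s, k) for name, s in samplers.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    pool = {"safe_%d" % i: safe for i in range(10)}
    pool.update({"risky_%d" % i: risky for i in range(10)})
    print(select_top_k(pool, max, 10))  # selects the ten risky items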

In the remainder of this section, we prove Theorem 1. Along the way, we derive several results

that connect the underlying discrete optimization problem with approximating set functions, which

may be of independent interest.

The key mathematical concept that we use is a sketch of a set function, which is an approximation of a potentially complicated set function by simple polynomial-time computable lower and upper bound set functions, which we refer to as a minorant and a majorant sketch function, respectively.

Definition 3 (Sketch). A pair of set functions (v̲, v̄) is said to be a (p, q)-sketch of a set function u : 2^N → R_+ if the following condition holds:

p v̲(S) ≤ u(S) ≤ q v̄(S), for all S ⊆ N.    (8)

In particular, if (v, v) is a (p, q)-sketch, we refer to v as a strong sketch function.⁴

⁴ Our definition of a strong sketch is closely related to the following definition of a sketch used in the literature (e.g., see Cohavi and Dobzinski (2017)): a set function v is said to be an α-sketch of u if v(S) ≤ u(S) ≤ α v(S) for all S ⊆ N. Indeed, if v is a (p, q)-strong sketch of u, then ṽ(S) := p v(S) is a (q/p)-sketch of u.

Although the above definition is quite general and subsumes many trivial sketches (e.g., v̲ = 0, v̄ = ∞), practically useful sketches would satisfy a few fundamental properties, such as: (a) when given a set function whose description may be exponential in n, v̲ and v̄ must be polynomially expressible, and (b) v̲ and v̄ must be sufficiently close to each other at points of interest for the sketch to be meaningful. Our first result provides sufficient conditions on the sketch functions to obtain an approximation algorithm for maximizing a monotone submodular set function subject to a cardinality constraint.

Lemma 2. Suppose that (a) v̲ and v̄ are minorant and majorant set functions that are a (p, q)-sketch of a submodular set function u : 2^N → R_+, and (b) there exists S* ∈ arg max_{S : |S|=k} v̲(S) that satisfies v̄(S) ≤ v̲(S*) for every S ⊆ N that has cardinality k and is completely disjoint from S*, i.e., S ∩ S* = ∅. Then, the following relation holds:

u(S*) ≥ (p / (q + p)) u(OPT),

where OPT denotes an optimum set of cardinality k.

The proof of Lemma 2 is provided in Appendix B. The proof follows from basic properties of submodular set functions and the conditions of the lemma.

The result in Lemma 2 tells us that if we can find a minorant set function v̲ and a majorant set function v̄ that are a (p, q)-sketch for a submodular set function u and that satisfy the conditions


of the lemma, then any solution of the problem of maximizing the set function v̲ subject to a cardinality constraint is a p/(p + q)-approximation for the problem of maximizing the submodular set function u subject to the same cardinality constraint. What remains to be done is to find such minorant and majorant set functions and, moreover, to show that for every S, the value of these functions can be computed in polynomial time by using only the test scores of items in S.

We define a minorant set function v̲ and a majorant set function v̄, which for any given test scores a_1, a_2, ..., a_n are defined as, for every S ⊆ N,

v̲(S) = min{a_i | i ∈ S} and v̄(S) = max{a_i | i ∈ S}.    (9)

For the minorant set function v̲ defined in (9), the problem of maximizing v̲(S) over S ⊆ N subject to the cardinality constraint |S| = k boils down to selecting a set of k items with the largest test scores. Obviously, the set functions v̲ and v̄ defined in (9) satisfy condition (b) in Lemma 2.
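A quick sketch of these two set functions and the selection rule they induce (hypothetical helper names; `scores` maps items to their test scores):

    def v_lower(scores, S):
        # minorant in (9): the smallest test score in S
        return min(scores[i] for i in S)

    def v_upper(scores, S):
        # majorant in (9): the largest test score in S
        return max(scores[i] for i in S)

    def argmax_v_lower(scores, k):
        # maximizing the minorant over |S| = k is exactly "take the k largest scores"
        return set(sorted(scores, key=scores.get, reverse=True)[:k])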

We only need to show that there exist test scores a_1, a_2, ..., a_n such that (v̲, v̄) is a (p, q)-sketch of the set function u. We say that a_1, a_2, ..., a_n are (p, q)-good test scores if (v̲, v̄) is a (p, q)-sketch of the set function u. If p/q is a constant, we refer to a_1, a_2, ..., a_n as good test scores. In this case, by Lemma 2, selecting a set of k items with the largest test scores guarantees a constant-factor approximation for the problem of maximizing the set function u(S) subject to the cardinality constraint |S| = k. More generally, we have the following corollary.

Corollary 1. Suppose that test scores a_1, a_2, ..., a_n are (p, q)-good. Then, greedy selection of items in decreasing order of these test scores yields a utility value that is at least p/(p + q) times the optimum value. In particular, if p/q is a constant, then the greedy selection guarantees a constant-factor approximation for maximizing the submodular set function u(S) subject to the cardinality constraint |S| = k.

We next need to address the question of whether, for a given stochastic monotone submodular function, there exist good test scores. If good test scores exist, it is possible that there are different definitions of test scores that are good test scores. The following lemma shows that replication test scores, defined in Definition 2, are good test scores whenever good test scores exist.

Lemma 3. Suppose that a utility function has (p, q)-good test scores. Then, replication scores

are (p/q, q/p)-good test scores.

The proof of Lemma 3 is provided in Appendix C. The lemma tells us that to check whether a utility function has good test scores, it suffices to check whether replication test scores are good test scores for this utility function. If replication test scores are not good test scores for a given utility function, then there exist no good test scores for this utility function.

In the next lemma, we show that extended diminishing returns, which we introduced in Defini-

tion 1, is a sufficient condition for replication test scores to be good test scores.


Lemma 4. Suppose that g : R_+^n → R_+ is a symmetric, monotone submodular value function that satisfies the extended diminishing returns property. Then, replication test scores are (1 − 1/e, 4)-good test scores, and consequently are good test scores.

The proof of Lemma 4 is provided in Appendix D. Here we briefly discuss some of the key steps of the proof. First, for the lower bound, we need to show that for every S ⊆ N: u(S) ≥ (1 − 1/e)v̲(S) = (1 − 1/e) min{a_i | i ∈ S}, where a_i is the replication test score of item i. Suppose that S = {1, 2, ..., k} and, without loss of generality, a_1 = min{a_i | i ∈ S}. Then, we show by induction that for every j ∈ {1, ..., k},

u({1, 2, ..., j}) ≥ (1 − 1/k) u({1, 2, ..., j − 1}) + (1/k) a_1.    (10)

The proof involves showing that the marginal contribution of adding item j to the set {1, 2, ..., j − 1} is closely tied to the marginal contribution of adding item j to a set comprising k − 1 other (independently drawn) copies of item j. The latter quantity is at most a_j/k, which by definition is greater than or equal to a_1/k. The exact factor of 1 − 1/e comes from applying the above inequality in a cascading fashion from u({1, 2, ..., k}) down to u({1}).

The proof of the upper bound is somewhat more intricate. The first step involves carefully constructing a vector z ∈ R_+^{n−1} such that g(z) is larger than u(S) by an appropriate constant factor (say c). Imagine that S* represents some set of n − 1 fictitious items such that u(S*) = g(z). By leveraging monotonicity and submodularity, we have that u(S) ≤ u(S*) + Σ_{i∈S} (u(S* ∪ {i}) − u(S*)). Let x represent a vector comprising k − 1 independent copies of random variables drawn from distribution F_i. Now, as per the extended diminishing returns condition, for any realization of x such that g(x) ≤ g(z), it must be true (assuming that the careful construction of z leverages Definition 1) that:

u(S* ∪ {i}) − u(S*) = g(z, x_i) − g(z) ≤ g(x, x_i) − g(x), given that g(x) ≤ g(z).

Moreover, one can apply Markov's inequality to show that g(z) ≥ g(x) holds with probability at least 1 − 1/c. Taking the expectation of x conditional upon g(z) ≥ g(x) gives us the desired upper bound.

The statement of Theorem 1 follows from Corollary 1 and Lemma 4.

4. Submodular Welfare Maximization

In this section we present our main result for the stochastic monotone submodular welfare maximization problem. Here, the goal is to find disjoint S_1, S_2, ..., S_m ⊆ N satisfying cardinality constraints |S_j| = k_j for all j ∈ {1, 2, ..., m} that maximize the welfare function u(S_1, S_2, ..., S_m) = Σ_{j=1}^m u_j(S_j).


Theorem 2. Given an instance of the submodular welfare maximization problem such that the utility functions satisfy the extended diminishing returns property, and the maximum cardinality constraint (i.e., max{k_1, k_2, ..., k_m}) is k, there exists a test score based algorithm (Algorithm 1) that achieves a welfare value of at least 1/(24(log(k) + 1)) times the optimum value.

We briefly comment on the efficacy of test score algorithms for the submodular welfare maximization problem. Unlike the constant-factor approximation guarantee obtained in Theorem 1, test score algorithms only yield a logarithmic approximation to the optimum for this more general problem. Although constant-factor approximation algorithms are known for the submodular welfare maximization problem (Calinescu et al. 2011), these approaches rely on linear programming and other complex techniques and hence may not be scalable or amenable to distributed implementation. On the other hand, we focus on an algorithm that is easy to implement in practice but relies on a more restrictive computational model, leading to a worse approximation. Finally, it is worth noting that in many actual settings, the value of the cardinality constraint k tends to be rather small in comparison to n; e.g., in content recommendation, it is typical to display 25-50 topics per page. In such cases, the loss in approximation due to the logarithmic factor would not be significant.

In the remainder of this section, we provide a proof of Theorem 2. We will present an algorithm

that uses replication test scores, in order to achieve the logarithmic guarantee. The proof is based

on using strong sketches of set functions.

We follow the same general framework as for the submodular function maximization problem,

presented in Section 3, which in this case amounts to identifying a strong sketch function for each

utility set function, defined by using replication test scores, and then using a greedy algorithm for

welfare maximization that carefully leverages these replication test scores to achieve the desired

approximation guarantee. The following lemma establishes a connection between the submodular

welfare maximization problem and strong sketches.

Lemma 5. Consider an instance of the submodular welfare maximization problem with utility functions u_1, u_2, ..., u_m and parameters of the cardinality constraints k_1, k_2, ..., k_m. Let OPT = (OPT_1, OPT_2, ..., OPT_m) denote an optimum partition of items. Suppose that for each j ∈ M, (v_j, v_j) is a (p, q)-sketch of u_j, and that S_1, S_2, ..., S_m is an α-approximation to the welfare maximization problem with utility functions v_1, v_2, ..., v_m and the same cardinality constraints. Then,

Σ_{j=1}^m u_j(S_j) ≥ α (p/q) u(OPT) = α (p/q) Σ_{j=1}^m u_j(OPT_j).


The proof of Lemma 5 is provided in Appendix E.

We next define set functions that we will show to be strong sketches for utility functions of the welfare maximization problem that satisfy the extended diminishing returns property. Fix an arbitrary set S ⊆ N such that |S| = k and j ∈ M. Let a_{i,j}^r be the replication score of item i for value function g_j and cardinality parameter r, i.e.,

a_{i,j}^r = E[g_j(X_i^(1), X_i^(2), ..., X_i^(r), φ, ..., φ)].

Let π(S, j) = (π_1(S, j), ..., π_k(S, j)) be a permutation of items in S defined as follows:

π_1(S, j) = arg max_{i ∈ S} a_{i,j}^1
π_2(S, j) = arg max_{i ∈ S \ {π_1(S,j)}} a_{i,j}^2
...
π_k(S, j) = arg max_{i ∈ S \ {π_1(S,j), ..., π_{k−1}(S,j)}} a_{i,j}^k.    (11)

We define a set function v_j : 2^N → R_+ for every set S ⊆ N of cardinality k as follows:

v_j(S) = a_{π_1(S,j),j}^1 + (1/2) a_{π_2(S,j),j}^2 + ··· + (1/k) a_{π_k(S,j),j}^k.    (12)

The definition of the set function v_j in (12) can be interpreted as defining the value v_j(S), for every given set S, to be additive with coefficients associated with items corresponding to their virtual marginal values in a greedy ordering of items with respect to these virtual marginal values.
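A small sketch of this computation (our own illustration; `rep_score(i, r)` is a hypothetical accessor for the replication score a_{i,j}^r of item i with cardinality parameter r, for a fixed partition j):

    def sketch_value(S, rep_score):
        """Surrogate sketch v_j(S) of Eq. (12): repeatedly peel off the item
        with the largest replication score at rank r, weighted by 1/r."""
        remaining, value = set(S), 0.0
        for r in range(1, len(S) + 1):
            best = max(remaining, key=lambda i: rep_score(i, r))  # pi_r(S, j)
            value += rep_score(best, r) / r
            remaining.remove(best)
        return value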

Given a partition of items into disjoint sets S_1, S_2, ..., S_m, we define a welfare function v(S_1, S_2, ..., S_m) = Σ_{j=1}^m v_j(S_j). We next show that the set functions defined in (12) are strong sketch functions.

Lemma 6. Suppose that a set function u_j is defined as the expected value of a symmetric, monotone submodular value function that satisfies the extended diminishing returns condition. Then, the set function v_j given by (12) is a (1/(2(log(k) + 1)), 6)-strong sketch of u_j.

The proof of Lemma 6 is provided in Appendix F.

By Lemma 5 and Lemma 6, for any stochastic monotone submodular welfare maximization problem with utility functions satisfying the extended diminishing returns condition, any α-approximate solution to the surrogate welfare maximization problem, by which we refer to the submodular welfare maximization problem with the welfare function v(S_1, S_2, ..., S_m) subject to the same cardinality constraints as in the original welfare maximization problem, is a cα/(log(k) + 1)-approximate solution to the original welfare maximization problem, where c is a positive constant. It remains now to show that the surrogate welfare maximization problem admits an α-approximate solution. We next show that a natural greedy algorithm applied to the surrogate welfare maximization problem guarantees a 1/2-approximation for this problem.


ALGORITHM 1: Greedy Algorithm for the Submodular Welfare Maximization Problem

Initialize assignment S_1 = S_2 = ··· = S_m = ∅, A = {1, 2, ..., n}, P = {1, 2, ..., m}
/* S_j and A denote the set of items assigned to partition j and the set of unassigned items */
while |A| > 0 and |P| > 0 do
    (i*, j*) = arg max_{(i,j) ∈ A×P} a_{i,j}^{|S_j|+1} / (|S_j| + 1)    /* with random tie break */
    S_{j*} ← S_{j*} ∪ {i*} and A ← A \ {i*}    /* assign item i* to partition j* */
    if |S_{j*}| ≥ k_{j*} then
        P ← P \ {j*}    /* remove partition j* from the list */
    end
end

Consider a natural greedy algorithm for the surrogate welfare maximization problem that works for the case of one or more partitions. Given the replication test scores for all items and all partitions, in each step the algorithm adds an unassigned item i to a partition j so as to maximize a_{i,j}^{r_j + 1}/(r_j + 1), where r_j is the number of elements assigned to partition j in previous steps. That is, in each iteration, an assignment of an item to a partition is made that yields the largest marginal increment of the surrogate welfare function. The algorithm is more precisely defined in Algorithm 1.
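For concreteness, a direct transcription of Algorithm 1 (a sketch under the assumption that a `rep_score(i, j, r)` accessor for a_{i,j}^r is available):

    def greedy_welfare(n, caps, rep_score):
        """Algorithm 1: repeatedly assign the (item, partition) pair with the
        largest marginal surrogate gain a^{|S_j|+1}_{i,j} / (|S_j| + 1)."""
        S = {j: [] for j in caps}              # current assignment per partition
        A = set(range(n))                      # unassigned items
        P = {j for j in caps if caps[j] > 0}   # partitions with spare capacity
        while A and P:
            i_star, j_star = max(
                ((i, j) for i in A for j in P),
                key=lambda ij: rep_score(ij[0], ij[1], len(S[ij[1]]) + 1)
                               / (len(S[ij[1]]) + 1),
            )
            S[j_star].append(i_star)
            A.remove(i_star)
            if len(S[j_star]) >= caps[j_star]:
                P.remove(j_star)               # partition j* is full
        return S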

In the following lemma, we show that the greedy algorithm guarantees a 1/2-approximation for the surrogate welfare maximization problem.

Lemma 7. The greedy algorithm defined by Algorithm 1 outputs a solution that is a 1/2-approximation for the submodular welfare maximization problem of maximizing v(S_1, S_2, ..., S_m) over partitions of items (S_1, S_2, ..., S_m) that satisfy the cardinality constraints.

The proof of Lemma 7 can be found in Appendix G. The proof is similar in spirit to that of the 1/2-approximate greedy algorithm for submodular welfare maximization proposed by Lehmann et al. (2006). Unfortunately, one cannot directly utilize the arguments in that paper since the sketch function that we seek to optimize, v_j(S_j), may not be submodular. Instead, we present a novel monotonicity argument and leverage it to provide the following upper and lower bounds: v_j(S_j) ≥ v_j(S_j \ {π_r(S_j, j)}) ≥ v_j(S_j) − a_{π_r(S_j,j),j}^r / r, for all S_j ⊆ N and 1 ≤ r ≤ k_j. Finally, we apply these bounds in a cascading manner to show the desired 1/2-approximation factor claimed in Lemma 7.

5. Discussion and Additional Results

In this section we first illustrate the use of test scores and discuss numerical results for the simple example introduced in Section 1. We then discuss the performance of simple test scores, namely mean and quantile scores, and characterize their performance for the constant elasticity of substitution


value function. Finally, we discuss why for the stochastic monotone submodular welfare maxi-

mization problem we have to use different sketch functions than those we used for the stochastic

monotone submodular function maximization problem.

5.1. Numerical Results for a Simple Illustrative Example

We consider the example of two types of items that we introduced in Section 1. Recall that in this example the ground set of items is partitioned into two sets A and B, with set A comprising safe items, each of whose individual performance is of value a with probability 1, and set B comprising risky items, each of whose individual performance is of value b/p with probability p, and of value 0 otherwise. Here a, b, and p are parameters such that a, b > 0 and p ∈ (0, 1]. We assume that b ≥ a and |A|, |B| ≥ k. The value function is assumed to be the best-shot value function.

We say that a set S of items of cardinality k is of type r if it contains exactly r risky items, for r = 0, 1, ..., k. For each r ∈ {0, 1, ..., k}, let S_r denote an arbitrary type-r set. The realized value of set S_r is b/p if at least one risky item in S_r achieves value b/p, and is equal to a otherwise. Hence, we have

u(S_r) = a(1 − p)^r + (b/p)(1 − (1 − p)^r).

Notice that the value of u(S_r) monotonically increases in r; hence it is optimal to select a set that comprises k risky items, i.e., a set of type k.
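As a quick numerical sanity check (a sketch; the parameter values below are arbitrary choices consistent with the assumptions a, b > 0, b ≥ a, and p ∈ (0, 1]), one can evaluate the closed form of u(S_r) and confirm that it increases in r:

    def u_type(r, a, b, p):
        """Closed-form expected best-shot value of a type-r set (r risky items)."""
        return a * (1 - p) ** r + (b / p) * (1 - (1 - p) ** r)

    k, a, b, p = 5, 1.0, 2.0, 0.05
    values = [u_type(r, a, b, p) for r in range(k + 1)]
    assert all(x <= y for x, y in zip(values, values[1:]))  # monotone in r
    print(values)  # the all-risky set (r = k) attains the maximum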

We consider sample-average replication test scores, which for a given number of samples per item replica T ≥ 1 are defined as

a_i = (1/T) Σ_{t=1}^T max{X_i^(t,1), X_i^(t,2), ..., X_i^(t,k)}

where the X_i^(t,j) are independent samples over i, t, and j, with X_i^(t,j) sampled from distribution P_i.
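A minimal sketch of this estimator (assuming a `sample()` function that draws one realization of the item's performance X_i):

    def sample_average_replication_score(sample, k, T):
        """Sample-average replication test score: the average over T trials of
        the best-shot value of k independent replicas of the same item."""
        return sum(max(sample() for _ in range(k)) for _ in range(T)) / T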

The output of the test score algorithm consists of a set of k items with highest sample-average

replication test scores. The output results in an error if, and only if, it contains at least one safe

item. We evaluate the probability of error of the algorithm by running the test score algorithm for

a number of repeated experiments.

In Figure 1, we show the probability of error versus the number of samples per item, for different

values of parameters k and p. Notice that the number of samples per item is equal to Tk where T

is the number of samples per item replica. We observe that (a) the probability of error decreases

with the number of samples per item, (b) the probability of error is larger for larger set size,

and (c) the number of samples per item required to achieve a fixed value of probability of error

increases with the risk of item values, i.e. for smaller values of parameter p. In Figure 2, we show

the probability of error versus the value of parameter p, for different values of parameters k and T .


Figure 1: Probability of error of the test score algorithm versus the number of samples per item for (left) k = 5 and (right) k = 10, in each case for different values of parameter p = 0.025, 0.05, and 0.1. Other parameters are set as |A| = |B| = 10, a = 1, and b = 2. The results are for 1000 repeated experiments.

Figure 2: Probability of error of the test score algorithm versus the value of parameter p for (left) k = 5 and (right) k = 10, in each case for different numbers of samples per item replica T = 5, 10, and 20. Other parameters are set as given in the caption of Figure 1.

This further illustrates that a larger number of samples is needed to achieve a given probability of error the riskier the items are, i.e., the smaller the value of parameter p. In fact, one can show that O((k/p^2) log(n/δ)) samples per item suffice to guarantee that the probability of error is at most δ; we provide details in the appendix.

We further consider a sample averaging method that amounts to enumerating feasible sets of items, estimating the value of u(S) for each feasible set S, and selecting a set with the largest estimated value. The value of u(S) is estimated by the estimator defined as

û(S) = (1/T) Σ_{t=1}^T max{X_i^(t) | i ∈ S}

where the X_i^(t) are independent samples over i and t, with X_i^(t) sampled from distribution P_i.
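For comparison, a sketch of this sample averaging approach (our own illustration; `samplers` is a hypothetical list of per-item sampling functions). The enumeration over all k-subsets is what makes the method expensive:

    from itertools import combinations

    def saa_best_set(samplers, k, T):
        """Sample average approximation: estimate u(S) for every k-subset by
        averaging T best-shot realizations, then return the best subset."""
        def u_hat(S):
            return sum(max(samplers[i]() for i in S) for _ in range(T)) / T
        return max(combinations(range(len(samplers)), k), key=u_hat)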

In Figure 3 we show the probability of error versus the number of samples per item for the test

score algorithm and the sample averaging approach (SAA). We observe that the probability of


Figure 3: Probability of error versus the number of samples per item for the SAA and test score algorithms, for (left) p = 0.1, (middle) p = 0.05, and (right) p = 0.025. The setting of other parameters is as in Figure 1 for k = 5.

error is larger for the SAA method. Intuitively, this happens because the SAA method amounts to a comparison of all possible sets of items of different types, while the test score method with replication test scores amounts to a comparison of sets that consist of either all safe or all risky items. The SAA method is computationally expensive as it requires enumeration of (n choose k) sets of items, which is prohibitive in all cases but for small values of parameter k. For the example under consideration, the number of samples per item needed to guarantee a given probability of error can be analytically characterized; we show this in the appendix. The sufficient number of samples per item scales in the same way as for the test score algorithm for a fixed value of k and asymptotically small values of parameter p, but for a fixed value of p it increases exponentially in parameter k.

In summary, our numerical results demonstrate the efficiency of the test score algorithm for

different values of parameters and in comparison with the sample averaging approach.

5.2. Mean and Quantile Test Scores

As we already mentioned, the mean test scores are defined as the expected values a_i = E[X_i]. The quantile test scores are defined as a_i = E[X_i | P_i(X_i) ≥ θ], for a parameter θ ∈ [0, 1]. For the value of parameter θ = 0, the quantile test score corresponds to the mean test score. In general, the quantile test score is the expected individual performance of an item conditional on it being larger than a threshold value.

Neither mean test scores nor quantile test scores can guarantee a constant-factor approximation for the submodular function maximization problem. We demonstrate this by two simple examples that convey intuitive explanations of why these test scores can fail to provide the desired guarantee. We then present tight approximation bounds for the CES utility functions.

Example 1 (mean test scores): Suppose that the utility is according to the best-shot function and that the selection is greedy using mean test scores. Suppose that there are two types of items: (a) deterministic performance items, each of whose individual performance is of value 1 with probability 1, and (b) random performance items whose individual performances are independent, with expected value strictly smaller than 1 and a strictly positive probability of being larger than 1. Then, the algorithm will select all items to be those with deterministic performance. This is clearly suboptimal under the best-shot function where, having selected an item with deterministic performance, the only way to increase the performance of a set of items with some probability is to select an item with random performance. Such an instance can be chosen such that the algorithm yields a utility that is only a factor O(1/k) of the optimum value.
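To see the O(1/k) gap concretely, here is a small numeric sketch (the instance is our own illustrative choice, not from the paper): deterministic items of value 1 versus risky items worth k with probability (1 − ε)/k, so the risky items' mean is 1 − ε < 1 and greedy selection by mean test scores picks only deterministic items:

    k, eps = 20, 0.01
    p = (1 - eps) / k            # risky item: value k w.p. p, so mean = 1 - eps < 1
    mean_greedy_utility = 1.0    # best shot of k deterministic items is always 1
    risky_utility = k * (1 - (1 - p) ** k)      # expected best shot of k risky items
    print(mean_greedy_utility / risky_utility)  # roughly 1 / ((1 - 1/e) k), i.e., O(1/k)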

Example 2 (quantile test scores): Suppose that the utility function is the sum of individual performances and consider greedy selection with respect to quantile test scores with threshold parameters θ_i = 1 − 1/k. Suppose there are two types of items: (a) deterministic performance items, each of whose individual performance is of value 1 with probability 1, and (b) random performance items whose individual performances are independent, of value a > 1 with probability p > 1/k, and otherwise equal to zero. For random performance items, the mean test score is of value ap and the quantile test score is of value a. The algorithm will choose all items to be random performance items, which yields a utility of value kap. On the other hand, choosing items that output deterministic performance yields a utility of value k. Since a and p can be chosen to be arbitrarily near the values 1 and 1/k, respectively, we observe that the algorithm yields a utility that is O(1/k) of the optimum value.

We next present a tight approximation bound for the CES utility function with parameter r ≥ 1. Recall that the CES production function provides an interpolation between two extreme cases: a linear function (for r = 1) and the best-shot function (in the limit as r goes to infinity). Intuitively, we would expect that greedy selection with respect to mean test scores would perform well for small values of parameter r, but that the approximation would get worse with increasing parameter r. The following proposition makes this intuition precise.

Proposition 1 (mean test scores). Suppose that the utility function u is according to the CES production function with parameter r ≥ 1. For a given cardinality parameter k ≥ 1, let M be a set of k items in N with the highest mean test scores. Then, we have

u(M) ≥ (1 / k^{1 − 1/r}) u(OPT).

Moreover, this bound is tight.

The proof of Proposition 1 is provided in Appendix I. The proposition shows how the approxi-

mation factor decreases with the value of parameter r. In the limit of asymptotically large r, the

approximation factor goes to 1/k. This coincides with the approximation factor obtained for the

best-shot function in Kleinberg and Raghu (2015).

Intuitively, we would expect that quantile test scores would yield a good approximation guarantee

for the CES utility function with large enough parameter r. This is because we know that for the


best-shot utility function, the quantile test scores can guarantee a constant-factor approximation,

which was established in Kleinberg and Raghu (2015). The following proposition makes this intuition precise.

Proposition 2 (quantile test scores). Suppose that the utility is according to the CES production function with parameter r and that the selection is greedy using quantile test scores with θ = 1 − c/k and c > 0. Then, we have:

(a) if r = o(log(k)) and r > 1, the quantile test scores cannot guarantee a constant-factor approximation for any value of parameter c > 0;

(b) if r = Ω(log(k)), the quantile test scores with c = 1 guarantee a constant-factor approximation.

The proof of Proposition 2 is provided in Appendix J. The proposition establishes that quantile

test scores can guarantee a constant-factor approximation if, and only if, the parameter r is larger

than a threshold whose value is Θ(log(k)).

5.3. Sketch Functions used for the Welfare Maximization Problem

In Section 4 we established an approximation guarantee for the stochastic monotone submodular welfare maximization problem using the concept of strong sketches of set functions. This is in contrast to Section 3, where we used non-strong sketches for the submodular function maximization problem. One may wonder whether we could have used the theory of good test scores developed for submodular function maximization for the more general problem of submodular welfare maximization. Specifically, given an instance, one may have used the characterization in Definition 3 to maximize either v̲(S_1, S_2, ..., S_m) = Σ_{j=1}^m v̲_j(S_j) or v̄(S_1, S_2, ..., S_m) = Σ_{j=1}^m v̄_j(S_j), with v̲_j and v̄_j as defined in (9), over all feasible assignments. However, as we show next, such approaches can lead to highly sub-optimal assignments even for simple instances.

Example 1: Suppose we use an algorithm for maximizing the welfare function v̲(S_1, S_2, ..., S_m) subject to cardinality constraints.

Consider a problem instance with n = r^2 items and m = r partitions, with each partition having a cardinality constraint k_j = r for all j. All items are assumed to exhibit deterministic performance: r items (referred to as heavy items) have performance of value 1, i.e., X_i = 1 with probability 1, while the remaining items have performance of zero value. Assume that the value functions are best-shot functions g_j(x) = max{x_i | i ∈ S} for each partition j.

The optimum solution for the given problem instance is when each of the heavy items is assigned to a different partition, leading to a welfare of value r. On the contrary, the algorithm assigns all heavy items to the same partition, which yields a welfare of value 1. Hence, the algorithm achieves a welfare that is a 1/√n factor of the optimum, which can be made arbitrarily small by choosing a large enough number of items n.


Example 2: Suppose now that we use an algorithm for maximizing the welfare function v̄(S_1, S_2, ..., S_m) subject to cardinality constraints.

Consider a problem instance with n = 2r items and m = r + 1 partitions, where partition 1 has a cardinality constraint k_1 = r, and each partition 1 < j ≤ m has k_j = 1. All items are assumed to have deterministic performance once again: one heavy item with performance of value √r, r − 1 medium items with performance of value 1, and, finally, the remaining items with zero-valued performance. Assume that the value functions are g_1(x) = Σ_{i=1}^n x_i and g_j(x) = (1/√r) max{x_i | i = 1, 2, ..., n} for partitions 1 < j ≤ m.

The optimum solution assigns all items to partition 1, which yields a welfare of value r + √r − 1, whereas the algorithm assigns the heavy item to partition 1 and spreads the medium items across the remaining partitions, which yields a welfare of value less than 2√r. Hence, the algorithm achieves a welfare that is less than a 2√2/√n factor of the optimum welfare, which can be made arbitrarily small by taking a large enough number of items n.

6. Conclusion

In this work, we presented a new algorithmic approach for the canonical problem of (stochas-

tic) submodular maximization known as test score algorithms. These algorithms are particularly

appealing due to their simplicity and natural interpretation as their decisions are contingent only

on individual item scores that are computed based on the distribution that captures the uncer-

tainty in the respective item’s performance. Although test score based methods have been studied

in an ad-hoc manner in previous literature (Kleinberg and Raghu 2015), our work presents the

first systematic framework for solving a broad class of stochastic combinatorial optimization prob-

lems by approximating complex set functions using simpler test score based sketch functions. By

leveraging this framework, we show that it is possible to obtain good approximations under a

natural (extended) diminishing returns property, namely: (i) a constant factor approximation for

the problem of maximizing a stochastic submodular function subject to a cardinality constraint,

and (ii) a logarithmic-approximation guarantee for the more general stochastic submodular wel-

fare maximization problem. It is worth noting that since test score algorithms represent a more

restrictive computational model, the guarantees obtained in this paper are not as good as those of

the best known algorithms for both these problems. However, test score based approaches provide

three key advantages over more traditional algorithms that make them highly desirable in practical

situations relating to online platforms:

• Scalability : The test score of an item depends only on its own performance distribution.

Therefore, when new items are added or existing items are removed from the ground set, this

does not alter the scores of any other items. Since our algorithm selects items with the highest

test scores, its output would only require simple swaps when the ground set changes.


• Distributed Implementation: Test score algorithms can be easily parallelized as the test score

of an item can be computed independently of the performance distribution of other items.

Moreover, the final algorithm itself involves a trivial greedy selection and does not require any

complex communication between parallel machines.

• Fewer Oracle Calls: Test score algorithms only query the value of the function E[g(x)] once

per item—n oracle calls in total—which is an order of magnitude smaller than the number

required by traditional approaches. Moreover, these oracle calls are simple in that they do not

require drawing samples from the distributions of multiple items, which may be expensive.

Future work may consider lower bounds for test score-based algorithms for different sub-classes

of monotone stochastic submodular set functions. In particular, it would be of interest to consider

instances of set functions that do not belong to the class of set functions identified in this paper.

It is also of interest to consider tightness (inapproximability) of approximation factors. Finally, it

would also be of interest to study approximation guarantees when using statistical estimators for

test scores, and not expected values as in this paper.

References

Ahmed S, Atamturk A (2011) Maximizing a class of submodular utility functions. Mathematical Programming

128(1):149–169.

Armington PS (1969) A theory of demand for products distinguished by place of production. Staff Papers

(International Monetary Fund) 16(1):159–178.

Asadpour A, Nazerzadeh H (2016) Maximizing stochastic monotone submodular functions. Management

Science 62(8):2374–2391.

Asadpour A, Nazerzadeh H, Saberi A (2008) Stochastic submodular maximization. International Workshop

on Internet and Network Economics (WINE), 477–489.

Badanidiyuru A, Mirzasoleiman B, Karbasi A, Krause A (2014) Streaming submodular maximization: mas-

sive data summarization on the fly. The 20th ACM SIGKDD International Conference on Knowledge

Discovery and Data Mining, KDD ’14, New York, NY, USA - August 24 - 27, 2014, 671–680.

Balcan M, Harvey NJA (2011) Learning submodular functions. Proceedings of the 43rd ACM Symposium on

Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011, 793–802.

Balkanski E, Rubinstein A, Singer Y (2019) An exponential speedup in parallel running time for submodular

maximization without loss in approximation. Proceedings of the Thirtieth Annual ACM-SIAM Sympo-

sium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, 283–302.

Balkanski E, Singer Y (2018) The adaptive complexity of maximizing a submodular function. Proceedings of

the 50th Annual ACM SIGACT Symposium on Theory of Computing, 1138–1151, STOC 2018 (ACM).


Calinescu G, Chekuri C, Pal M, Vondrak J (2011) Maximizing a monotone submodular function subject to

a matroid constraint. SIAM J. Comput. 40(6):1740–1766.

Cohavi K, Dobzinski S (2017) Faster and simpler sketches of valuation functions. ACM Trans. Algorithms

13(3):30:1–30:9.

Cohen MC, Keller PW, Mirrokni V, Zadimoghaddam M (2019) Overcommitment in cloud services: Bin packing with chance constraints. Management Science.

Dixit AK, Stiglitz JE (1977) Monopolistic Competition and Optimum Product Diversity. American Economic

Review 67(3):297–308.

Fahrbach M, Mirrokni VS, Zadimoghaddam M (2019) Submodular maximization with nearly optimal approx-

imation, adaptivity and query complexity. Proceedings of the Thirtieth Annual ACM-SIAM Symposium

on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, 255–273.

Feige U (1998) A threshold of ln n for approximating set cover. J. ACM 45(4):634–652.

Feldman M, Zenklusen R (2018) The submodular secretary problem goes linear. SIAM J. Comput. 47(2):330–

366.

Feldman V, Vondrak J (2014) Optimal bounds on approximation of submodular and XOS functions by

juntas. Information Theory and Applications Workshop, ITA 2014, San Diego, CA, USA, February

9-14, 2014, 1–10.

Fu R, Subramanian A, Venkateswaran A (2016) Project characteristics, incentives, and team production.

Management Science 62(3):785–801.

Goel A, Guha S, Munagala K (2006) Asking the right questions: model-driven optimization using probes. Pro-

ceedings of the Twenty-Fifth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database

Systems, June 26-28, 2006, Chicago, Illinois, USA, 203–212.

Goemans MX, Harvey NJA, Iwata S, Mirrokni VS (2009) Approximating submodular functions everywhere.

Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2009,

New York, NY, USA, January 4-6, 2009, 535–544.

Gotovos A, Hassani SH, Krause A (2015) Sampling from probabilistic submodular models. Proceedings of the 29th International Conference on Neural Information Processing Systems (NIPS), December 7-12, 2015, Montreal, Quebec, Canada, 1945–1953.

Graepel T, Minka T, Herbrich R (2007) Trueskill(tm): A bayesian skill rating system. Proceedings of the

19th International Conference on Neural Information Processing Systems (NIPS) 19:569–576.

Hassidim A, Singer Y (2017) Submodular optimization under noise. Kale S, Shamir O, eds., Proceedings

of the 2017 Conference on Learning Theory, volume 65 of Proceedings of Machine Learning Research,

1069–1122.

Hoeffding W (1963) Probability inequalities for sums of bounded random variables. Journal of the American

Statistical Association 58(301):13–30.


Iyer RK, Bilmes JA (2013) Submodular optimization with submodular cover and submodular knapsack

constraints. Proceedings of the 27th International Conference on Neural Information Processing Systems

(NIPS), 2436–2444.

Kempe D, Kleinberg JM, Tardos E (2015) Maximizing the spread of influence through a social network.

Theory of Computing 11:105–147.

Kleinberg J, Raghu M (2015) Team performance with test scores. Proceedings of the 16th ACM Conference

on Economics and Computation (EC), 511–528.

Kleinberg JM, Oren S (2011) Mechanisms for (mis)allocating scientific credit. Proceedings of the 43rd ACM

Symposium on Theory of Computing, STOC 2011, San Jose, CA, USA, 6-8 June 2011, 529–538.

Kleywegt A, Shapiro A, Homem-de Mello T (2002) The sample average approximation method for stochastic

discrete optimization. SIAM Journal on Optimization 12(2):479–502.

Korula N, Mirrokni V, Zadimoghaddam M (2018) Online submodular welfare maximization: Greedy beats

1/2 in random order. SIAM Journal on Computing 47(3):1056–1086.

Krause A, Golovin D (2014) Submodular function maximization. Tractability: Practical Approaches to Hard

Problems, 71–104 (Wiley).

Lehmann B, Lehmann D, Nisan N (2006) Combinatorial auctions with decreasing marginal utilities. Games

and Economic Behavior 55(2):270–296.

Li H (2011) Learning to Rank for Information Retrieval and Natural Language Processing (Morgan & Clay-

pool).

Nemhauser G, Wolsey L, Fisher M (1978) An analysis of approximations for maximizing submodular set

functions—I. Mathematical Programming 14(1):265–294.

Shapiro A, Nemirovski A (2005) On Complexity of Stochastic Programming Problems, 111–146 (Boston, MA:

Springer US).

Singla A, Tschiatschek S, Krause A (2016) Noisy submodular maximization via adaptive sampling with appli-

cations to crowdsourced image collection summarization. Proceedings of the Thirtieth AAAI Conference

on Artificial Intelligence, 2037–2043, AAAI’16 (AAAI Press).

Solow RM (1956) A contribution to the theory of economic growth. The Quarterly Journal of Economics

70:65–94.

Sviridenko M, Vondrak J, Ward J (2017) Optimal approximation for submodular and supermodular opti-

mization with bounded curvature. Math. Operations Research 42(4).

Swamy C, Shmoys DB (2012) Sampling-based approximation algorithms for multistage stochastic optimiza-

tion. SIAM Journal on Computing 41(4):975–1004.

Vondrak J (2008) Optimal approximation for the submodular welfare problem in the value oracle model.

Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia,

Canada, May 17-20, 2008, 67–74.


Appendix. Proofs and Additional Results

A. Validation of the Extended Diminishing Returns Property

It is easy to verify that all value functions defined in Section 2.3 are such that their expected values are

non-negative, monotone submodular set functions. We next show that all these value functions also satisfy

the extended diminishing returns condition, formally defined in Definition 1.

We need to check that a value function g is such that whenever for a given v ∈ R_+ there exists y′ ∈ R_+^d such that g(y′) = v, then there exists y = (y_1, ..., y_d)^⊤ ∈ R_+^d such that g(y) = v and, for all x = (x_1, ..., x_d)^⊤ ∈ R_+^d such that g(x) ≤ g(y), it holds that

g(x_1, ..., x_d, z) − g(x_1, ..., x_d) ≥ g(y_1, ..., y_d, z) − g(y_1, ..., y_d), for all z ∈ R_+.    (13)

We first prove that all of the functions defined in Section 2.3, except for top-r with r > 1, satisfy a stronger version of the above condition, which holds for all points y ∈ R_+^d such that g(y) = v. According to the stronger condition, for every x, y ∈ R_+^d such that g(x) ≤ g(y), it holds that:

g(x_1, ..., x_d, z) − g(x_1, ..., x_d) ≥ g(y_1, ..., y_d, z) − g(y_1, ..., y_d), for all z ∈ R_+.    (14)

We begin by proving that all of the functions defined in Section 2.3 except top-r satisfy the stronger

condition as per (14).

Total production: g(x) = g(Σ_{i=1}^n x_i). In this case, g(x) ≤ g(y) is equivalent to Σ_{i=1}^d x_i ≤ Σ_{i=1}^d y_i, and (14) is equivalent to

g(Σ_{i=1}^d x_i + z) − g(Σ_{i=1}^d x_i) ≥ g(Σ_{i=1}^d y_i + z) − g(Σ_{i=1}^d y_i), for all z ∈ R_+.

Let x = Σ_{i=1}^d x_i and y = Σ_{i=1}^d y_i. With this new notation, the extended diminishing returns condition is equivalent to saying that for all x, y ∈ R_+ such that x ≤ y,

g(x + z) − g(x) ≥ g(y + z) − g(y), for all z ∈ R_+,

which obviously holds true because g is assumed to be a monotone increasing and concave function.

Best-shot: g(x) = max{x_1, x_2, ..., x_n}. In this case, g(x) ≤ g(y) is equivalent to

max{x_1, ..., x_d} ≤ max{y_1, ..., y_d}

and (14) is equivalent to

max{x_1, ..., x_d, z} − max{x_1, ..., x_d} ≥ max{y_1, ..., y_d, z} − max{y_1, ..., y_d}, for all z ∈ R_+.

We consider three different cases.

• Case 1: z ≥ max{y_1, ..., y_d}. In this case, max{x, z} − max{x} = z − max{x} ≥ z − max{y} = max{y, z} − max{y}. Hence, extended diminishing returns holds.

• Case 2: max{x_1, ..., x_d} ≤ z < max{y_1, ..., y_d}. In this case, condition (14) is equivalent to z ≥ max{x_1, ..., x_d}, which holds by assumption.

• Case 3: z < max{x_1, ..., x_d}. In this case, condition (14) is equivalent to 0 ≥ 0 and thus trivially holds.


CES: g(x) = (Σ_{i=1}^n x_i^r)^{1/r}, for parameter r ≥ 1. Let x = Σ_{i=1}^d x_i^r, y = Σ_{i=1}^d y_i^r, and w = z^r. Condition (14) is equivalent to

(x + w)^{1/r} − x^{1/r} ≥ (y + w)^{1/r} − y^{1/r},

while g(x) ≤ g(y) is equivalent to x ≤ y. Since r ≥ 1, the function f(x) = x^{1/r} is an increasing concave function. Hence, it follows that condition (14) holds as long as g(x) ≤ g(y).

Success-probability: g(x) = 1 − Π_{i=1}^n (1 − p(x_i)). By simple algebra, condition (14) is equivalent to

Π_{i=1}^d (1 − p(x_i)) p(z) ≥ Π_{i=1}^d (1 − p(y_i)) p(z),

while g(x) ≤ g(y) is equivalent to

Π_{i=1}^d (1 − p(x_i)) ≥ Π_{i=1}^d (1 − p(y_i)).

Hence, condition (14) holds as long as g(x) ≤ g(y).

Finally, we prove that the top-r function satisfies (13) for r > 1. Recall that when r = 1, top-r coincides with the best-shot function, for which we already showed that the extended diminishing returns condition holds.

Top-r: g(x) = Σ_{i=1}^r x_(i), where x_(i) is the i-th largest element in x. Fix v ∈ R_+. Without loss of generality, suppose that d ≥ r and define y = (y_1, ..., y_d)^⊤ ∈ R_+^d such that y_j = v/r for 1 ≤ j ≤ r and y_j = 0 for all r < j ≤ d.⁵ Clearly, g(y) = v.

Let x ∈ R_+^d be any point such that g(x) ≤ g(y). We prove (13) for the following two different cases:

• Case 1: z ≥ v/r: In this case, g(y, z) − g(y) = z − v/r. Since g(x) ≤ g(y), it must be the case that the r-th largest element in x, i.e., x_(r), is smaller than or equal to g(y)/r = v/r. Thus, we have that g(x, z) − g(x) = z − x_(r) ≥ z − g(y)/r = g(y, z) − g(y), and so the claim follows.

• Case 2: z ≤ v/r: The claim trivially follows in this case because g(y, z) = g(y) and so g(y, z) − g(y) = 0, whereas g(x, z) − g(x) ≥ 0.

⁵ The proof when d < r is trivial because g(x, z) − g(x) = g(y, z) − g(y) = z.

B. Proof of Lemma 2

We first note the following inequalities:

u(OPT) ≤ u(S*) + u(OPT \ S*) ≤ u(S*) + q v̄(OPT \ S*).

The first inequality comes from the fact that all submodular functions are subadditive, i.e., for any submodular set function u, it holds that u(A ∪ B) ≤ u(A) + u(B). The second inequality comes from the sketch upper bound.

Now, consider any set T of cardinality k such that OPT \ S* ⊆ T and T is disjoint from S*, i.e., S* ∩ T = ∅. Since OPT \ S* ⊆ T and the majorant set function v̄ defined in (9) is monotone, we have v̄(OPT \ S*) ≤ v̄(T). By the condition of the lemma, we have that v̄(T) ≤ v̲(S*) and p v̲(S*) ≤ u(S*). Therefore, we have

u(OPT) ≤ u(S*) + q v̲(S*) ≤ u(S*) + (q/p) u(S*),

which completes the proof.


C. Proof of Lemma 3

Suppose that a set function u has (p, q)-good test scores a_1, a_2, ..., a_n, i.e., for every S ⊆ N such that |S| = k,

p min{a_i | i ∈ S} ≤ u(S) ≤ q max{a_i | i ∈ S}.    (15)

Let r_1, r_2, ..., r_n be replication test scores, i.e.,⁶

r_i = E[g(X_i^(1), ..., X_i^(k), φ, ..., φ)] = u({i^(1), ..., i^(k)})    (16)

where X_i^(1), X_i^(2), ..., X_i^(k) are independent random variables with distribution P_i and i^(1), i^(2), ..., i^(k) are independent replicas of item i.

By assumption, a_1, a_2, ..., a_n are (p, q)-good test scores, hence

p a_i ≤ u({i^(1), ..., i^(k)}) ≤ q a_i.    (17)

From (15), (16), and (17), we have that for every S ⊆ N such that |S| = k,

(p/q) min{r_i | i ∈ S} ≤ p min{a_i | i ∈ S} ≤ u(S) ≤ q max{a_i | i ∈ S} ≤ (q/p) max{r_i | i ∈ S},

which implies that replication test scores are (p/q, q/p)-good test scores.

⁶ Hereinafter, we slightly abuse notation by writing u(S) for a set S = {i^(1), ..., i^(k)} of replicas of item i, while u is defined as a set function over 2^N. A proper definition would extend u over a ground set that includes independent replicas of each item i ∈ N, but this would be at the expense of more complex notation.

D. Proof of Lemma 4

We first prove the lower bound and then the upper bound, as follows.

Proof of the lower bound. Without loss of generality, let us consider the set S = {1, 2, ..., k} and assume that a_1 = min{a_i | i ∈ S}. We claim that

u({1, ..., j}) ≥ (1 − 1/k) u({1, ..., j − 1}) + (1/k) a_1, for all j ∈ {1, 2, ..., k}.    (18)

From this, we can use a cascading argument to show that u(S) ≥ (1 − (1 − 1/k)^k) a_1 ≥ (1 − 1/e) a_1.

We begin by proving the claim in (18). For j = 1, since u is a non-negative, monotone submodular set function, we have

u({1}) = (1/k) Σ_{t=1}^k u({1^(t)}) ≥ (1/k) u({1^(1), ..., 1^(k)}) = (1/k) a_1.    (19)

For j > 1, we have

u({1, ..., j}) = u({1, ..., j − 1}) + [u({1, ..., j}) − u({1, ..., j − 1})]
 ≥_(a) u({1, ..., j − 1}) + (1/k)[u({1, ..., j − 1, j^(1), ..., j^(k)}) − u({1, ..., j − 1})]
 ≥_(b) u({1, ..., j − 1}) + (1/k)[u({j^(1), ..., j^(k)}) − u({1, ..., j − 1})]
 = (1 − 1/k) u({1, ..., j − 1}) + (1/k) a_j
 ≥ (1 − 1/k) u({1, ..., j − 1}) + (1/k) a_1    (20)


where (a) follows by submodularity of u and (b) follows by non-negativity and monotonicity of u.

We now proceed with the cascading argument:

u({1, ..., k}) ≥ (1 − 1/k) u({1, ..., k − 1}) + (1/k) a_1
 ≥ (1 − 1/k)^2 u({1, ..., k − 2}) + (1 − 1/k)(a_1/k) + a_1/k
 ≥ ...
 ≥ (a_1/k) Σ_{j=0}^{k−1} (1 − 1/k)^j
 ≥ a_1 (1 − (1 − 1/k)^k)
 ≥ (1 − 1/e) a_1.

For the last step, we use the fact that (1 − 1/k)^k ≤ 1/e for all k ≥ 1.

Proof of the upper bound. Without loss of generality, assume that S = {1, 2, ..., k} and a_1 ≤ a_2 ≤ ··· ≤ a_k. Recall that the value function g is defined on R^n. We will slightly abuse notation by writing g(y) to denote g(y, φ, ..., φ), for any vector y of dimension 1 ≤ d < n, where φ is some minimal-value element defined in Section 2. Moreover, for convenience, we will assume that the value function g is continuous in any given dimension.

Define g_i^max to be the maximum value of the value function g on a vector of dimension i, i.e.,

g_i^max = max_{z_1, z_2, ..., z_i ∈ R_+} g(z_1, z_2, ..., z_i).

Suppose that v = min{c a_k, g_{k−1}^max}, for some constant c > 1 whose value we will determine later. We first claim that there exists at least one vector z such that g(z) = v. Our proof will leverage this vector z as follows. We consider a fictitious set of items S* whose individual performances correspond to z and show that the marginal benefit of adding an item i ∈ N to this fictitious set is at most twice the marginal value of adding item i to a set comprising k − 1 replicas of item i. This allows us to establish an upper bound in terms of the test scores. Although g(z) = v = c a_k is sufficient for our proof to hold, it is possible that the function g is capped at a value smaller than c a_k and there does not exist any z satisfying g(z) = c a_k. To handle this corner case, we define v to be the minimum of c a_k and g_{k−1}^max.

We now prove the above claim that v has a non-empty preimage under g. When v= gmaxk−1 , the claim follows

trivially since by the definition of gmaxi , there exists a (k−1)-dimension vector whose function value is gmax

k−1 .

On the other hand, when cak < gmaxk−1 , this comes from continuity arguments since we know that there exist

points in Rk−1+ where g evaluates to values greater than and smaller than v respectively. In summary, there

exists at least one point where the function evaluates to v. Since g satisfies the extended diminishing returns

property, we can abuse notation and infer from the definition that there exists a vector7 z ∈Rn−1+ such that

g(z) = v and for any y ∈Rk−1+ having g(y)≤ g(z), it must be the case that

g(z, x)− g(z)≤ g(y, x)− g(y), for all x∈R+. (21)

7 Note that some elements of this vector can be φ or zero


It is worth pointing out that while Definition 1 guarantees that (21) holds when the vector $y$ is of dimension $n-1$, one can simply start with a $(k-1)$-dimensional vector $y$ and pad it with a sufficient number of $\phi$ elements to arrive at an $(n-1)$-dimensional vector whose value is still $g(y)$. Therefore, let $z = (z_1, z_2, \ldots, z_{n-1})^\top$ be an arbitrary vector such that $g(z) = v$ and such that (21) is satisfied for any $y \in \mathbb{R}_+^{k-1}$ and $x \ge 0$ as long as $g(y) \le g(z)$.

Let $S^* = \{q_1, q_2, \ldots, q_{n-1}\}$ be a set of (fictitious) items such that $X_{q_j} = z_j$ with probability 1 (the performance of each of these fictitious items is deterministic). Therefore, the performance of the set of items $S^*$ is given by
\[
u(S^*) = g(z) = \min\{c a_k, g_{k-1}^{\max}\}.
\]
Since $u$ is a non-negative, increasing, and submodular function, we have
\begin{align}
u(S) &\le u(S^* \cup S) \tag{22} \\
&\le u(S^*) + \sum_{i=1}^{k} \left(u(S^* \cup \{i\}) - u(S^*)\right) \tag{23} \\
&\le c a_k + \sum_{i=1}^{k} \left(u(S^* \cup \{i\}) - u(S^*)\right). \tag{24}
\end{align}

Let $X_i^{(1)}, X_i^{(2)}, \ldots, X_i^{(k)}$ be independent random variables with distribution $P_i$. Let $X_i = X_i^{(k)}$ and $Y_i = (X_i^{(1)}, X_i^{(2)}, \ldots, X_i^{(k-1)})^\top$. Note that
\begin{align}
u(S^* \cup \{i\}) - u(S^*) &= \mathbb{E}\left[g(z, X_i) - g(z)\right] \tag{25} \\
&\overset{(a)}{=} \mathbb{E}\left[g(z, X_i) - g(z) \mid g(Y_i) \le g(z)\right] \tag{26} \\
&\overset{(b)}{\le} \mathbb{E}\left[g(Y_i, X_i) - g(Y_i) \mid g(Y_i) \le g(z)\right] \tag{27} \\
&\le \frac{u\left(\{i^{(1)}, \ldots, i^{(k)}\}\right) - u\left(\{i^{(1)}, \ldots, i^{(k-1)}\}\right)}{\Pr\left[g(Y_i) \le g(z)\right]} \tag{28} \\
&\overset{(c)}{\le} \frac{1}{\Pr\left[g(Y_i) \le g(z)\right]} \frac{a_i}{k} \tag{29} \\
&\overset{(d)}{\le} \left(1 - \frac{1}{c}\right)^{-1} \frac{a_k}{k}, \tag{30}
\end{align}
where (a) comes from the fact that, by definition, $X_i$ and $Y_i$ are independent; inequality (b) follows from the extended diminishing returns property outlined in (21) with $y = Y_i$: for any realization of $Y_i$ where $g(Y_i) \le g(z)$, extended diminishing returns tells us that $g(z, X_i) - g(z) \le g(Y_i, X_i) - g(Y_i)$ for all $X_i$, so taking the expectation over $Y_i, X_i$ conditional on $g(Y_i) \le g(z)$ gives (b). The inequality leading to (28) uses the fact that the integrand is non-negative, so the conditional expectation is at most the unconditional expectation divided by $\Pr[g(Y_i) \le g(z)]$, together with $\mathbb{E}[g(Y_i, X_i)] = u(\{i^{(1)}, \ldots, i^{(k)}\})$ and $\mathbb{E}[g(Y_i)] = u(\{i^{(1)}, \ldots, i^{(k-1)}\})$. Inequality (c) can be shown using only the definition of submodularity, as can be seen via the following sequence of inequalities:
\begin{align*}
u\left(\{i^{(1)}, \ldots, i^{(k)}\}\right) - u\left(\{i^{(1)}, \ldots, i^{(k-1)}\}\right) &\le \frac{1}{k} \sum_{j=0}^{k-1} \left(u(\{i^{(1)}, \ldots, i^{(j)}, i^{(k)}\}) - u(\{i^{(1)}, \ldots, i^{(j)}\})\right) \\
&= \frac{1}{k} \sum_{j=0}^{k-1} \left(u(\{i^{(1)}, \ldots, i^{(j)}, i^{(j+1)}\}) - u(\{i^{(1)}, \ldots, i^{(j)}\})\right) \\
&= \frac{1}{k} u(\{i^{(1)}, \ldots, i^{(k)}\}) = \frac{a_i}{k}.
\end{align*}


It remains to prove (d), which follows from the fact that $a_i \le a_k$ for all $i \in \{1, 2, \ldots, k\}$ together with showing that $\Pr[g(Y_i) \le g(z)] \ge 1 - 1/c$. Recall that $g(z) = \min\{c a_k, g_{k-1}^{\max}\}$. Let us proceed by separately considering two cases depending on the value of $g(z)$. If $g(z) = g_{k-1}^{\max}$, then $\Pr[g(Y_i) \le g(z)] = 1$ trivially, because by definition $g_{k-1}^{\max}$ is the maximum value that the function can take on any vector of length $k-1$. On the other hand, when $g(z) = c a_k$, we can apply Markov's inequality to obtain
\[
\Pr[g(Y_i) \ge c a_k] \le \frac{\mathbb{E}[g(Y_i)]}{c a_k} \le \frac{\mathbb{E}[g(Y_i, X_i)]}{c a_k} \le \frac{1}{c}.
\]
Hence, it follows that $\Pr[g(Y_i) \le c a_k] \ge 1 - \Pr[g(Y_i) \ge c a_k] \ge 1 - 1/c$. Combining this with (24) and (30), we obtain $u(S) \le c a_k + (1 - 1/c)^{-1} a_k = (c^2/(c-1)) a_k$. Since we can choose $c$ arbitrarily, taking $c = 2$ yields $u(S) \le 4 a_k$, which proves the upper bound.
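As a sanity check on Lemma 4, the following sketch numerically verifies the resulting sandwich $(1 - 1/e) \min_i a_i \le u(S) \le 4 \max_i a_i$ for the best-shot function $u(S) = \mathbb{E}[\max_{i \in S} X_i]$, which also serves as the running example in Appendix H; the item distributions and variable names below are our own choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
k, trials = 4, 200_000

# Two-point items: item i takes value vals[i] w.p. probs[i], else 0.
vals  = np.array([1.0, 2.0, 5.0, 10.0])
probs = np.array([0.9, 0.5, 0.2, 0.05])

# Replication test scores a_i = E[max of k i.i.d. copies of item i];
# for two-point distributions this is vals[i] * (1 - (1 - probs[i])**k).
a = vals * (1 - (1 - probs) ** k)

# u(S) = E[max over the k distinct items], estimated by Monte Carlo.
draws = (rng.random((trials, k)) < probs) * vals
u_S = draws.max(axis=1).mean()

print((1 - 1 / np.e) * a.min(), u_S, 4 * a.max())
assert (1 - 1 / np.e) * a.min() <= u_S <= 4 * a.max()
\end{verbatim}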

E. Proof of Lemma 5

Suppose that $S^*$ is the optimum solution to the submodular welfare maximization problem with sketch utility functions $v_1, v_2, \ldots, v_m$, and that $v(S) \ge \alpha v(S^*)$. Then,
\begin{align*}
u(\mathrm{OPT}) &= \sum_{j=1}^{m} u_j(\mathrm{OPT}_j) \\
&\le q \sum_{j=1}^{m} v_j(\mathrm{OPT}_j) \\
&\le q \sum_{j=1}^{m} v_j(S^*_j) \quad \text{(since $S^*$ is optimal for the sketch utility functions)} \\
&\le \frac{q}{\alpha} \sum_{j=1}^{m} v_j(S_j) \\
&\le \frac{q}{p\alpha} \sum_{j=1}^{m} u_j(S_j).
\end{align*}

F. Proof of Lemma 6

It suffices to consider an arbitrary partition $j$. To simplify the presentation, with a slight abuse of notation, we omit the index $j$ in what follows.

Let $a_1^r, a_2^r, \ldots, a_n^r$ denote the replication test scores for parameter $r$. For any set $S \subseteq N$ such that $|S| = k$, let $\pi(S) = (\pi_1(S), \pi_2(S), \ldots, \pi_k(S))$ be the permutation of the elements of $S$ defined in (11).

Let $v$ be the set function which, for any $S \subseteq N$ such that $|S| = k$, is defined by
\[
v(S) = \sum_{r=1}^{k} \frac{1}{r} a^r_{\pi_r(S)}. \tag{31}
\]
We need to establish the following relations, for every $S \subseteq N$:
\[
u(S) \ge \frac{1}{2(\log(k) + 1)} v(S) \tag{32}
\]
and
\[
u(S) \le 6 v(S). \tag{33}
\]
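For concreteness, the following Python sketch evaluates the sketch function (31). It reads the permutation $\pi$ of (11) through the property used in the proof of Proposition 3 below, namely that $\pi_r(S)$ is the not-yet-ranked item of $S$ with the largest rank-$r$ replication score; the data layout and names are our own assumptions.
\begin{verbatim}
def sketch_value(rep_scores, S):
    """Evaluate v(S) = sum_{r=1}^{k} a^r_{pi_r(S)} / r from (31).
    rep_scores[i][r-1] holds the replication test score a_i^r of item i
    for parameter r; pi_r(S) is read as the not-yet-ranked item of S
    with the largest rank-r score."""
    remaining = set(S)
    value = 0.0
    for r in range(1, len(S) + 1):
        best = max(remaining, key=lambda i: rep_scores[i][r - 1])
        value += rep_scores[best][r - 1] / r
        remaining.remove(best)
    return value

# Toy example with three items and k = 3.
rep_scores = {0: [1.0, 1.5, 1.8], 1: [0.8, 1.4, 1.9], 2: [0.5, 0.9, 1.2]}
print(sketch_value(rep_scores, [0, 1, 2]))  # 1.0/1 + 1.4/2 + 1.2/3 = 2.1
\end{verbatim}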


Proof of lower bound (32). Suppose that $S$ is of cardinality $k$ and define
\[
\tau := \arg\max_{t} a^t_{\pi_t(S)}.
\]
We begin by noting the following basic property of replication test scores.

Lemma 8. For the replication test scores $a_1^r, a_2^r, \ldots, a_n^r$, $1 \le r \le k$, and every item $i$, the following relations hold:
\[
\frac{a_i^s}{s} \ge \frac{a_i^t}{t}, \quad \text{for } 1 \le s \le t \le k.
\]
The assertion in Lemma 8 follows easily from the diminishing increments property of the replication test scores $a_i^r$ with respect to the parameter $r$.

In our proof, we will also need the following lemma:

Lemma 9. For every set $S \subseteq N$ such that $|S| = k$ and ordering of the items of this set $\pi(S) = (\pi_1(S), \pi_2(S), \ldots, \pi_k(S))$, the following relation holds:
\[
\frac{1}{\tau} \sum_{r=1}^{\tau} a^r_{\pi_r(S)} \ge \frac{1}{2} a^\tau_{\pi_\tau(S)}.
\]

The proof of the lemma is as follows. For every $r \in \{1, 2, \ldots, \tau\}$, we have
\[
\frac{a^r_{\pi_r(S)}}{r} \ge \frac{a^r_{\pi_\tau(S)}}{r} \ge \frac{a^\tau_{\pi_\tau(S)}}{\tau}
\]
where the first inequality is by definition of $\pi(S)$ and the second inequality is by Lemma 8. Hence, we have
\[
\sum_{r=1}^{\tau} a^r_{\pi_r(S)} \ge \frac{a^\tau_{\pi_\tau(S)}}{\tau} \sum_{r=1}^{\tau} r \ge \frac{a^\tau_{\pi_\tau(S)}}{\tau} \frac{\tau(\tau+1)}{2} \ge a^\tau_{\pi_\tau(S)} \frac{\tau}{2}
\]
which corresponds to the claim of the lemma.

Lemma 10. For every $S \subseteq N$, the following relation holds:
\[
u(S) \ge \frac{1}{\tau} \sum_{r=1}^{\tau} a^r_{\pi_r(S)}.
\]

The proof of Lemma 10 is by induction, as we show next. The inductive statement is $u(\{\pi_1(S), \ldots, \pi_r(S)\}) \ge \frac{1}{r} \sum_{s=1}^{r} a^s_{\pi_s(S)}$ for every $r \in \{1, 2, \ldots, \tau\}$. Base case $r = 1$: the base case indeed holds because, by definition of replication test scores, $u(\{\pi_1(S)\}) = a^1_{\pi_1(S)}$. Inductive step: assume that the statement is true up to $r - 1$; we need to show that it holds for $r$. We have the following relations:
\begin{align*}
&u(\{\pi_1(S), \ldots, \pi_r(S)\}) - u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\}) \\
&\quad= \frac{1}{r} \left(u(\{\pi_1(S), \ldots, \pi_{r-1}(S), \pi_r(S)^{(1)}\}) + \cdots + u(\{\pi_1(S), \ldots, \pi_{r-1}(S), \pi_r(S)^{(r)}\}) - r\, u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\})\right) \\
&\quad\ge \frac{1}{r} \left(u(\{\pi_1(S), \ldots, \pi_{r-1}(S), \pi_r(S)^{(1)}, \ldots, \pi_r(S)^{(r)}\}) - u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\})\right) \\
&\quad\ge \frac{1}{r} \left(u(\{\pi_r(S)^{(1)}, \ldots, \pi_r(S)^{(r)}\}) - u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\})\right) \\
&\quad= \frac{a^r_{\pi_r(S)}}{r} - \frac{u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\})}{r}
\end{align*}
where the first equality holds since the replicas $\pi_r(S)^{(t)}$ are identically distributed copies of $\pi_r(S)$, and the first and second inequalities are by submodularity and monotonicity of the set function $u$, respectively.

From the inductive hypothesis, we know that $u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\}) \ge \frac{1}{r-1} \sum_{s=1}^{r-1} a^s_{\pi_s(S)}$, so adding $u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\})$ to both sides of the above relation yields
\[
u(\{\pi_1(S), \ldots, \pi_r(S)\}) \ge \frac{a^r_{\pi_r(S)}}{r} + \frac{r-1}{r} u(\{\pi_1(S), \ldots, \pi_{r-1}(S)\}) \ge \frac{1}{r} \sum_{s=1}^{r} a^s_{\pi_s(S)}
\]
which proves the claim of Lemma 10.

Now, combining Lemma 9 and Lemma 10, we obtain $u(S) \ge a^\tau_{\pi_\tau(S)}/2$.

Finally, we conclude the lower bound as follows:
\[
u(S) \ge \frac{1}{2} a^\tau_{\pi_\tau(S)} = \frac{a^\tau_{\pi_\tau(S)}}{2} \cdot \frac{1 + \frac{1}{2} + \cdots + \frac{1}{k}}{1 + \frac{1}{2} + \cdots + \frac{1}{k}} \ge \frac{a^1_{\pi_1(S)} + \frac{a^2_{\pi_2(S)}}{2} + \cdots + \frac{a^k_{\pi_k(S)}}{k}}{2(\log(k) + 1)} = \frac{v(S)}{2(\log(k) + 1)}
\]
where in the last inequality we use the facts that $a^\tau_{\pi_\tau(S)} \ge a^r_{\pi_r(S)}$ for all $r$, and $1 + 1/2 + \cdots + 1/k \le \log(k) + 1$ for all $k \ge 1$.

Proof of the upper bound (33). The proof of the upper bound is almost identical to the upper bound proof of Lemma 4. Once again, we abuse notation by writing $g(y)$ instead of $g(y, \phi, \ldots, \phi)$ for any vector $y$ of dimension $r < n$, where $\phi$ is the minimal-value element defined in Section 2.

Analogously to (but slightly differently from) the proof of Lemma 4, consider a deterministic vector $z = (z_1, z_2, \ldots, z_{n-1})$ such that $g(z) = \min\{c a^\tau_{\pi_\tau(S)}, g_{k-1}^{\max}\}$, for a constant $c > 1$ whose value will be determined later. In choosing this vector, we apply the definition of extended diminishing returns so that (21) is satisfied for any $y$ with $g(y) \le g(z)$ and any $x \ge 0$.

Let $S^* = \{v_1, v_2, \ldots, v_{n-1}\}$ be a set of (fictitious) items such that $X_{v_j} = z_j$ with probability 1 (the performance of each of these fictitious items is deterministic). Therefore, the performance of the set of items $S^*$ is given by $u(S^*) = g(z) = \min\{c a^\tau_{\pi_\tau(S)}, g_{k-1}^{\max}\}$.

By definition of $\tau$, we know that $a^r_{\pi_r(S)} \le a^\tau_{\pi_\tau(S)}$ for all $r$. Moreover, we can upper bound $u(S)$ as follows:
\[
u(S) \le u(S \cup S^*) \le u(S^*) + \sum_{r=1}^{k} \left[u(S^* \cup \{\pi_r(S)\}) - u(S^*)\right]. \tag{34}
\]
Let $X^{(1)}_{\pi_r(S)}, X^{(2)}_{\pi_r(S)}, \ldots, X^{(r)}_{\pi_r(S)}$ be independent random variables with distribution $P_{\pi_r(S)}$. Let $X = X^{(r)}_{\pi_r(S)}$ and $Y = (X^{(1)}_{\pi_r(S)}, X^{(2)}_{\pi_r(S)}, \ldots, X^{(r-1)}_{\pi_r(S)})$. Note that
\begin{align*}
u(S^* \cup \{\pi_r(S)\}) - u(S^*) &= \mathbb{E}[g(z, X) - g(z)] \\
&= \mathbb{E}\left[g(z, X) - g(z) \mid g(Y) \le g(z)\right] \\
&\overset{(a)}{\le} \mathbb{E}\left[g(Y, X) - g(Y) \mid g(Y) \le g(z)\right] \\
&\le \frac{\mathbb{E}[g(Y, X) - g(Y)]}{\Pr[g(Y) \le g(z)]} \\
&\overset{(b)}{\le} \frac{1}{\Pr[g(Y) \le g(z)]} \frac{a^r_{\pi_r(S)}}{r}.
\end{align*}
Inequality (a) follows from the extended diminishing returns property defined in Definition 1. Note that, by our definition of $z$, for any realization of $Y$ where $g(Y) \le g(z)$, extended diminishing returns tells us that $g(z, X) - g(z) \le g(Y, X) - g(Y)$ for all $X$. Taking the expectation over $Y, X$ conditional on $g(Y) \le g(z)$ gives us (a).


Inequality (b) can be shown using only the definition of submodularity, as can be seen via the following sequence of inequalities, where $i = \pi_r(S)$:
\begin{align*}
\mathbb{E}[g(Y, X) - g(Y)] &\le \frac{1}{r} \sum_{s=0}^{r-1} \left(u(\{i^{(1)}, \ldots, i^{(s)}, i^{(r)}\}) - u(\{i^{(1)}, \ldots, i^{(s)}\})\right) \\
&= \frac{1}{r} \sum_{s=0}^{r-1} \left(u(\{i^{(1)}, \ldots, i^{(s)}, i^{(s+1)}\}) - u(\{i^{(1)}, \ldots, i^{(s)}\})\right) \\
&= \frac{1}{r} u(\{i^{(1)}, \ldots, i^{(r)}\}) = \frac{a^r_i}{r} = \frac{a^r_{\pi_r(S)}}{r}.
\end{align*}

All that remains for us is to prove that $\Pr[g(Y) \le g(z)] \ge 1 - 1/c$.

Recall that $g(z) = \min\{c a^\tau_{\pi_\tau(S)}, g_{k-1}^{\max}\}$. Let us proceed by considering two cases depending on the value of $g(z)$. If $g(z) = g_{k-1}^{\max}$, then $\Pr[g(Y) \le g(z)] = 1$ trivially. This is because, by definition, $g_{k-1}^{\max}$ is the maximum value that the function can take on any vector of length $k - 1$ and hence, by monotonicity, on any vector of size $r - 1$ such as $Y$, since $r \le k$. On the other hand, when $g(z) = c a^\tau_{\pi_\tau(S)}$, we can apply Markov's inequality and bound the desired probability, i.e.,
\[
\Pr\left[g(Y) \ge c a^\tau_{\pi_\tau(S)}\right] \le \frac{\mathbb{E}[g(Y)]}{c a^\tau_{\pi_\tau(S)}} \le \frac{1}{c}
\]
where we used $\mathbb{E}[g(Y)] = a^{r-1}_{\pi_r(S)} \le a^r_{\pi_r(S)} \le a^\tau_{\pi_\tau(S)}$. Since $\Pr[g(Y) \le c a^\tau_{\pi_\tau(S)}] \ge 1 - \Pr[g(Y) \ge c a^\tau_{\pi_\tau(S)}]$, it follows that $\Pr[g(Y) \le c a^\tau_{\pi_\tau(S)}] \ge 1 - 1/c$, as desired.

We have shown that $u(S^* \cup \{\pi_r(S)\}) - u(S^*) \le (1 - 1/c)^{-1} a^r_{\pi_r(S)}/r$.

Combining with (34), we obtain
\[
u(S) \le c a^\tau_{\pi_\tau(S)} + \left(1 - \frac{1}{c}\right)^{-1} \left(\frac{a^1_{\pi_1(S)}}{1} + \frac{a^2_{\pi_2(S)}}{2} + \cdots + \frac{a^k_{\pi_k(S)}}{k}\right).
\]
Applying Lemma 9 to $a^\tau_{\pi_\tau(S)}$, we obtain that
\begin{align*}
u(S) &\le 2c\, \frac{1}{\tau} \sum_{r=1}^{\tau} a^r_{\pi_r(S)} + \left(1 - \frac{1}{c}\right)^{-1} \left(a^1_{\pi_1(S)} + \frac{a^2_{\pi_2(S)}}{2} + \cdots + \frac{a^k_{\pi_k(S)}}{k}\right) \\
&\le \left(2c + \left(1 - \frac{1}{c}\right)^{-1}\right) \left(a^1_{\pi_1(S)} + \frac{a^2_{\pi_2(S)}}{2} + \cdots + \frac{a^k_{\pi_k(S)}}{k}\right)
\end{align*}
which completes the proof by taking $c = 2$, since $2c + (1 - 1/c)^{-1} = 6$ for $c = 2$.

G. Proof of Lemma 7

Before proving Lemma 7, we show that the sketch function $v_j$ defined in (12) satisfies a simple monotonicity property, which will be useful in the proof of Lemma 7.

Proposition 3. Suppose $v_j$ is a sketch function for a stochastic monotone submodular function $u_j$ as defined in (12), and let $S = \{i_1, i_2, \ldots, i_{|S|}\} \subseteq N$ be such that $\pi_r(S, j) = i_r$ for all $r \in \{1, 2, \ldots, |S|\}$. Then, the following inequalities hold for all $r \in \{1, 2, \ldots, |S|\}$:
\[
v_j(S) \ge v_j(S \setminus \{i_r\}) \ge v_j(S) - \frac{a^r_{i_r, j}}{r}.
\]


Proof of Proposition 3. Fix some $r \in \{1, 2, \ldots, |S|\}$ and, for all $t \ne r$, define $\nu_t$ such that $\pi_{\nu_t}(S \setminus \{i_r\}, j) = i_t$. That is, $\nu_t$ denotes item $i_t$'s new `rank' in the set $S \setminus \{i_r\}$. Note that $1 \le \nu_t \le |S| - 1$ and that
\[
v_j(S \setminus \{i_r\}) = \sum_{t \ne r} \frac{a^{\nu_t}_{i_t, j}}{\nu_t}. \tag{35}
\]
We show via induction on $t$ that $\nu_t \le t$ for all $t \ne r$, i.e., the removal of an item cannot hurt the `rank' of another item. The claim is trivially true when $t = 1$ since $\nu_t \ge 1$. Consider an arbitrary $t > 1$ and suppose that the inductive hypothesis is true up to $t - 1$. Let us consider two cases. First, if $t < r$, then by definition $\pi_t(S, j) = \pi_t(S \setminus \{i_r\}, j) = i_t$, and so the inductive claim holds since $\nu_t = t$. Second, suppose that $t > r$ and assume by contradiction that $\nu_t > t$. By the inductive hypothesis, it must be the case that $\pi_t(S \setminus \{i_r\}, j) \in \{i_t, i_{t+1}, \ldots, i_{|S|}\}$; indeed, for all $t' < t$, we have that $\nu_{t'} \le t'$. However, we know by the definition of $\pi$ that, for all $i \in \{i_{t+1}, \ldots, i_{|S|}\}$, it must be true that
\[
a^t_{i_t, j} > a^t_{i, j}.
\]
Therefore, if $\nu_t > t$, then $\pi_t(S \setminus \{i_r\}, j) \in \{i_{t+1}, \ldots, i_{|S|}\}$, which would violate the definition of $\pi$. Hence, the inductive claim follows.

Now, in order to prove the proposition, we go back to (35):
\[
v_j(S \setminus \{i_r\}) = \sum_{t \ne r} \frac{a^{\nu_t}_{i_t, j}}{\nu_t} \le \sum_{t \ne r} \frac{a^{\nu_t}_{i_{\nu_t}, j}}{\nu_t} = \sum_{t=1}^{|S|-1} \frac{a^t_{i_t, j}}{t} \le v_j(S).
\]
The crucial step above is the first inequality. There, we used the fact that $\nu_t \le t$ and, therefore, if $\nu_t = q$, then $a^q_{i_q, j} \ge a^q_{i_t, j}$ by definition of $i_q$, for all $1 \le q = \nu_t \le |S| - 1$. The subsequent equality comes from changing the summation index from $\nu_t$ to $t$, since the map $t \mapsto \nu_t$ is a bijection onto $\{1, \ldots, |S| - 1\}$. In summary, we have shown that $v_j(S) \ge v_j(S \setminus \{i_r\})$, which is one half of the proposition. In order to prove the other half, that is, $v_j(S \setminus \{i_r\}) \ge v_j(S) - a^r_{i_r, j}/r$, we utilize the result from Lemma 8, namely that
\[
\frac{a^{\nu_t}_{i_t, j}}{\nu_t} \ge \frac{a^t_{i_t, j}}{t},
\]
which is true because $\nu_t \le t$. To conclude the proposition, we have that
\[
v_j(S) = \sum_{t=1}^{|S|} \frac{a^t_{i_t, j}}{t} = \sum_{t \ne r} \frac{a^t_{i_t, j}}{t} + \frac{a^r_{i_r, j}}{r} \le \sum_{t \ne r} \frac{a^{\nu_t}_{i_t, j}}{\nu_t} + \frac{a^r_{i_r, j}}{r} = v_j(S \setminus \{i_r\}) + \frac{a^r_{i_r, j}}{r}.
\]
We are now ready to prove the main lemma.


Proof of Lemma 7. We need to show that the greedy algorithm described in Algorithm 1 returns an assignment $S = (S_1, S_2, \ldots, S_m)$ that is a $\frac{1}{2}$-approximation to the optimum assignment $O = (O_1, O_2, \ldots, O_m)$ that maximizes $v(S') = \sum_{j=1}^{m} v_j(S'_j)$, where the function $v_j$ is as defined in (12). If the sketch function $v_j$ were submodular, then one could simply apply the well-known result by Lehmann et al. (2006) for the submodular welfare maximization problem to show that the greedy algorithm yields the desired approximation factor. However, despite its simplicity, the sketch function $v_j$ is not necessarily submodular, so we cannot directly use the existing proof for submodular welfare maximization as a black box.

Before proving the result, we introduce some pertinent notation. Recall that our algorithm proceeds in rounds such that at each time step $t$, exactly one item $i \in A$ is added to a partition $j \in P$. Let $S(t) = (S_1(t), S_2(t), \ldots, S_m(t))$ denote the assignment at the end of time step $t$, i.e., $S_j(t)$ is the set of items assigned to partition $j \in M$ at the end of $t$ unique assignments. For notational convenience, let $S(0) = (\emptyset, \emptyset, \ldots, \emptyset)$. Let $O(t) = (O_1(t), O_2(t), \ldots, O_m(t))$ denote the optimal (constrained) assignment such that $S_j(t) \subseteq O_j(t)$ for every $j \in M$, i.e., this assignment deviates from $S$ only in the set of items that are unassigned at the end of time step $t$. Finally, suppose that at round $t + 1$ our algorithm assigns item $i \in N$ to partition $j \in M$; then the added welfare is $\Delta(t+1) := a^{|S_j(t)|+1}_{i,j} / (|S_j(t)| + 1)$.

The basic idea behind our proof is similar to that of Theorem 12 in Lehmann et al. (2006). Namely, we show that $v(O(t)) \le v(O(t+1)) + \Delta(t+1)$ for all $t \in \{0, 1, \ldots, \ell - 1\}$, where $\ell$ is the total number of rounds for which the algorithm runs. By cascading this argument, we can show the desired approximation guarantee, i.e.,
\begin{align}
v(O(0)) &\le v(O(1)) + \Delta(1) \tag{36} \\
&\le \cdots \nonumber \\
&\le v(O(t)) + \sum_{r=1}^{t} \Delta(r) \nonumber \\
&\le \cdots \nonumber \\
&\le v(O(\ell)) + \sum_{r=1}^{\ell} \Delta(r) \nonumber \\
&= v(O(\ell)) + v(S(\ell)) \nonumber \\
&= 2 v(S). \tag{37}
\end{align}
The inequalities above come from repeated application of the claimed inequality $v(O(t)) \le v(O(t+1)) + \Delta(t+1)$ for all $t \in \{0, 1, \ldots, \ell - 1\}$. The penultimate and final equalities follow from (a) $O(\ell) = S(\ell) = S$ by definition, and (b) the fact that the total welfare generated by the solution $S$ is simply the sum of the welfare added in each round, i.e., $\sum_{r=1}^{\ell} \Delta(r)$. This argument concludes the proof, since $O(0)$ is the same as the unconstrained optimum assignment $O$ by definition.

All that remains for us is to prove the claim $v(O(t)) \le v(O(t+1)) + \Delta(t+1)$ for all $t \in \{0, 1, \ldots, \ell - 1\}$. In Lehmann et al. (2006), this claim followed from submodularity. However, since this is no longer a valid approach in our setting, we use a more subtle argument based on the monotonicity result from Proposition 3.

Suppose that in round $t + 1$ our algorithm assigns item $i$ to partition $j$, and let $|S_j(t+1)| = r$, so that $\Delta(t+1) = a^r_{i,j}/r$. Moreover, suppose that in the constrained optimum solution $O(t)$, item $i$ is assigned to partition $j'$, and that the integer parameter $r'$ is such that $\pi_{r'}(O_{j'}(t), j') = i$. A crucial observation here is that $r' > |S_{j'}(t)|$. Indeed, since $S_{j'}(t) \subseteq O_{j'}(t)$, if $r' \le |S_{j'}(t)|$, then it would be the case that\footnote{For convenience, we assume no ties here, although the proof easily extends to the case with ties as long as a consistent tie-breaking rule is used.}
\[
a^{r'}_{i,j'} > a^{r'}_{\pi_{r'}(S_{j'}(t), j'), j'}.
\]
This is a contradiction, since Algorithm 1 greedily assigns the item with the maximum marginal benefit at each round and we know that item $i$ was still unassigned at the end of round $t$. Considering the assignment $O(t+1)$, we have that
\[
v(O(t+1)) \ge v(O(t)) + \left(v_j(O_j(t) \cup \{i\}) - v_j(O_j(t))\right) - \left(v_{j'}(O_{j'}(t)) - v_{j'}(O_{j'}(t) \setminus \{i\})\right). \tag{38}
\]
Starting with the assignment $O(t)$, if we move item $i$ from partition $j'$ to partition $j$, the resulting assignment has a welfare given by the right-hand side of the above inequality. Since this resulting assignment also subsumes $S(t+1)$, its welfare cannot be larger than $v(O(t+1))$. Consider the term $v_j(O_j(t) \cup \{i\}) - v_j(O_j(t))$ from the RHS of (38): this is non-negative by the monotonicity argument laid out in Proposition 3. Similarly, consider the other term from the RHS, namely $v_{j'}(O_{j'}(t)) - v_{j'}(O_{j'}(t) \setminus \{i\})$: this is upper bounded by $a^{r'}_{i,j'}/r'$ as per Proposition 3 and our definition of $r'$. Further, according to Lemma 8, we have that
\[
\frac{a^{r'}_{i,j'}}{r'} \le \frac{a^{|S_{j'}(t)|+1}_{i,j'}}{|S_{j'}(t)| + 1},
\]
since we proved earlier that $r' > |S_{j'}(t)|$. Putting all these ingredients together, we arrive at the desired claim that $v(O(t)) \le v(O(t+1)) + \Delta(t+1)$ for all $t \in \{0, 1, \ldots, \ell - 1\}$:
\begin{align}
v(O(t+1)) &\ge v(O(t)) + \left(v_j(O_j(t) \cup \{i\}) - v_j(O_j(t))\right) - \left(v_{j'}(O_{j'}(t)) - v_{j'}(O_{j'}(t) \setminus \{i\})\right) \nonumber \\
&\ge v(O(t)) + 0 - \frac{a^{r'}_{i,j'}}{r'} \tag{39} \\
&\ge v(O(t)) - \frac{a^{|S_{j'}(t)|+1}_{i,j'}}{|S_{j'}(t)| + 1} \tag{40} \\
&\ge v(O(t)) - \frac{a^r_{i,j}}{r} \tag{41} \\
&= v(O(t)) - \Delta(t+1). \nonumber
\end{align}
Inequality (39) is a product of the monotonicity claims from Proposition 3. Inequality (40) is due to the fact that $r' > |S_{j'}(t)|$ together with Lemma 8. Finally, inequality (41) comes from the property of the greedy algorithm: at round $t + 1$, since the greedy algorithm assigned item $i$ to partition $j$ as opposed to partition $j'$, it must have been the case that $a^{|S_{j'}(t)|+1}_{i,j'}/(|S_{j'}(t)| + 1) \le a^r_{i,j}/r$. This concludes our proof.
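For reference, here is a minimal Python sketch of the greedy assignment rule analyzed above: in each round it assigns the item-partition pair with the largest marginal $a^{|S_j|+1}_{i,j}/(|S_j|+1)$. The score layout, function name, and the explicit capacity interface are our own assumptions, not a verbatim rendering of Algorithm 1.
\begin{verbatim}
def greedy_welfare(rep_scores, capacities):
    """Greedy assignment in the spirit of Algorithm 1 / Lemma 7.
    rep_scores[i][j][r-1] is the replication test score a^r_{i,j} of
    item i in partition j; capacities[j] caps |S_j|."""
    n, m = len(rep_scores), len(capacities)
    assignment = [[] for _ in range(m)]
    unassigned = set(range(n))
    while unassigned:
        best, best_gain = None, -1.0
        for i in unassigned:
            for j in range(m):
                r = len(assignment[j]) + 1
                if r > capacities[j]:
                    continue  # partition j is full
                gain = rep_scores[i][j][r - 1] / r
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:
            break  # no partition has room left
        i, j = best
        assignment[j].append(i)
        unassigned.remove(i)
    return assignment

# Two items, two unit-capacity partitions.
scores = [[[3.0], [1.0]], [[2.5], [2.0]]]
print(greedy_welfare(scores, capacities=[1, 1]))  # [[0], [1]]
\end{verbatim}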

H. Sample Average Approximation Algorithms

H.1. NP-Hardness of Sample-Based Stochastic Optimization

We now present an example of a stochastic submodular optimization problem with a rather simple utility function where employing sample-based algorithms may subsequently result in a discrete optimization problem that is NP-hard. Test score algorithms, on the other hand, avoid the additional overhead brought about by solving secondary optimization problems. More concretely, consider the problem of maximizing a stochastic monotone submodular function subject to a cardinality constraint, where $g(x) = \max\{x_1, x_2, \ldots, x_n\}$. For every $i \in N$, the distribution $P_i$ is defined as follows: let $X_i$ be a random variable such that $X_i = 1$ with probability $p_i$ and $X_i = 0$ with probability $1 - p_i$, for some sufficiently small probabilities $(p_i)_{i \in N}$.

Consider the sample average approximation approach, which first computes a collection of $T$ independent sample vectors $(X^{(t)}_1, X^{(t)}_2, \ldots, X^{(t)}_n)_{t=1}^{T}$, where $X^{(t)}_i \sim P_i$. For a given cardinality parameter $k$, the SAA method would look to compute a subset $S^* \subseteq N$ with $|S^*| = k$ that maximizes the number of `covered' indices $t$, i.e., $\arg\max_{S \subseteq N} \sum_{t=1}^{T} \mathbb{1}\{\exists i \in S : X^{(t)}_i = 1\}$, where $\mathbb{1}\{\cdot\}$ is the indicator function that evaluates to one when the condition inside is true and to zero otherwise. However, this is equivalent to the well-studied maximum coverage problem, which is known to be NP-hard. Note that, for the same instance, a test score algorithm based on replication scores would return the optimum solution with high probability, since the test scores are monotonically increasing in the probability $p_i$. In the following section, we delve deeper into the sampling errors incurred by the test score and SAA methods.
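The contrast can be made concrete with a small simulation. In the sketch below (a toy instance with parameter names of our choosing), the SAA objective is brute-forced over all subsets, which is only feasible because $n$ is tiny, since in general it is the NP-hard maximum coverage problem, whereas the test score approach simply ranks items by their estimated replication scores.
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, k, T = 8, 3, 200
p = rng.uniform(0.05, 0.3, size=n)         # success probability of each item

# SAA: pick the k-subset covering the most sample indices t.  Brute force
# over all subsets; in general this is the NP-hard maximum coverage problem.
joint = rng.random((T, n)) < p             # joint[t, i] = 1{X_i^{(t)} = 1}
saa_set = max(combinations(range(n), k),
              key=lambda S: int(joint[:, list(S)].any(axis=1).sum()))

# Test scores: estimate each item's replication score from k i.i.d. copies
# per sample and simply take the k items with the largest scores.
copies = rng.random((T, k, n)) < p         # copies[t, j, i] ~ P_i
rep_hat = copies.any(axis=1).mean(axis=0)  # estimates 1 - (1 - p_i)^k
ts_set = np.argsort(rep_hat)[-k:]

print(sorted(saa_set), sorted(ts_set.tolist()), sorted(np.argsort(p)[-k:].tolist()))
\end{verbatim}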

H.2. Error Probability for Finite Samples

We discuss the use of sample averages for estimating test scores for the simple example introduced in Example 1 and the numerical results provided in Section 5.1. Our goal is to characterize the probability of error in identifying an optimal set of items due to the use of sample averages for approximating replication test scores for the aforementioned simple example. The simplicity of this example allows us to derive tight characterizations of the number of samples required for the probability of error to be within a prescribed bound. We also conduct a similar analysis for the sample average approximation (SAA) approach, which amounts to enumerating and estimating the value of each feasible set of items, and compare it with the test score based approach.

Recall that we consider a ground set of items $N$ that consists of type-A and type-B items, residing in two disjoint nonempty sets $A$ and $B$, respectively, such that $N = A \cup B$. For each $i \in A$, $X_i = a$ with probability 1, and for each $i \in B$, $X_i = b/p$ with probability $p$ and $X_i = 0$ otherwise, where $a, b > 0$ and $p \in (0, 1]$ are parameters. We assume that $b/p > a$, so that the individual performance of a type-B item is larger than that of any type-A item conditional on the type-B item achieving performance $b/p$. We may think of type-B items as high-risk, high-return items when $p$ is small. We assume that, for given $k$, $|A| \ge k$ and $|B| \ge k$.

We consider the best-shot utility function $u(S) = \mathbb{E}[\max\{X_i \mid i \in S\}]$, which we want to maximize over sets $S \in 2^N$ of cardinality $|S| = k$. Clearly, we can distinguish $k + 1$ equivalence classes of sets $S$ with respect to the value of the utility function: class $r$, defined as having $r$ type-B items and $k - r$ type-A items, for $r \in \{0, 1, \ldots, k\}$. Let $C_{k,r}$ denote all sets of cardinality $k$ that are of class $r$.

For each $S \in C_{k,r}$, we have
\[
u(S) = \mathbb{E}[\max\{X_{(r)}, a\}]
\]
where $X_{(r)}$ is the largest order statistic of the individual performances of the type-B items in $S$:
\[
\Pr[X_{(r)} = b/p] = 1 - \Pr[X_{(r)} = 0] = 1 - (1 - p)^r.
\]


Indeed, we have
\[
u(S) = a (1 - p)^r + \frac{b}{p} \left(1 - (1 - p)^r\right).
\]
Since we assumed that $b/p > a$, we have that $u(S)$ is increasing in the class of the set $S$, achieving the largest value for $r = k$, i.e., when all items are of type B.

In our analysis, we will make use of the well-known Hoeffding inequality (Hoeffding 1963) to bound the probability of the event that an average of independent random variables with bounded supports deviates from its expected value by more than a given amount.

Proposition 4 (Hoeffding's inequality). Let $X_1, X_2, \ldots, X_T$ be independent random variables such that $X_i \in [\alpha_i, \beta_i]$ with probability 1 for all $i \in \{1, 2, \ldots, T\}$, and let $\bar{X} = (X_1 + X_2 + \cdots + X_T)/T$. Then, for every $x \ge 0$,
\[
\Pr\left[\bar{X} - \mathbb{E}[\bar{X}] \ge x\right] \le \exp\left(-\frac{2 x^2 T^2}{\sum_{i=1}^{T} (\beta_i - \alpha_i)^2}\right).
\]

Test scores. Consider sample average estimators of replication test scores defined as follows:
\[
\hat{a}_i = \frac{1}{T} \sum_{t=1}^{T} \max\{X^{(1,t)}_i, X^{(2,t)}_i, \ldots, X^{(k,t)}_i\}
\]
where the $X^{(j,t)}_i$ are independent over $i$, $j$, and $t$, and $X^{(j,t)}_i$ has distribution $P_i$. Indeed, by denoting by $X^{((k),t)}_i$ the largest order statistic of $k$ independent samples from $P_i$, we can write
\[
\hat{a}_i = \frac{1}{T} \sum_{t=1}^{T} X^{((k),t)}_i.
\]
For our example, for every $i \in A$, we have $\hat{a}_i = a$. On the other hand, for every $i \in B$, we have that $X^{((k),t)}_i$ is equal to $b/p$ with probability $1 - (1-p)^k$ and is equal to 0 otherwise. Thus, for every $i \in B$, $a_i = \mathbb{E}[\hat{a}_i] = (b/p)(1 - (1-p)^k)$. In what follows, we assume that $(b/p)(1 - (1-p)^k) > a$, i.e., $\mathbb{E}[\hat{a}_i] < \mathbb{E}[\hat{a}_j]$ for every $i \in A$ and $j \in B$. In this case, in the absence of estimation noise, the replication test score based algorithm correctly identifies an optimum set of items to be a set of $k$ type-B items. We declare an error event to occur if $\hat{a}_j < \hat{a}_i$ for some items $i \in A$ and $j \in B$, and denote by $p_e$ the probability of this event, i.e., $p_e := \Pr[\cup_{i \in A, j \in B} \{\hat{a}_j < \hat{a}_i\}]$.

By Hoeffding's inequality, for any type-A item $i$ and type-B item $j$, we have
\[
\Pr[\hat{a}_j < \hat{a}_i] = \Pr\left[\frac{1}{T} \sum_{t=1}^{T} X^{((k),t)}_j < a\right] \le \exp\left(-2 \left(1 - (1-p)^k - \frac{ap}{b}\right)^2 T\right).
\]
By the union bound, we have
\[
p_e \le |A| |B| \exp\left(-2 p^2 \left(\frac{1 - (1-p)^k}{p} - \frac{a}{b}\right)^2 T\right).
\]
Hence, for $p_e \le \delta$ to hold, for given $\delta \in (0, 1]$, it suffices that the total number of samples $m := nkT$ is such that
\[
m \ge \frac{nk}{2 p^2 \left((1 - (1-p)^k)/p - a/b\right)^2} \log\left(\frac{|A||B|}{\delta}\right). \tag{42}
\]
Under the given assumptions $|A| + |B| = n$ and $|A|, |B| \ge k$, we have $|A||B| \le n^2/4$, so in (42) we can replace $\log(|A||B|/\delta)$ with $2\log(n/2) + \log(1/\delta)$ to obtain a sufficient number of samples. Note that $((1 - (1-p)^k)/p - a/b)^2 = (k - a/b)^2 (1 + o(1))$ for small $p$. Hence, we have $m = \Omega(1/p^2)$.


SAA approach. Consider now a sample average approximation method that amounts to enumerating all feasible sets and then choosing the one that has the best estimated value: for each $S \subseteq N$ such that $|S| = k$, estimate $u(S)$ with the sample average $\hat{u}(S)$ defined as
\[
\hat{u}(S) = \frac{1}{T} \sum_{t=1}^{T} \max\{X^{(t)}_i \mid i \in S\}
\]
where the $X^{(t)}_i$ are independent random variables over $i$ and $t$, and $X^{(t)}_i \sim P_i$, for all $S \in 2^N$ and $t \in \{1, 2, \ldots, T\}$.

For every class-0 set $S$, all of whose elements are of type A, we have $\hat{u}(S) = a$ with probability 1. For every class-$r$ set $S$ with $1 \le r < k$, we have $\hat{u}(S) \ge a$. For every class-$r$ set $S$ with $0 \le r \le k$, we have
\[
\hat{u}(S) = a \left(1 - \frac{X_S}{T}\right) + \frac{b}{p} \frac{X_S}{T}
\]
where $X_S \sim \mathrm{Bin}(T, 1 - (1-p)^r)$.

Comparing $\hat{u}(S) > \hat{u}(S')$ for any two sets $S$ and $S'$ is equivalent to $X_S > X_{S'}$. By Hoeffding's inequality, for any two sets $S$ and $S'$ such that $\mathbb{E}[X_S] > \mathbb{E}[X_{S'}]$, we have
\[
\Pr[X_S \le X_{S'}] \le \exp\left(-\frac{1}{2T} \left(\mathbb{E}[X_S] - \mathbb{E}[X_{S'}]\right)^2\right). \tag{43}
\]
We declare an error event to occur if $\hat{u}(S) < \hat{u}(S')$ for every class-$k$ set $S$ and some class-$r$ set $S'$ with $r < k$, and denote by $p_e$ the probability of this event. Then, by the union bound, we have
\begin{align*}
p_e &= \Pr[X_S < X_{S'} \text{ for every } S \in C_{k,k} \text{ and some } S' \in \cup_{0 \le r < k} C_{k,r}] \\
&\le \Pr\left[\cup_{S' \in \cup_{0 \le r < k} C_{k,r}} \{X_{S_k} < X_{S'}\}\right] \\
&\le \sum_{r=0}^{k-1} |C_{k,r}| \Pr[X_{S_k} \le X_{S_r}] \\
&\le \left(\sum_{r=0}^{k-1} |C_{k,r}|\right) \Pr[X_{S_k} \le X_{S_{k-1}}] \\
&= \left(\binom{n}{k} - \binom{|B|}{k}\right) \Pr[X_{S_k} \le X_{S_{k-1}}]
\end{align*}
where $S_i$ denotes an arbitrarily fixed set in $C_{k,i}$.

Combining with (43), we have
\[
p_e \le \left(\binom{n}{k} - \binom{|B|}{k}\right) \exp\left(-\frac{1}{2} p^2 (1-p)^{2(k-1)} T\right). \tag{44}
\]
Note that the error exponent in (44) is due to discriminating a class-$k$ set from a class-$(k-1)$ set. In order to have $p_e \le \delta$, for given $\delta \in (0, 1]$, it suffices for the total number of samples $m := nT$ to be such that
\[
m \ge \frac{2n}{p^2 (1-p)^{2(k-1)}} \log\left(\frac{\binom{n}{k} - \binom{|B|}{k}}{\delta}\right). \tag{45}
\]
Note that in (45) we can replace $\binom{n}{k} - \binom{|B|}{k}$ with $\binom{n}{k}$, which is tight for $|B| = \Theta(k)$. Furthermore, we can use the well-known inequalities $k \log(n/k) \le \log\binom{n}{k} \le k (\log(n/k) + 1)$. Thus, the logarithmic term in (45) contributes a factor of $k$ to the sufficient number of samples. Note also that $m = \Omega(1/p^2)$.
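A direct numeric comparison of the two sufficient sample sizes can be obtained by evaluating (42) and (45); the following sketch does so for an arbitrary parameter choice of ours (with $|A| = |B| = n/2$).
\begin{verbatim}
from math import comb, log

def m_test_scores(n, k, a, b, p, delta, A, B):
    """Sufficient total number of samples from (42) (replication test scores)."""
    gap = (1 - (1 - p) ** k) / p - a / b
    return n * k / (2 * p**2 * gap**2) * log(A * B / delta)

def m_saa(n, k, B, p, delta):
    """Sufficient total number of samples from (45) (SAA enumeration)."""
    sets = comb(n, k) - comb(B, k)
    return 2 * n / (p**2 * (1 - p) ** (2 * (k - 1))) * log(sets / delta)

n, k, p, delta = 20, 5, 0.1, 0.01
print(m_test_scores(n, k, a=1.0, b=2.0, p=p, delta=delta, A=10, B=10))
print(m_saa(n, k, B=10, p=p, delta=delta))  # much larger in this regime
\end{verbatim}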


Summary. The analysis of the estimation error for the SAA approach requires considering the discrimination of a set of all type-B items from a set that has at least one type-A item. On the other hand, for the approach based on replication test scores, we only need to consider the discrimination of a set of all type-B items from a set of all type-A items. For both approaches, we obtain that the error exponent scales as $\Theta(p^2)$ for small $p$. The SAA approach can require a larger number of samples than the replication test score approach, as demonstrated by the numerical results in Section 5.1.

I. Proof of Proposition 1

Let $X_1, X_2, \ldots, X_n$ be independent random variables with distributions $P_1, P_2, \ldots, P_n$, respectively, and let $X := (X_1, X_2, \ldots, X_n)$. Without loss of generality, assume that items are enumerated in decreasing order of mean test scores, i.e., $\mathbb{E}[X_1] \ge \mathbb{E}[X_2] \ge \cdots \ge \mathbb{E}[X_n]$. Let $S = \{i_1, i_2, \ldots, i_k\}$ be an arbitrary subset of $k$ items in $N$. Then, we have
\begin{align}
u(S) &= \mathbb{E}[g(M_S(X))] \nonumber \\
&= \mathbb{E}\left[[g(M_S(X)) - g(M_{S \setminus \{i_k\}}(X))] + [g(M_{S \setminus \{i_k\}}(X)) - g(M_{S \setminus \{i_{k-1}, i_k\}}(X))] + \cdots + [g(M_{\{i_1\}}(X)) - g(\phi, \ldots, \phi)]\right] \nonumber \\
&= [u(S) - u(S \setminus \{i_k\})] + [u(S \setminus \{i_k\}) - u(S \setminus \{i_{k-1}, i_k\})] + \cdots + [u(\{i_1\}) - u(\emptyset)] \nonumber \\
&\le u(\{i_k\}) + u(\{i_{k-1}\}) + \cdots + u(\{i_1\}) \nonumber \\
&= \sum_{i \in S} \mathbb{E}[X_i] \nonumber \\
&\le \sum_{i=1}^{k} \mathbb{E}[X_i] \tag{46}
\end{align}
where the first inequality follows by the submodularity of the function $u$, and the second inequality by the assumption that items are enumerated in decreasing order of their mean test scores.

By Jensen’s inequality, for every (x1, x2, . . . , xk)∈Rk+, we have

1

k

k∑i=1

xi =1

k

k∑i=1

(xri )1/r ≤

(1

k

k∑i=1

xri

)1/r

.

Hence, we havek∑i=1

E[Xi]≤ k1−1/rE

( k∑i=1

Xri

)1/r . (47)

From (46) and (47), for every S ⊆N such that |S|= k,

u(M) = E

( k∑i=1

Xri

)1/r≥ 1

k1−1/rE

(∑i∈S

Xri

)1/r=

1

k1−1/ru(S).

The tightness can be established as follows. Let $N$ consist of two disjoint subsets of items $M$ and $R$, where $M$ is a set of $k$ items whose individual performances are each of value $1 + \epsilon$ with probability 1, for a parameter $\epsilon > 0$, and $R$ is a set of $k$ items whose individual performances are each of value $a$ with probability $1/a$ and of value 0 otherwise, for a parameter $a \ge 1$. Then, we note that
\[
u(M) = k^{1/r} (1 + \epsilon)
\]
and
\[
u(\mathrm{OPT}) \ge u(R) = \mathbb{E}\left[\left(\sum_{i \in R} X_i^r\right)^{1/r}\right] \ge a \Pr\left[\sum_{i \in R} X_i > 0\right] = a \left(1 - \left(1 - \frac{1}{a}\right)^k\right) \ge a \left(1 - e^{-k/a}\right).
\]
Hence, it follows that
\[
\frac{u(M)}{u(\mathrm{OPT})} \le (1 + \epsilon) \frac{1}{k^{1-1/r}} \frac{k/a}{1 - e^{-k/a}}.
\]
The tightness claim follows by taking $a$ such that $k = o(a)$, in which case $(k/a)/(1 - e^{-k/a}) = 1 + o(1)$.
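The tightness instance is easy to check numerically; the quick Monte Carlo sketch below (parameter values ours) shows the ratio $u(M)/u(\mathrm{OPT})$ approaching $k^{1/r - 1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
k, r, a, eps, trials = 5, 2.0, 200.0, 0.01, 200_000

# u(M): k deterministic items of value 1 + eps under g(x) = (sum x_i^r)^{1/r}.
u_M = k ** (1 / r) * (1 + eps)

# u(R): k risky items worth a w.p. 1/a, estimated by Monte Carlo.
X = (rng.random((trials, k)) < 1 / a) * a
u_R = ((X ** r).sum(axis=1) ** (1 / r)).mean()

# The ratio is close to k^{1/r - 1} once k = o(a), matching the bound.
print(u_M / u_R, k ** (1 / r - 1))
\end{verbatim}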

J. Proof of Proposition 2

Proof of Claim (a). If $k$ is a constant, then there is no $r$ satisfying both conditions $r = o(\log(k))$ and $r > 1$. Hence, it suffices to consider $k = \omega(1)$ and show that the following statement holds: for any given $\theta > 0$, there exists an instance for which greedy selection in decreasing order of quantile test scores cannot give a constant-factor approximation.

Consider the distributions of the random variables $X_i$ defined as follows:

1. Let $X_i$ be equal to $a$ with probability 1, for $1 \le i \le k$. For each of these items, the quantile test score is equal to $a$ and the replication score is equal to $a k^{1/r}$.

2. Let $X_i$ be equal to 0 with probability $1 - 1/n$ and equal to $b\theta n/k$ with probability $1/n$, for $k + 1 \le i \le 2k$. Note that, in the limit as $n$ grows large, each of these items has a quantile test score of value $b$ and a replication score of value $b\theta$.

3. Let $X_i$ be equal to 0 with probability $1 - \theta/k$ and equal to $c$ with probability $\theta/k$, for $2k + 1 \le i \le 3k$. For each of these items, the quantile test score is equal to $c$ and the replication test score is less than or equal to $c \theta^{1/r}$.

4. Let $X_i$ be equal to 0 for $3k + 1 \le i \le n$.

If $\theta$ is a constant, i.e., $\theta = O(1)$, we can easily check that greedy selection in decreasing order of quantile test scores cannot give a constant-factor approximation with $a = b = 1$ and $c = 2$. Under this condition, the selected set of items is $\{2k+1, \ldots, 3k\}$. However, we have
\[
\frac{\mathbb{E}\left[\left(\sum_{i=2k+1}^{3k} X_i^r\right)^{1/r}\right]}{\mathbb{E}\left[\left(\sum_{i=1}^{k} X_i^r\right)^{1/r}\right]} = \frac{\mathbb{E}\left[\left(\sum_{i=2k+1}^{3k} X_i^r\right)^{1/r}\right]}{k^{1/r}} \le \frac{\left(\sum_{i=2k+1}^{3k} \mathbb{E}[X_i^r]\right)^{1/r}}{k^{1/r}} = 2 \left(\frac{\theta}{k}\right)^{1/r} = o(1),
\]
which is because $k = \omega(1)$, $\theta = O(1)$, and $r = o(\log(k))$.

Since $r > 1$, if $\theta$ goes to infinity as $n$ goes to infinity, i.e., for $\theta = \omega(1)$, we have
\[
\frac{\mathbb{E}\left[\left(\sum_{i=2k+1}^{3k} X_i^r\right)^{1/r}\right]}{\mathbb{E}\left[\left(\sum_{i=k+1}^{2k} X_i^r\right)^{1/r}\right]} \le \frac{\left(\sum_{i=2k+1}^{3k} \mathbb{E}[X_i^r]\right)^{1/r}}{\theta} = 2 \theta^{(1-r)/r} = o(1).
\]
Therefore, greedy selection in decreasing order of quantile test scores has a vanishing utility compared to the optimal value.

Proof of Claim (b). Let $T(X, S)$ be the subset of $S$ such that $i \in T(X, S)$ if, and only if, $X_i \ge P_i^{-1}(1 - 1/k)$, for $i \in S$. Let $a_{\max} = \max_{i \in S} a_i$ and $a_{\min} = \min_{i \in S} a_i$. We will show that there exist constants $q$ and $p$ such that
\[
p\, a_{\min} \le \mathbb{E}\left[\left(\sum_{i \in S} X_i^r\right)^{1/r}\right] \le q\, a_{\max}.
\]

Since $(x + y)^{1/r} \le x^{1/r} + y^{1/r}$ for all $x, y \ge 0$ and $r > 1$, we have
\begin{align*}
\mathbb{E}\left[\left(\sum_{i \in S} X_i^r\right)^{1/r}\right] &= \mathbb{E}\left[\left(\sum_{i \in T(X,S)} X_i^r + \sum_{i \in S \setminus T(X,S)} X_i^r\right)^{1/r}\right] \\
&\le \mathbb{E}\left[\left(\sum_{i \in T(X,S)} X_i^r\right)^{1/r} + \left(\sum_{i \in S \setminus T(X,S)} X_i^r\right)^{1/r}\right] \\
&\le \mathbb{E}\left[\sum_{i \in T(X,S)} X_i + \left(\sum_{i \in S \setminus T(X,S)} X_i^r\right)^{1/r}\right] \\
&\le \mathbb{E}\left[\sum_{i \in T(X,S)} X_i + \left(\sum_{i \in S \setminus T(X,S)} (a_{\max})^r\right)^{1/r}\right] \\
&\le \left(\mathbb{E}[|T(X,S)|] + k^{1/r}\right) a_{\max} \\
&= (1 + k^{1/r})\, a_{\max}.
\end{align*}

By the Minkowski inequality, $\left(\sum_{i \in A} \mathbb{E}[X_i]^r\right)^{1/r} \le \mathbb{E}\left[\left(\sum_{i \in A} X_i^r\right)^{1/r}\right]$ for all $A \subseteq S$. Thus, we have
\begin{align*}
\mathbb{E}\left[\left(\sum_{i \in S} X_i^r\right)^{1/r}\right] &= \mathbb{E}\left[\left(\sum_{i \in T(X,S)} X_i^r + \sum_{i \in S \setminus T(X,S)} X_i^r\right)^{1/r}\right] \\
&\ge \mathbb{E}\left[\left(\sum_{i \in T(X,S)} X_i^r\right)^{1/r}\right] \\
&= \sum_{A \subseteq S} \Pr[T(X,S) = A]\, \mathbb{E}\left[\left.\left(\sum_{i \in A} X_i^r\right)^{1/r} \,\right|\, T(X,S) = A\right] \\
&\ge \sum_{A \subseteq S} \Pr[T(X,S) = A] \left(\sum_{i \in A} \mathbb{E}[X_i \mid i \in T(X,S)]^r\right)^{1/r} \\
&\ge \sum_{A \subseteq S} \Pr[T(X,S) = A]\, |A|^{1/r} a_{\min} \\
&\ge \left(1 - (1 - 1/k)^k\right) a_{\min} \\
&\ge (1 - 1/e)\, a_{\min}.
\end{align*}
Therefore, greedy selection in decreasing order of quantile test scores gives a constant-factor approximation of the optimal value.
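A numerical check of the two bounds established in this proof is straightforward for two-point items. In the sketch below we read the quantile test score $a_i$ as $\mathbb{E}[X_i \mid X_i \ge P_i^{-1}(1 - 1/k)]$, consistent with its use in the chain above; this reading, and the instance itself, are our own assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
k, r, trials = 8, 4.0, 200_000

# Two-point items: X_i = vals[i] w.p. probs[i], else 0.  With probs[i] >= 1/k,
# the (1 - 1/k)-quantile of item i is vals[i], so the quantile test score
# a_i = E[X_i | X_i >= P_i^{-1}(1 - 1/k)] equals vals[i].
vals  = rng.uniform(1.0, 3.0, size=k)
probs = rng.uniform(1.0 / k, 0.9, size=k)
a = vals

X = (rng.random((trials, k)) < probs) * vals
u_S = ((X ** r).sum(axis=1) ** (1 / r)).mean()

lower = (1 - 1 / np.e) * a.min()
upper = (1 + k ** (1 / r)) * a.max()
print(lower, u_S, upper)
assert lower <= u_S <= upper
\end{verbatim}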