Optimal Feedback in Contests
George Georgiadis (Northwestern Kellogg)
with
Jeff Ely ● Sina Moghadas Khorasani ● Luis Rayo
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 1 / 27
Introduction
Motivation
Contests can be an effective way to motivate effort
Workplace tournaments
Innovation races
All-pay auctions
Legal & political battles
Athletic events
We are interested in settings where the designer has an informational
advantage over participants about how they are doing mid-contest
This paper:
Characterizes optimal dynamic contest when the designer chooses when
the contest ends, how a prize is allocated, and a real-time feedback policy
In a Nutshell: Framework
At t = 0, the principal chooses:
1 A rule specifying when the contest ends;
2 A rule specifying how the prize is allocated; and
3 A real-time feedback policy
At every t > 0, each agent chooses to work or shirk
Effort generates binary output (a Poisson success)
The principal observes successes but not efforts
Each agent observes his effort (but may learn more via feedback)
Objective: Design a contest to maximize effort given a $1 prize
or equivalently, maximize the number of successes
Examples
Promotion tournaments
Innovation races
2006 Netflix Prize
$1M prize for developing an algorithm that predicts user film ratings
with at least 10% better accuracy than Netflix's own algorithm
Proof-of-work / cryptocurrency mining (e.g., bitcoin)
Agents try to solve a cryptographic puzzle to win newly mined currency
By design, solution (i.e., output) follows a Poisson process
Mining new currency is inflationary ⇒ want to motivate a fixed amount
of effort with as small a prize as possible (i.e., the dual problem to ours)
In a Nutshell: Optimal Contest
Cyclical structure:
Initially, principal sets a provisional deadline T ∗
If one or more agents succeed by T ∗, the contest ends at T ∗
Otherwise, the deadline is extended to t = 2T ∗
If no agent succeeds by 2T∗, the deadline is extended to 3T∗, and so on.
When the contest ends, prize is awarded according to egalitarian rule
i.e., prize is shared equally among all agents who have succeeded
Optimal feedback policy:
Each agent is kept apprised of his own success
A deadline extension signals that no agent has succeeded thus far
Empirical Relevance
Features of optimal design echo stylized facts from actual contests,
findings from field experiments & empirical analyses of contests
I. Cyclical structure: 2006 Netflix Prize
30-day cycles until at least one team achieved 10% target
II. Egalitarian allocation rule: Lim, Ahearne and Ham (2009)
Sales contests with either one $300 prize or five $60 prizes (↑ sales)
III. Frequent feedback about own performance: Gross (2017)
Logo-design competitions: Private feedback improves submissions
Exposing performance differences discourages continued participation
IV. Withhold feedback about rivals: Fershtman and Gneezy (2011)
60-meter footrace: Laggards would quit when running side-by-side
Related Literature
Static tournaments / contests:
Lazear & Rosen (’81), Green & Stokey (’83), Nalebuff & Stiglitz (’83)
Optimal prize allocation: Moldovanu & Sela (’01), Drugov & Ryvkin
(’18, ’19), Olszewski & Siegel (’20)
“Turning down the heat”: Fang, Noe & Strack (’18), Letina, Liu &
Netzer (’20)
Dynamic contests:
Taylor (’95), Benkert & Letina (’20)
Feedback in contests:
“Reveal intermediate progress?”: Yildirim (’05), Lizzeri, Meyer &
Persico (’05), Aoyagi (’10), Ederer (’10), Goltsman & Mukherjee (’19)
Contests for experimentation: Halac, Kartik & Liu (’17)
Model
Model (1/4): Players & Timing
Players: A principal and n ≥ 2 agents
At t = 0, the principal designs a contest comprising
i. a rule specifying when the contest will end,
ii. a rule for allocating a $1 prize, and
iii. a feedback policy
At every t > 0, each agent
receives a message per the feedback policy, and
chooses to work or shirk; i.e., $a_{i,t} \in \{0,1\}$
When the contest ends, prize is awarded according to allocation rule
Model (2/4): Agents’ Output & “Who observes what”
Each agent’s output takes the form of a Poisson “success”:
Conditional on not having succeeded by t, an agent succeeds during
$(t, t + dt)$ with probability $a_{i,t}\,dt$
Who observes what:
Principal observes successes but not efforts
Each agent observes his own effort but not successes
⋆ Without loss of generality, agents can observe their own success but not those of rivals
Model (3/4): Principal’s Choice Variables
i. A termination rule is a stopping time w.r.t. each agent's success time
e.g., the contest may end at a prespecified deadline, upon the first success, etc.
ii. A prize allocation rule specifies each agent’s share of the prize qi as
a function of the agents’ success times
e.g., prize may be awarded to first agent to succeed, randomly, etc
iii. A feedback policy specifies the message sent to each agent at every
instant as a function of the agents’ success times and past messages
e.g., each agent may be kept apprised of whether he has succeeded
Alternatives: Random feedback, feedback about others’ successes,
feedback about feedback, etc
Model (4/4): Payoffs
Given a contest, each agent's expected payoff is
$$ u_{i,0} = \max_{a_{i,t}\in\{0,1\}} \mathbb{E}\left[\, q_i - c\int_0^{\tau} a_{i,t}\,dt \,\right], $$
where $c \in (1/n, 1)$.
The principal chooses a contest to maximize total effort:
$$ \max\ \mathbb{E}\left[ \sum_{i=1}^{n}\int_0^{\tau} a_{i,t}\,dt \right] \quad \text{s.t.}\ \{a_{i,t}\}\ \text{is an equilibrium effort profile and}\ \sum_{i=1}^{n} q_i \le 1, $$
the latter being the resource constraint.
Remarks
i. Robustness of the main result to modeling assumptions
Without loss of generality, each agent can observe whether he has succeeded
The principal can maximize the # of successes (instead of total effort)
Agents can be uncertain about the total # of participants
ii. Constant hazard rate of success
Success during any interval depends only on effort during that interval
Extension to the case in which hazard rate increases with past efforts
Captures the idea that problem-solving ability accumulates with effort
Benchmark: No Feedback Contests
Benchmark: No-feedback Contests
Warm up: Contests without feedback and deterministic deadline
Proposition.
The optimal no-feedback contest ends at a deterministic deadline $T^{ega}$, and
the prize is shared equally among all agents who succeed by the deadline.
In equilibrium, all agents work continuously throughout $[0, T^{ega}]$
Intuition:
Principal has a limited “stock” of incentives to offer ($1 prize)
If an IC is slack at any t, contest wastes incentives, hence is suboptimal
In the proposed design, agents’ incentive constraints bind at all times
Optimal contest is essentially unique
Others exist but they are economically equivalent to the above
Towards an optimal no-feedback contest
Without loss of generality, we can focus on contests in which:
a. An agent wins the prize with positive probability only if he succeeds
b. Working continuously on some interval [0,Ti ] is IC for agent i
Fix a contest & equilibrium, and for each agent define the reward function
$$ R_{i,t} = \mathbb{E}\left[\, q_i \mid \text{succeeding at } t \,\right], $$
which is the agent's expected share of the prize conditional on succeeding at $t$
The egalitarian contest with deadline $T$ has reward functions
$$ R^{ega}_{i,t} = \mathbb{E}\left[ \frac{1}{1 + \#\{\text{rivals who succeed by } T\}} \right], $$
which are time-invariant.
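The time-invariant egalitarian reward $R^{ega} = \mathbb{E}[1/(1+M)]$ can be sanity-checked by simulation. A minimal sketch (not from the talk), assuming each rival works through the whole horizon $T$, so each succeeds independently with probability $1-e^{-T}$, and using the standard identity $\mathbb{E}[1/(1+M)] = (1-(1-p)^n)/(np)$ for $M \sim \mathrm{Binom}(n-1,p)$:

```python
import math
import random

def egalitarian_reward_mc(n, T, sims=200_000, seed=0):
    """Monte Carlo estimate of R^ega = E[1/(1 + M)], where M is the
    number of rivals who succeed by the deadline T and each of the
    n - 1 rivals succeeds independently with probability 1 - e^{-T}."""
    rng = random.Random(seed)
    p = 1 - math.exp(-T)
    total = 0.0
    for _ in range(sims):
        m = sum(rng.random() < p for _ in range(n - 1))
        total += 1.0 / (1 + m)
    return total / sims

def egalitarian_reward_closed_form(n, T):
    """E[1/(1+M)] = (1 - (1-p)^n) / (n p) with p = 1 - e^{-T},
    which simplifies to (1 - e^{-nT}) / (n (1 - e^{-T}))."""
    return (1 - math.exp(-n * T)) / (n * (1 - math.exp(-T)))

n, T = 4, 1.5  # hypothetical values
assert abs(egalitarian_reward_mc(n, T) - egalitarian_reward_closed_form(n, T)) < 5e-3
```

Note the reward depends only on the deadline $T$, not on the agent's own success time $t$, which is the time-invariance claimed above.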
Agents’ Problem
Given reward function $R_{i,t}$, agent $i$'s expected payoff is
$$ u_{i,0} = \max_{\{a_{i,t}\}} \mathbb{E}\int_0^{\tau} \left( R_{i,t}\, e^{-\int_0^t a_{i,s}\,ds} - c \right) a_{i,t}\,dt $$
Lemma 1.
Working continuously throughout $[0, T_i]$ is IC if and only if
$$ \underbrace{e^{-t}R_{i,t}}_{\text{MB at } t} \;\ge\; \underbrace{c}_{\text{direct MC}} \;+\; \underbrace{\int_t^{T_i} e^{-s}R_{i,s}\,ds}_{\text{strategic MC}} \quad \text{for all } t. $$
Observation: IC binds at all times only if $R_{i,t}$ is time-invariant
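With a constant reward $R_{i,s} \equiv R$, the strategic-MC integral in Lemma 1 has a closed form and the IC slack turns out to be identical at every $t$, so IC binds everywhere exactly when it binds at the deadline. A small numerical sketch; the parameter values are hypothetical:

```python
import math

def ic_slack(R, c, t, T):
    """Slack in Lemma 1's IC at time t when the reward is a constant R:
    MB at t minus (direct MC + strategic MC), i.e.
    e^{-t} R - c - R (e^{-t} - e^{-T}),
    using the closed form of ∫_t^T e^{-s} R ds."""
    strategic_mc = R * (math.exp(-t) - math.exp(-T))
    return math.exp(-t) * R - c - strategic_mc

# The slack collapses to R e^{-T} - c, independent of t (hypothetical values):
R, c, T = 0.6, 0.25, 0.8
for t in (0.0, 0.3, T):
    assert abs(ic_slack(R, c, t, T) - (R * math.exp(-T) - c)) < 1e-12
```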
Egalitarian Contest
Egalitarian = prize is shared equally among those who succeed
Define $T^{ega}$ to be the longest an agent is willing to work when the contest
is egalitarian and he expects his rivals to work for $T^{ega}$ units of time.
Proposition.
Any egalitarian contest with deadline $T \ge T^{ega}$ has a pure-strategy
equilibrium in which all agents work throughout $[0, T^{ega}]$
In any such contest, IC binds for all agents and all $t \le T^{ega}$
Remains to show that no other contest motivates total effort $\ge nT^{ega}$
i.e., formalize the argument that if IC is ever slack, the contest wastes incentives
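One consistent reading of this definition (an assumption on my part, not stated on the slide): with the constant egalitarian reward, Lemma 1's IC slack reduces to $e^{-T}R - c$, so $T^{ega}$ would solve $e^{-T}R^{ega}(T) = c$. A hedged numerical sketch under that reading:

```python
import math

def reward(n, T):
    """Egalitarian reward E[1/(1 + M)] when rivals work for T units,
    M ~ Binom(n-1, 1 - e^{-T}): equals (1 - e^{-nT}) / (n (1 - e^{-T}))."""
    return (1 - math.exp(-n * T)) / (n * (1 - math.exp(-T)))

def T_ega(n, c, lo=1e-9, hi=50.0, tol=1e-10):
    """Bisect for the T at which e^{-T} * reward(n, T) = c: under the
    reading above, the longest horizon an agent is willing to work.
    The product falls from 1 (as T -> 0) toward 0, so a unique root
    exists for any c in (0, 1)."""
    g = lambda T: math.exp(-T) * reward(n, T)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > c else (lo, mid)
    return 0.5 * (lo + hi)

n, c = 3, 0.5  # hypothetical values with c in (1/n, 1)
assert abs(math.exp(-T_ega(n, c)) * reward(n, T_ega(n, c)) - c) < 1e-6
```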
Heuristic: Any other contest wastes incentives
Any contest with different reward functions that motivates work
until $T^{ega}$ must have $e^{-t}R_{i,t} > e^{-t}R^{ega}_{i,t}$ on at least some interval
Egalitarian contest: IC at $t'$ requires that $e^{-t'}R^{ega}_{i,t'} \ge c + \int_{t'}^{T^{ega}} e^{-s}R^{ega}_{i,s}\,ds$
Alternative contest: IC at $t'$ requires that $e^{-t'}R_{i,t'} \ge c + \int_{t'}^{T^{ega}} e^{-s}R_{i,s}\,ds$, a strictly larger bound wherever $e^{-s}R_{i,s} > e^{-s}R^{ega}_{i,s}$
It must therefore be the case that $e^{-t}R_{i,t} > e^{-t}R^{ega}_{i,t}$ for all $t < t'$
Recall $R_{i,t}$ is agent $i$'s expected share of the prize conditional on succeeding at $t$
So any feasible contest must satisfy the "budget constraint"
$$ \underbrace{\sum_i \int_0^T e^{-t}R_{i,t}\,dt}_{\Pr\{\text{prize is awarded}\}} \;\le\; \underbrace{1 - e^{-nT}}_{\Pr\{\text{at least one agent succeeds}\}} \tag{BC} $$
Observation: (BC) binds in the proposed contest (by the choice of $T^{ega}$)
Since $e^{-t}R_{i,t} \ge e^{-t}R^{ega}_{i,t}$ for all $t$ and the inequality is strict on at least
some interval, this alternative contest violates (BC)
i.e., this alternative contest is "too expensive"
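That (BC) holds with equality in the egalitarian contest can be checked numerically: with the constant reward $R^{ega}$, the left side integrates to $nR^{ega}(1-e^{-T})$, which equals $1-e^{-nT}$. A quick sketch with illustrative values:

```python
import math

def bc_lhs_egalitarian(n, T, steps=100_000):
    """Left side of (BC), sum_i ∫_0^T e^{-t} R_{i,t} dt, with the
    constant egalitarian reward R^ega; midpoint-rule integration."""
    R = (1 - math.exp(-n * T)) / (n * (1 - math.exp(-T)))
    dt = T / steps
    integral = sum(math.exp(-(k + 0.5) * dt) * dt for k in range(steps)) * R
    return n * integral

n, T = 3, 1.0  # illustrative values
rhs = 1 - math.exp(-n * T)  # Pr{at least one agent succeeds}
assert abs(bc_lhs_egalitarian(n, T) - rhs) < 1e-6  # (BC) binds
```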
Optimal Contest
Contests with arbitrary Feedback Policy
Key Lemma: Sufficient condition for optimality.
A contest is guaranteed to be optimal if in equilibrium:
i. The prize is awarded with probability 1
ii. Each agent earns zero rents
The principal's objective can be rewritten as
$$ \mathbb{E}\left[ \sum_{i=1}^{n}\int_0^{\tau} a_{i,t}\,dt \right] = \frac{1}{c}\Bigg[ \underbrace{\sum_i \mathbb{E}[q_i]}_{\text{Total Surplus}\; \le\; 1} \;-\; \underbrace{\sum_i u_{i,0}}_{\text{Rents}\; \ge\; 0} \Bigg] $$
If a contest attains those bounds, it must be optimal
Note: Optimal no-feedback contest fails both criteria
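The rewriting follows from summing each agent's payoff identity; a short derivation in the slide's notation:

```latex
% Each agent's payoff separates prize share from effort cost:
\begin{align*}
u_{i,0} = \mathbb{E}[q_i] - c\,\mathbb{E}\Big[\int_0^{\tau} a_{i,t}\,dt\Big]
\quad\Longrightarrow\quad
c\,\mathbb{E}\Big[\int_0^{\tau} a_{i,t}\,dt\Big] = \mathbb{E}[q_i] - u_{i,0}.
\end{align*}
% Summing over i and dividing by c yields the displayed objective; hence
% total effort is maximized by pushing total surplus to its upper bound
% (prize awarded w.p. 1) and rents to their lower bound (zero).
```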
Optimal Contest (with Feedback)
$M^{pronto}$ := feedback policy that keeps each agent apprised of his own success
For some $T^*$ (which I will pin down later), define
$$ \tau^* = \inf\{\, t : t = kT^*,\ k \in \mathbb{N},\ \text{and at least one agent has succeeded}\,\} $$
i.e., the contest runs in cycles of length $T^*$ and is terminated at the end
of the first cycle in which one or more agents have succeeded
Proposition 3.
The contest with termination rule τ∗, egalitarian prize allocation rule,
and feedback policy Mpronto is optimal.
In equilibrium, each agent works until he succeeds and earns no rents
Proof Sketch (1/2)
Each agent's payoff at $t$ is
$$ u_{i,t} = \max\ \mathbb{E}\int_t^{\tau^*} \big[(1 - p_{i,s})R_{i,s} - c\big]\, a_{i,s}\,ds, $$
where $p_{i,t}$ is the agent's belief that he has succeeded.
The feedback policy $M^{pronto}$ implies that $p_{i,t} \in \{0,1\}$
While an agent has not succeeded, $p_{i,t} = 0$
As soon as he succeeds, $p_{i,t} = 1$, and he stops working
Given $M^{pronto}$, we can rewrite
$$ u_{i,t} = \max\ \mathbb{E}\int_t^{\tau^*} (R_{i,s} - c)\, a_{i,s}\,ds $$
A contest yields no rents to the agents only if $R_{i,t} = c$ for all $i$ and $t$
Proof Sketch (2/2)
Agent’s reward conditional on succeeding at t, Ri ,t , depends on how
many rivals he expects to succeed by the end of the current cycle
This number $M \sim \text{Binom}(n-1,\, 1-e^{-T^*})$, and so
$$ R_{i,t} = \mathbb{E}\left[ \frac{1}{1+M} \right] = \frac{1 - e^{-nT^*}}{n\,(1 - e^{-T^*})} $$
Pick $T^*$ such that $R_{i,t} = c$, and hence $u_{i,t} = 0$ for all $i$ and $t$
Each agent weakly prefers to work until he succeeds or the contest ends
Finally, the contest does not end until at least one agent succeeds
Therefore, this contest awards the prize w.p. 1 and yields zero rents to agents
(i.e., it satisfies the criteria of the Key Lemma); hence it is optimal.
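Since $R(T) = (1-e^{-nT})/(n(1-e^{-T}))$ falls from $1$ (as $T \to 0$) to $1/n$ (as $T \to \infty$), the condition $R_{i,t} = c$ pins down a unique cycle length for $c \in (1/n, 1)$. A sketch that solves for $T^*$ by bisection; the parameter values are hypothetical:

```python
import math

def reward(n, T):
    """R(T) = E[1/(1 + M)] with M ~ Binom(n-1, 1 - e^{-T}): the
    egalitarian reward when all rivals work through a cycle of length T."""
    return (1 - math.exp(-n * T)) / (n * (1 - math.exp(-T)))

def optimal_cycle(n, c, lo=1e-9, hi=50.0, tol=1e-10):
    """Bisect for T* with reward(n, T*) = c; R is decreasing in T,
    so the root is unique for c in (1/n, 1)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if reward(n, mid) > c else (lo, mid)
    return 0.5 * (lo + hi)

n, c = 3, 0.5
T_star = optimal_cycle(n, c)  # T* ≈ 1.005 for these values
assert abs(reward(n, T_star) - c) < 1e-6
```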
Discussion: Ingredients of optimal contest
1 To extract all rents, the feedback policy must be $M^{pronto}$
Suppose $p_{i,t} \in (0,1)$ yet $(1 - p_{i,t})R_{i,t} = c$. Take an interval s.t. $p_{i,t} > 0$
The agent can pause effort during the first half of the interval so that $p^{private}_{i,t} < p^{eqm}_{i,t}$
Then in the second half, $(1 - p^{private}_{i,t})R_{i,t} > c$; i.e., the agent earns rents
2 The role of the egalitarian prize allocation rule:
Given $M^{pronto}$, the contest yields no rents if and only if $R_{i,t} = c$ for all $i$, $t$
Symmetric & time-invariant reward functions $\Longleftarrow$ egalitarian rule
3 The role of the cyclical structure:
The optimal cycle $T^*$ ensures $R_{i,t} = \mathbb{E}[1/(1 + \#\{\text{rivals who succeed}\})] = c$
Cycles "replenish" incentives by informing agents that no rival has succeeded
Extension: Increasing Hazard Rate
Increasing Hazard Rate
So far, we have assumed a constant hazard rate of success
If an agent works for t units, he succeeds with probability F(t) = 1− e−t
Assume now F is an arbitrary distribution with increasing hazard rate:
λt := f(t) / (1 − F(t)) is (weakly) increasing
Meant to capture knowledge accumulation or a notion of progress,
whereby effort today increases the likelihood of success in the future
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 23 / 27
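To make the assumption concrete, a Weibull family (my choice of example, not from the slides) has hazard λt = (k/s)(t/s)^{k−1}, which is weakly increasing exactly when the shape k ≥ 1; k = 1 recovers the constant-hazard exponential baseline.

```python
def weibull_hazard(t, shape, scale=1.0):
    # Hazard f(t)/(1 - F(t)) of a Weibull distribution:
    # (shape/scale) * (t/scale)^(shape - 1).
    # Weakly increasing iff shape >= 1; shape = 1 gives the
    # constant-hazard (exponential) baseline model.
    return (shape / scale) * (t / scale) ** (shape - 1)

ts = [0.5, 1.0, 2.0, 4.0]
increasing = [weibull_hazard(t, shape=2.0) for t in ts]  # strictly increasing
constant = [weibull_hazard(t, shape=1.0) for t in ts]    # all equal to 1
```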
Extension: Increasing Hazard Rate
Increasing Hazard Rate: Optimal Contest
Proposition 4.
Assume λt ∈ (c,nc) for all t. There is an optimal contest with:
1 Stochastic cyclical structure: Each cycle ends with rate γ(λt)
2 At the end of each cycle, if a success has occurred, contest ends and
prize is awarded according to EGA rule; otherwise, a new cycle starts
3 Feedback policy Mpronto ; i.e., agents are apprised of own success
To extract all rents, we must have λtRi,t = c for all i, t
Since λt is increasing, incentives must be frontloaded (i.e., Ri,t ↓)
Stochastic cycles: Reward from succeeding earlier in a cycle is larger
than doing so later, because agent will share prize with fewer others.
By choosing the distribution of cycle length, can fine tune Ri,t = c/λt
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 24 / 27
Application
Proof-of-Work Protocol
Proof-of-work is the backbone of many cryptocurrencies
To add a set of transactions to the blockchain, a “miner” must first
solve a cryptographic puzzle
Doing so requires expending computing power (i.e., electricity)
As a reward, winner earns newly mined crypto-coins
By design:
Output is binary and follows a Poisson process
Arrival rate λ is determined by network and miner’s computing power
The first miner to solve the puzzle wins; i.e., contest is first-to-succeed
The idea is to make it costly to make changes to the blockchain
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 25 / 27
Application
Bitcoin: Back-of-the-envelope calculations
Some numbers for Bitcoin (as of Feb. 11, 2021):
Prize: 6.25 Bitcoins ×$47,860 ≃ $300K
State-of-the-art miner: Antminer S19 Pro (lists for $3,769)
In expectation, this miner solves the puzzle once every 26.5 years
In Illinois, it consumes electricity at rate of approx. $2,720 / year
Rents = λ × R − c = (1/26.5 years) × $300K − $2,720 ≃ $8,600 / year
Dual problem: Minimize prize that implements a fixed total effort
Optimal cyclical egalitarian design:
Implements the same computing effort with prize of only 1.5 Bitcoin
Trade-off: Extract rents vs. entry and incentives to invest in equipment
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 26 / 27
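The slide’s arithmetic can be checked directly, using the Feb. 2021 figures quoted above:

```python
# Back-of-envelope rents check, using the slide's Feb. 11, 2021 figures
prize_usd = 6.25 * 47_860            # block reward: 6.25 BTC x $47,860 ~ $300K
years_per_success = 26.5             # one Antminer S19 Pro's expected solve time
electricity_per_year = 2_720         # Illinois electricity cost, $ / year

revenue_per_year = prize_usd / years_per_success          # lambda x R
rents_per_year = revenue_per_year - electricity_per_year  # ~ $8,600 / year
```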
Discussion
Discussion
Contest design with endogenous feedback
Cyclical structure
Egalitarian prize allocation rule
Each agent is always apprised of own success, but is informed of
rivals’ successes only periodically
Future work
Decreasing hazard rate
Heterogeneous agents
More general production functions (e.g., continuous output / effort)
Ely, Georgiadis, Khorasani and Rayo Optimal Feedback in Contests Northwestern Kellogg 27 / 27