Diversity-Aware Recommendation for Human Collectives
Pavlos Andreadis, Sofia Ceppi, Michael Rovatsos, Subramanian Ramamoorthy
School of Informatics, University of Edinburgh
Robust Autonomy and Decisions group
CISA Agents group
The SmartSociety project is supported by the European Commission, in the area "FET Proactive: Fundamentals of Collective Adaptive Systems" (FOCAS) (ICT-2011.9.10), as a Collaborative Project (generic), under the 7th Framework Programme, Grant agreement n. 600854.
ECAI, DIVERSITY Workshop, The Hague – August 29, 2016
Sharing Economy Applications
[Diagram: requests and a potential allocation]
Ridesharing Example
[Diagram: ride requests (origin S, destination D, arrival times 13:35 / 14:00, time window 12:00–17:00) and a potential allocation]
Diversity-Aware Recommendation
[Diagram slides illustrating the diversity-aware recommendation setting]
Problems to Address
Selecting set of solutions
Aiding user coordination
Selecting Set of Solutions
Multiple criteria
Goal: adaptive trade-off of system-level utility and fairness
– system-level utility: social welfare
– fairness: number of allocated passengers relative to the number of drivers
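As a rough illustration of the two criteria named on this slide, the sketch below scores a candidate allocation by social welfare (summed user utilities) and by allocated passengers per driver. The data layout, field names, and utility values are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of the two criteria on this slide; the data layout,
# field names, and utility values are illustrative, not the paper's model.

def system_utility(allocation):
    """System-level utility: social welfare, i.e. the summed utilities of
    all users (drivers and passengers) covered by the allocation."""
    return sum(u for ride in allocation for u in ride["utilities"])

def fairness(allocation):
    """Fairness proxy read off this slide: allocated passengers per driver."""
    num_passengers = sum(len(ride["passengers"]) for ride in allocation)
    num_drivers = len(allocation)
    return num_passengers / num_drivers if num_drivers else 0.0

# Two rides, each a driver plus their assigned passengers.
allocation = [
    {"passengers": ["p1", "p2"], "utilities": [0.8, 0.6, 0.7]},
    {"passengers": ["p3"],       "utilities": [0.9, 0.5]},
]
print(system_utility(allocation), fairness(allocation))
```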
Aiding User Coordination
Users select according to solution utility
How? Taxation: modify solution utilities
Taxation scheme depends on user selection behaviour:
– Noiseless
– Constant noise
– Logit noise
Goal: sponsor a solution using minimal taxation
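For the logit-noise case, a standard reading is a softmax choice model: a user picks a recommended solution with probability proportional to the exponential of its (possibly taxed) utility. The sketch below illustrates this reading; the noise parameter beta and the way taxation shifts utilities are assumptions for illustration, not the scheme used in the paper.

```python
import math
import random

def logit_choice_probs(utilities, beta=1.0):
    """Logit (softmax) choice: pick solution i with probability
    proportional to exp(beta * utility_i). beta is an assumed noise level."""
    weights = [math.exp(beta * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def sample_choice(utilities, beta=1.0):
    """Sample which recommended solution a user selects under logit noise."""
    probs = logit_choice_probs(utilities, beta)
    return random.choices(range(len(utilities)), weights=probs, k=1)[0]

# Illustrative taxation: shift perceived utilities so the sponsored
# solution (index 0 here) becomes more attractive to select.
utilities = [0.6, 0.7, 0.5]
tax = 0.15
taxed = [u + tax if i == 0 else u - tax for i, u in enumerate(utilities)]
print(logit_choice_probs(utilities))
print(logit_choice_probs(taxed))
```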
Generating the Recommendation Set
– MILP_system — constraints: feasibility
– MILP_first — constraints: those of MILP_system, plus further constraints
– MILP_others (solved k−1 times) — constraints: those of MILP_first, plus a taxation constraint
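The exact MILP formulations are not given on this slide, so the following is only a schematic stand-in: a toy passenger-to-driver assignment MILP in PuLP, where additional distinct solutions are generated by adding a no-good cut after each solve. This substitutes a simple cut for the MILP_first / MILP_others constraints and the taxation constraint; all data and variable names are hypothetical.

```python
import pulp

# Toy data (hypothetical): utility u[p, d] of assigning passenger p to driver d.
passengers = ["p1", "p2", "p3"]
drivers = ["d1", "d2"]
u = {("p1", "d1"): 0.8, ("p1", "d2"): 0.4,
     ("p2", "d1"): 0.6, ("p2", "d2"): 0.7,
     ("p3", "d1"): 0.3, ("p3", "d2"): 0.9}
capacity = 2  # free seats per driver

def solve(excluded=()):
    """Solve the assignment MILP, ruling out previously generated allocations."""
    prob = pulp.LpProblem("rideshare", pulp.LpMaximize)
    x = {(p, d): pulp.LpVariable(f"x_{p}_{d}", cat="Binary")
         for p in passengers for d in drivers}
    # Objective: social welfare of the allocation.
    prob += pulp.lpSum(u[p, d] * x[p, d] for p in passengers for d in drivers)
    # Feasibility: each passenger rides with at most one driver; seat limits.
    for p in passengers:
        prob += pulp.lpSum(x[p, d] for d in drivers) <= 1
    for d in drivers:
        prob += pulp.lpSum(x[p, d] for p in passengers) <= capacity
    # No-good cuts: forbid every allocation found so far.
    for prev in excluded:
        prob += pulp.lpSum(x[p, d] for (p, d) in prev) <= len(prev) - 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    if pulp.LpStatus[prob.status] != "Optimal":
        return None
    return frozenset(pd for pd, var in x.items() if var.value() > 0.5)

solutions = []
for _ in range(3):  # k = 3 recommended solutions
    sol = solve(excluded=solutions)
    if sol is None:
        break
    solutions.append(sol)
print(solutions)
```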
Experiment Design
Metrics: system utility; fairness; number of passengers allocated; number of drivers with passengers.
Evaluations performed after user selections.
Number of users (10, 20); percentage of users who are drivers (20, 30, 40 %);
utility threshold (50, 75, 100 %);
user selection model (constant, logit); for logit noise, selection probability (60, 80 %).
100 experiment instances per configuration.
Comparisons: our approach vs. the Set Recommendation Benchmark, and vs. the Allocation Benchmark (with rejection).
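Assuming the configurations form a full cross product of the parameter values listed above (an assumption, not stated on the slide), the experimental grid can be enumerated as follows.

```python
from itertools import product

# Enumerating the experimental grid above, assuming (our assumption) a full
# cross product of the listed parameter values, 100 instances each.
num_users = [10, 20]
driver_pct = [20, 30, 40]
utility_threshold = [50, 75, 100]
selection_model = [("constant", None), ("logit", 60), ("logit", 80)]

configs = list(product(num_users, driver_pct, utility_threshold, selection_model))
print(len(configs), "configurations x 100 instances =", 100 * len(configs), "runs")
```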
Set Recommendation Benchmark, Logit Noise
[Results plots: system utility and fairness of our approach vs. the set recommendation benchmark]
We can outperform the benchmark in terms of both system utility and fairness.
Allocation Benchmark, Logit Noise
[Results plots: our approach vs. the allocation benchmark, with and without rejection]
We can allow users to have a choice at no cost to the system or users.
Conclusions
– We presented a methodology for aiding the coordination of user collectives in the absence of agent communication.
– Set recommendation requires explicitly handling the uncertainty in user behaviour.
– Our procedure can match the performance of a direct allocation (given rejection).
– We can allow users to have a choice at no cost to the system.
– We allow for adaptively trading off system-level utility and fairness.
Ongoing work
– Expand uncertainty to consider beliefs over preferences.
– Incorporate active learning procedures in the MILPs.
– Examine robustness to varying degrees of incorrect assumptions.
Why Diversity-Aware?
We present users with options, and we are robust to inaccurate representations of their preferences. Further, we are able to learn from their choices. We can achieve this at no cost to the users or the collective, and we can adaptively trade off between collective and user-specific criteria.