Page 1:

Monte-Carlo Planning:Policy Improvement

Alan Fern

Page 2:

Monte-Carlo Planning

• Often a simulator of a planning domain is available or can be learned from data


[Figure: Fire & Emergency Response; Conservation Planning]

Page 3:

Large Worlds: Monte-Carlo Approach

• Often a simulator of a planning domain is available or can be learned from data

• Monte-Carlo Planning: compute a good policy for an MDP by interacting with an MDP simulator


[Figure: World Simulator ⇄ Real World; the planner sends an action, the world returns the next state + reward]

Page 4:

MDP: Simulation-Based Representation

• A simulation-based representation gives: S, A, R, T, I:

  – finite state set S (|S| = n, generally very large)
  – finite action set A (|A| = m, assumed to be of reasonable size)

  – stochastic, real-valued, bounded reward function R(s,a) = r
      · stochastically returns a reward r given input s and a

  – stochastic transition function T(s,a) = s' (i.e. a simulator)
      · stochastically returns a state s' given input s and a
      · the probability of returning s' is dictated by Pr(s' | s,a) of the MDP

  – stochastic initial state function I
      · stochastically returns a state according to an initial state distribution

These stochastic functions can be implemented in any language!
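As a concrete illustration, these functions can be sketched in Python. The tiny chain MDP and the function names below (sample_initial, sample_reward, sample_transition) are illustrative assumptions, not part of the lecture; any language and naming would do.

```python
import random

# Hypothetical simulation-based representation of a tiny 5-state chain MDP.

N_STATES = 5          # |S| = n (tiny here; "very large" in general)
ACTIONS = [0, 1]      # |A| = m (0 = move left, 1 = move right)

def sample_initial(rng):
    """I: stochastically returns a state from the initial distribution."""
    return rng.randrange(N_STATES)

def sample_reward(s, a, rng):
    """R(s,a) = r: stochastically returns a bounded, real-valued reward."""
    return (1.0 if s == N_STATES - 1 else 0.0) + rng.uniform(-0.1, 0.1)

def sample_transition(s, a, rng):
    """T(s,a) = s': stochastically returns a next state; the sampling
    frequencies implicitly define Pr(s' | s, a)."""
    step = 1 if a == 1 else -1
    if rng.random() < 0.8:                       # intended move succeeds
        return min(max(s + step, 0), N_STATES - 1)
    return s                                     # otherwise stay put
```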

Page 5:

Outline

• You already learned how to evaluate a policy given a simulator
  – Just run the policy multiple times for a finite horizon and average the rewards

• In the next two lectures we'll learn how to use the simulator to select good actions in the real world
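That evaluation procedure is only a few lines of code. A minimal sketch, assuming simulator functions R, T, I with the (hypothetical) signatures used throughout these notes:

```python
import random

def evaluate_policy(policy, R, T, I, h, n_runs, rng):
    """Monte-Carlo policy evaluation: run the policy n_runs times for a
    finite horizon h and average the total rewards."""
    total = 0.0
    for _ in range(n_runs):
        s = I(rng)                    # sample an initial state
        for _ in range(h):
            a = policy(s)
            total += R(s, a, rng)     # accumulate sampled reward
            s = T(s, a, rng)          # sample the next state
    return total / n_runs
```

On a deterministic toy MDP that pays reward 1 per step, the h-horizon estimate is exactly h, which makes a convenient sanity check.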

Page 6:

Monte-Carlo Planning Outline

• Single State Case (multi-armed bandits)
  – A basic tool for other algorithms

• Monte-Carlo Policy Improvement
  – Policy rollout
  – Policy switching

• Monte-Carlo Tree Search
  – Sparse sampling
  – UCT and variants

Today

Page 7:

Single State Monte-Carlo Planning

• Suppose the MDP has a single state and k actions
  – Can sample rewards of actions using calls to the simulator
  – Sampling action a is like pulling a slot machine arm with random payoff function R(s,a)

[Figure: state s with arms a1, a2, …, ak and payoffs R(s,a1), R(s,a2), …, R(s,ak); the Multi-Armed Bandit Problem]

Page 8:

Multi-Armed Bandits

• We will use bandit algorithms as components for multi-state Monte-Carlo planning
  – But they are useful in their own right

• Pure bandit problems arise in many applications

• Applicable whenever:
  – We have a set of independent options with unknown utilities
  – There is a cost for sampling options or a limit on total samples
  – We want to find the best option or maximize the utility of our samples

Page 9:

Multi-Armed Bandits: Examples

• Clinical Trials
  – Arms = possible treatments
  – Arm pulls = application of a treatment to an individual
  – Rewards = outcome of treatment
  – Objective = determine the best treatment quickly

• Online Advertising
  – Arms = different ads/ad types for a web page
  – Arm pulls = displaying an ad upon a page access
  – Rewards = click-throughs
  – Objective = find the best ad quickly (to maximize clicks)

Page 10:

Simple Regret Objective

• Different applications suggest different types of bandit objectives.

• Today minimizing simple regret will be the objective
  – Simple regret minimization (informal): quickly identify an arm with close to optimal expected reward

[Figure: state s with arms a1, a2, …, ak and payoffs R(s,a1), R(s,a2), …, R(s,ak); the Multi-Armed Bandit Problem]

Page 11:

Simple Regret Objective: Formal Definition

Protocol: at time step n

1. Pick an "exploration" arm a_n, then pull it and observe its reward.

2. Pick an "exploitation" arm index j_n that currently looks best (if the algorithm is stopped at time n it returns arm j_n); here a_n and j_n are random variables.

• Let r* be the expected reward of the truly best arm.

• Expected simple regret E[SR_n]: the difference between r* and the expected reward of the arm j_n selected by our strategy at time n.

Page 12:

UniformBandit Algorithm (or Round Robin)

UniformBandit algorithm:
• At round n pull the arm with index ((n-1) mod k) + 1
• At round n return (if asked) the arm with the largest average reward
  – I.e., the returned index is that of the arm with the best average so far

• This bound is exponentially decreasing in n!
  – So even this simple algorithm has a provably small simple regret.

Theorem: The expected simple regret of UniformBandit after n arm pulls is upper bounded by O(e^(-cn)) for a constant c.

Bubeck, S., Munos, R., & Stoltz, G. (2011). Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19), 1832-1852
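A minimal sketch of UniformBandit, assuming a `pull(a, rng)` function (a hypothetical name) that stochastically samples the reward of arm a:

```python
import random

def uniform_bandit(pull, k, n, rng):
    """UniformBandit / round robin: cycle through the k arms for n total
    pulls, then return the arm index with the best average reward."""
    sums, counts = [0.0] * k, [0] * k
    for t in range(n):
        a = t % k                     # round-robin arm choice
        sums[a] += pull(a, rng)
        counts[a] += 1
    # Recommend the arm with the best empirical mean (untried arms excluded).
    return max(range(k),
               key=lambda a: sums[a] / counts[a] if counts[a] else float("-inf"))
```

With well-separated arm means and a budget of a few hundred pulls, the recommended arm is the truly best one with very high probability.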

Page 13:

Can we do better?

ε-GreedyBandit algorithm (parameter ε):
• At round n, with probability ε pull the arm with the best average reward so far; otherwise pull one of the other arms uniformly at random.
• At round n return (if asked) the arm with the largest average reward.

Theorem: The expected simple regret of ε-Greedy with ε = 0.5 after n arm pulls is upper bounded by O(e^(-c'n)) for a constant c' that is larger than the constant for UniformBandit (this holds for "large enough" n).

Tolpin, D. & Shimony, S, E. (2012). MCTS Based on Simple Regret. AAAI Conference on Artificial Intelligence.

ε-GreedyBandit is often more effective than UniformBandit in practice.
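A sketch of ε-GreedyBandit under the same assumed `pull(a, rng)` interface; the tie-breaking detail (untried arms look infinitely promising) is an implementation choice, not from the lecture:

```python
import random

def epsilon_greedy_bandit(pull, k, n, rng, eps=0.5):
    """eps-GreedyBandit: with probability eps pull the arm that currently
    looks best, otherwise pull one of the other arms at random; return
    the arm with the best average after n pulls."""
    sums, counts = [0.0] * k, [0] * k
    def avg(a):
        # Untried arms look infinitely promising, so each gets tried early.
        return sums[a] / counts[a] if counts[a] else float("inf")
    for _ in range(n):
        greedy = max(range(k), key=avg)
        if rng.random() < eps or k == 1:
            a = greedy
        else:
            a = rng.choice([b for b in range(k) if b != greedy])
        sums[a] += pull(a, rng)
        counts[a] += 1
    return max(range(k),
               key=lambda a: sums[a] / counts[a] if counts[a] else float("-inf"))
```

Compared to the round-robin version, more of the pull budget concentrates on the empirically best arm, which is where the improved constant in the regret bound comes from.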

Page 14:

Monte-Carlo Planning Outline

• Single State Case (multi-armed bandits)
  – A basic tool for other algorithms

• Monte-Carlo Policy Improvement
  – Policy rollout
  – Policy switching

• Monte-Carlo Tree Search
  – Sparse sampling
  – UCT and variants

Today

Page 15:

Policy Improvement via Monte-Carlo

• Now consider a very large multi-state MDP.

• Suppose we have a simulator and a non-optimal policy π
  – E.g., the policy could be a standard heuristic or based on intuition

• Can we somehow compute an improved policy?


[Figure: World Simulator + Base Policy ⇄ Real World; the planner sends an action, the world returns the next state + reward]

Page 16:

Policy Improvement Theorem

• Definition: The Q-value function Qπ(s,a,h) gives the expected future reward of starting in state s, taking action a, and then following policy π until the horizon h.
  – How good is it to execute π after taking action a in state s?

• Define: π'(s) = argmax_a Qπ(s,a,h)

• Theorem [Howard, 1960]: For any non-optimal policy π, the policy π' is strictly better than π.

  – So if we can compute π'(s) at any state s we encounter, then we can execute an improved policy.

• Can we use bandit algorithms to compute π'(s)?

Page 17:

Policy Improvement via Bandits

[Figure: state s with arms a1, a2, …, ak and payoffs SimQ(s,a1,π,h), SimQ(s,a2,π,h), …, SimQ(s,ak,π,h)]

• Idea: define a stochastic function SimQ(s,a,π,h) that we can implement and whose expected value is Qπ(s,a,h)

• Then use a bandit algorithm to select (approximately) the action with the best Q-value (i.e., the action π'(s))

How to implement SimQ?

Page 18:

Policy Improvement via Bandits

SimQ(s, a, π, h)
    q = R(s,a)                 // simulate a in s
    s = T(s,a)
    for i = 1 to h-1           // simulate h-1 steps of policy π
        q = q + R(s, π(s))
        s = T(s, π(s))
    return q

[Figure: from state s, each arm ai starts a trajectory under π; the sum of rewards along the trajectory for arm ai is SimQ(s,ai,π,h)]

Page 19:

Policy Improvement via Bandits

SimQ(s, a, π, h)
    q = R(s,a)                 // simulate a in s
    s = T(s,a)
    for i = 1 to h-1           // simulate h-1 steps of policy π
        q = q + R(s, π(s))
        s = T(s, π(s))
    return q

• Simply simulate taking a in s and then following policy π for h-1 steps, returning the (discounted) sum of rewards

• The expected value of SimQ(s,a,π,h) is Qπ(s,a,h)
  – So averaging across multiple runs of SimQ quickly converges to Qπ(s,a,h)
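The pseudocode above translates directly into executable form. This sketch assumes simulator functions R(s,a,rng) and T(s,a,rng) as before; the optional gamma parameter covers the discounted case (gamma = 1 matches the undiscounted pseudocode):

```python
import random

def sim_q(s, a, policy, h, R, T, rng, gamma=1.0):
    """SimQ(s,a,pi,h): simulate taking a in s, then follow pi for h-1
    steps; return the (optionally discounted) sum of rewards."""
    q = R(s, a, rng)                 # simulate a in s
    s = T(s, a, rng)
    disc = gamma
    for _ in range(h - 1):           # simulate h-1 steps of the policy
        act = policy(s)
        q += disc * R(s, act, rng)
        s = T(s, act, rng)
        disc *= gamma
    return q
```

Averaging many independent calls with the same (s, a) gives an unbiased estimate of Qπ(s,a,h).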

Page 20:

Policy Improvement via Bandits

[Figure: state s with arms a1, a2, …, ak and payoffs SimQ(s,a1,π,h), SimQ(s,a2,π,h), …, SimQ(s,ak,π,h)]

• Now apply your favorite simple-regret bandit algorithm

• UniformRollout: use UniformBandit
  Parameters: number of trials n and horizon/height h

• ε-GreedyRollout: use ε-GreedyBandit
  Parameters: number of trials n and horizon/height h (ε = 0.5 is often a good choice)

Page 21:

UniformRollout

[Figure: state s with arms a1, a2, …, ak; each arm ai generates SimQ(s,ai,π,h) trajectories with returns q_i1, q_i2, …, q_iw; each trajectory simulates taking action ai and then following π for h-1 steps; the q_ij are samples of SimQ(s,ai,π,h)]

• Each action is tried roughly the same number of times (approximately n/k times)
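UniformRollout can be sketched as follows, with the SimQ sampler inlined; the simulator functions R and T are assumed as elsewhere in these notes:

```python
import random

def uniform_rollout(s, actions, policy, h, n, R, T, rng):
    """UniformRollout: give each of the k actions roughly n/k SimQ
    samples and return the action with the best average."""
    def sim_q(s0, a):
        q = R(s0, a, rng)
        cur = T(s0, a, rng)
        for _ in range(h - 1):          # follow the policy for h-1 steps
            act = policy(cur)
            q += R(cur, act, rng)
            cur = T(cur, act, rng)
        return q
    per_action = max(1, n // len(actions))
    averages = [sum(sim_q(s, a) for _ in range(per_action)) / per_action
                for a in actions]
    return actions[max(range(len(actions)), key=lambda i: averages[i])]
```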

Page 22:

ε-GreedyRollout

[Figure: state s with arms a1, a2, …, ak; the number of trajectories per arm is non-uniform, e.g. many samples q_11 … q_1u for a1, fewer for the other arms]

• For ε = 0.5 we might expect it to be better than UniformRollout for the same value of n.

• It allocates a non-uniform number of trials across actions (focusing on more promising actions)

Page 23:

Executing Rollout in Real World

[Figure: real-world state/action sequence; at each real-world state we run a policy rollout over simulated experience (arms a1, a2, …, ak), execute the chosen action in the real world, and repeat at the next state]

How much time does each decision take?

Page 24:

Policy Rollout: # of Simulator Calls

• Total of n SimQ calls, each using h calls to the simulator and the policy

• Total of hn calls to the simulator and to the policy (this dominates the time to make a decision)

[Figure: state s with arms a1, a2, …, ak and their SimQ(s,ai,π,h) trajectories; each trajectory simulates taking action ai and then following π for h-1 steps]

Page 25:

Practical Issues: Accuracy

• Selecting the number of trajectories n
  – n should be at least as large as the number of available actions (so each is tried at least once)
  – In general, n needs to be larger as the randomness of the simulator increases (so each action gets tried a sufficient number of times)
  – Rule of thumb: start with n set so that each action can be tried approximately 5 times, then see the impact of decreasing/increasing n

• Selecting the height/horizon h of trajectories
  – A common option is to select h equal to the horizon of the problem being solved
  – Suggestion: set h = -1 in our framework, which runs all trajectories until the simulator hits a terminal state
  – Using a smaller value of h can sometimes be effective if enough reward is accumulated to give a good estimate of the Q-values

In general, larger values are better, but this increases time.

Page 26:

Practical Issues: Speed

• There are three ways to speed up decision-making time:

1. Use a faster policy

Page 27:

Practical Issues: Speed

• There are three ways to speed up decision-making time:
  1. Use a faster policy
  2. Decrease the number of trajectories n

• Decreasing trajectories:
  – If n is small compared to the number of actions k, then performance could be poor since actions don't get tried very often
  – One way to get away with a smaller n is to use an action filter

• Action filter: a function f(s) that returns the subset of the actions in state s that rollout should consider
  – You can use your domain knowledge to filter out obviously bad actions
  – Rollout decides among the remaining actions returned by f(s)
  – Since rollout only tries actions in f(s), we can use a smaller value of n
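A sketch of rollout with an action filter. The name `estimate_q(s, a, rng)` is a hypothetical stand-in for one SimQ sample:

```python
import random

def filtered_rollout(s, actions, f, estimate_q, n, rng):
    """Rollout with an action filter: only the actions in f(s) are
    sampled, so a smaller total trial budget n suffices."""
    candidates = f(s, actions)           # domain knowledge prunes bad actions
    per_action = max(1, n // len(candidates))
    best_a, best_avg = None, float("-inf")
    for a in candidates:
        avg = sum(estimate_q(s, a, rng) for _ in range(per_action)) / per_action
        if avg > best_avg:
            best_a, best_avg = a, avg
    return best_a
```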

Page 28:

Practical Issues: Speed

• There are three ways to speed up either rollout procedure:
  1. Use a faster policy
  2. Decrease the number of trajectories n
  3. Decrease the horizon h

• Decreasing the horizon h:
  – If h is too small compared to the "real horizon" of the problem, then the Q-estimates may not be accurate
  – We can get away with a smaller h by using a value-estimation heuristic

• Heuristic function: a function v(s) that returns an estimate of the value of state s
  – SimQ is adjusted to run the policy for h steps, ending in some state s', and returns the sum of rewards up until s' added to the estimate v(s')
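The adjusted, heuristic-truncated SimQ can be sketched as follows, again assuming simulator functions R and T and a user-supplied heuristic v:

```python
import random

def sim_q_truncated(s, a, policy, h, v, R, T, rng):
    """Truncated SimQ: take a, follow the policy for h-1 more steps to
    reach some state s', and return the accumulated reward plus the
    heuristic value estimate v(s')."""
    q = R(s, a, rng)
    s = T(s, a, rng)
    for _ in range(h - 1):
        act = policy(s)
        q += R(s, act, rng)
        s = T(s, act, rng)
    return q + v(s)   # heuristic stands in for the truncated tail reward
```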

Page 29:

Multi-Stage Rollout

• A single call to Rollout[π,h,w](s) yields one iteration of policy improvement starting from policy π

• We can use more computation time to get multiple iterations of policy improvement via nested calls to Rollout
  – Rollout[Rollout[π,h,w],h,w](s) returns the action for state s resulting from two iterations of policy improvement
  – This can be nested arbitrarily

• This gives a way to use more time to improve performance
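The nesting can be sketched with recursion: depth 1 is ordinary rollout of the base policy, and depth d rolls out the depth-(d-1) rollout policy. The simulator functions R and T are assumed as elsewhere; note the exponential cost in the depth.

```python
import random

def nested_rollout_action(s, depth, actions, base_policy, h, w, R, T, rng):
    """Multi-stage rollout: return the action chosen at state s by the
    depth-level nested rollout policy (w SimQ samples per action)."""
    def rollout_action(st, pol):
        best_a, best_avg = actions[0], float("-inf")
        for a in actions:
            total = 0.0
            for _ in range(w):                   # w samples per action
                q, cur = R(st, a, rng), T(st, a, rng)
                for _ in range(h - 1):
                    ac = pol(cur)
                    q += R(cur, ac, rng)
                    cur = T(cur, ac, rng)
                total += q
            if total / w > best_avg:
                best_a, best_avg = a, total / w
        return best_a
    def policy_at(level):
        if level == 0:
            return base_policy
        inner = policy_at(level - 1)
        return lambda st: rollout_action(st, inner)
    return policy_at(depth)(s)
```

Each extra level multiplies the simulator-call count by roughly the cost of one rollout decision, which is why cost grows exponentially with the number of stages.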

Page 30:

Multi-Stage Rollout

[Figure: state s with arms a1, a2, …, ak; trajectories of SimQ(s,ai,Rollout[π,h,w],h)]

• Each step of a trajectory requires nh simulator calls to evaluate the rollout policy

• Two-stage: compute the rollout policy of the "rollout policy of π"

• Requires (nh)² calls to the simulator for 2 stages

• In general, the cost is exponential in the number of stages


Page 31:

Example: Rollout for Solitaire [Yan et al. NIPS’04]

• Multiple levels of rollout can pay off, but it is expensive

Player               Success Rate   Time/Game
Human Expert         36.6%          20 min
(naive) Base Policy  13.05%         0.021 sec
1 rollout            31.20%         0.67 sec
2 rollout            47.6%          7.13 sec
3 rollout            56.83%         1.5 min
4 rollout            60.51%         18 min
5 rollout            70.20%         1 hour 45 min

Page 32:

Rollout in 2-Player Games

[Figure: state s with arms a1, a2, …, ak; each arm ai generates trajectories q_i1, …, q_iw in which both players p1 and p2 make moves]

• SimQ simply uses the base policy to select moves for both players until the horizon

• Rollout is biased toward playing well against the base policy's line of play

• Is this ok?

Page 33:

Another Useful Technique: Policy Switching

• Suppose you have a set of base policies {π1, π2, …, πM}

• Also suppose that the best policy to use can depend on the specific state of the system, and we don't know how to select among them.

• Policy switching is a simple way to select which policy to use at a given step via a simulator

Page 34:

Another Useful Technique: Policy Switching

[Figure: state s with arms π1, π2, …, πM and payoffs Sim(s,π1,h), Sim(s,π2,h), …, Sim(s,πM,h)]

• The stochastic function Sim(s,π,h) simply samples the h-horizon value of π starting in state s

• Implement it by simulating π starting in s for h steps and returning the discounted total reward

• Use a bandit algorithm to select the best policy, then take the action chosen by that policy

Page 35:

PolicySwitching

PolicySwitch[{π1, π2, …, πM}, h, n](s)

1. Define a bandit with M arms, where pulling arm i gives reward Sim(s,πi,h)

2. Let i* be the index of the arm/policy selected by your favorite bandit algorithm using n trials

3. Return action πi*(s)

πM

v11 v12 … v1w v21 v22 … v2w vM1 vM2 … vMw

… … … … … … … … …

Sim(s,πi,h) trajectories

Each simulates following πi for h steps.

Discounted cumulative rewards
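The three steps above can be sketched directly, here with a uniform trial budget in place of an arbitrary bandit algorithm; the simulator functions R and T are assumed as elsewhere in these notes:

```python
import random

def policy_switch(s, policies, h, n, R, T, rng, gamma=1.0):
    """PolicySwitch: treat each base policy as a bandit arm whose pull is
    Sim(s,pi,h); pick the best arm with roughly n total trials and
    return that policy's action in s."""
    per_arm = max(1, n // len(policies))
    def sim(pi):
        """Sim(s,pi,h): discounted h-horizon return of pi from s."""
        cur, val, disc = s, 0.0, 1.0
        for _ in range(h):
            a = pi(cur)
            val += disc * R(cur, a, rng)
            cur = T(cur, a, rng)
            disc *= gamma
        return val
    averages = [sum(sim(pi) for _ in range(per_arm)) / per_arm
                for pi in policies]
    i_star = max(range(len(policies)), key=lambda i: averages[i])
    return policies[i_star](s)
```

Any simple-regret bandit algorithm (e.g. ε-greedy) could replace the uniform allocation over the M arms.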

Page 36:

Executing Policy Switching in Real World

[Figure: real-world state/action sequence; at each real-world state we run a policy-switching rollout over simulated experience (arms π1, π2, …, πk), execute the chosen policy's action (e.g. π2(s), then πk(s')), and repeat at the next state]

Page 37:

Policy Switching: Quality

• Let πPS denote the ideal switching policy
  – It always picks the best policy index at any state

• The value of the switching policy is at least as good as that of the best single policy in the set
  – It will often perform better than any single policy in the set.
  – For the non-ideal case, where the bandit algorithm only picks an approximately best arm, we can add an error term to the bound.

Theorem: For any state s, VπPS(s) ≥ max_i Vπi(s).

Page 38:

Policy Switching in 2-Player Games

Suppose we have two sets of policies, one for each player:

  Max policies (us): {π1, π2, …, πM}

  Min policies (them): {ψ1, ψ2, …, ψN}

These policy sets will often be the same when the players have the same action sets.

The policies encode our knowledge of what the possible effective strategies in the game might be.

But we might not know exactly when each strategy will be most effective.

Page 39:

MaxiMin Policy Switching

Build Game Matrix (from the current state s, using the game simulator):

[Figure: an M x N game matrix with max policies as rows and min policies as columns]

• Each entry gives the estimated value (for the max player) of playing a policy pair against one another

• Each value is estimated by averaging across w simulated games.
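The matrix-building step can be sketched as follows. The `play_game(s, pi, psi, rng)` function is a hypothetical stand-in for one simulated game from state s that returns the max player's value:

```python
import random

def maximin_switch(s, max_policies, min_policies, play_game, w, rng):
    """MaxiMin policy switching: estimate the game matrix by averaging
    w simulated games per (max, min) policy pair, then act with the max
    policy whose worst-case row value is largest."""
    matrix = [[sum(play_game(s, pi, psi, rng) for _ in range(w)) / w
               for psi in min_policies]
              for pi in max_policies]
    # MaxiMin: best worst-case row for the max player.
    i_star = max(range(len(max_policies)), key=lambda i: min(matrix[i]))
    return max_policies[i_star](s)
```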

Page 40:

MaxiMin Switching

[Figure: the game matrix built from the current state s via the game simulator; the MaxiMin policy selects the row (max policy) with the best worst-case value, and that policy's action is executed]

• Can switch between policies based on the state of the game!

Page 41:

MaxiMin Switching

[Figure: the game matrix built from the current state s via the game simulator]

Parameters in the library implementation:

• Policy sets: {π1, …, πM} and {ψ1, …, ψN}

• Sampling width w: number of simulations per policy pair

• Height/horizon h: horizon used for simulations

Page 42:

Policy Switching: Quality

• MaxiMin policy switching will often do better than any single policy in practice

• The theoretical guarantees for basic MaxiMin policy switching are quite weak
  – Tweaks to the algorithm can fix this

• For single-agent MDPs, policy switching is guaranteed to improve over the best policy in the set.