
Monte-Carlo method for Two-Stage SLP

Lecture 5

Leonidas Sakalauskas
Institute of Mathematics and Informatics, Vilnius, Lithuania
EURO Working Group on Continuous Optimization

Content

Introduction; Monte-Carlo estimators; stochastic differentiation; ε-feasible gradient approach for two-stage SLP; interior-point method for two-stage SLP; testing optimality; convergence analysis; counterexample.

Two-stage stochastic optimization problem

F(x) = c^\top x + \mathbf{E} \min_{y \in R^m_+} \{\, q^\top y \mid W y + T x = h \,\} \to \min,

subject to the feasible set

D = \{\, x \in R^n_+ \mid A x = b \,\},

where we assume the vectors q, h and the matrices W, T to be random in general and, consequently, to depend on an elementary event \omega.

The two-stage stochastic optimization problem with complete recourse is then

F(x) = c^\top x + \mathbf{E}\, Q(x, \omega) \to \min_{x \in D},

Q(x, \omega) = \min_{y \in R^m_+} \{\, q^\top y \mid W y + T x = h \,\}.

It can be derived that, under the assumption of the existence of a solution to the second-stage problem and the continuity of the measure P, the objective function is smoothly differentiable and its gradient is expressed as

\nabla_x F(x) = \mathbf{E}\, g(x, \omega),

where

g(x, \omega) = c - T^\top u^*

is given by the set of solutions of the dual problem

(h - T x)^\top u^* = \max_u \{\, (h - T x)^\top u \mid W^\top u \le q,\; u \in R^s \,\}.
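For illustration, one observation of f(x, ω) and g(x, ω) can be computed by solving this dual LP with an off-the-shelf solver. A minimal sketch using scipy.optimize.linprog; the function name and the scenario representation (q, h, T, W) are assumptions of the sketch, and complete recourse is assumed so that the dual has a bounded solution:

```python
import numpy as np
from scipy.optimize import linprog

def stochastic_gradient(x, c, q, h, T, W):
    """One observation f(x, omega), g(x, omega) for a sampled scenario (q, h, T, W).

    Solves the dual LP  max (h - T x)^T u  s.t.  W^T u <= q  and uses
    g(x, omega) = c - T^T u*.  Assumes complete recourse (dual bounded).
    """
    r = h - T @ x
    # linprog minimises, so negate the objective; u is a free variable
    res = linprog(c=-r, A_ub=W.T, b_ub=q, bounds=(None, None), method="highs")
    u = res.x                       # dual solution u*
    f = c @ x + r @ u               # strong duality: q^T y* = (h - T x)^T u*
    g = c - T.T @ u                 # gradient observation g(x, omega)
    return f, g
```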

Monte-Carlo samples

We assume here that Monte-Carlo samples of a certain size N are provided for any x \in R^n:

Y = (y^1, y^2, \ldots, y^N),

and the sampling estimator of the objective function can be computed:

\tilde F(x) = \frac{1}{N} \sum_{j=1}^{N} f(x, y^j).

The sampling variance can be computed as well, which is useful for evaluating the accuracy of the estimator:

\tilde D^2(x) = \frac{1}{N - 1} \sum_{j=1}^{N} \left( f(x, y^j) - \tilde F(x) \right)^2.

Gradient

The gradient is evaluated using the same random sample:

\tilde g(x) = \frac{1}{N} \sum_{j=1}^{N} g(x, y^j), \qquad x \in D \subset R^n.

Covariance matrix

We use the sampling covariance matrix

A(x) = \frac{1}{N - n} \sum_{j=1}^{N} \left( g(x, y^j) - \tilde g(x) \right) \left( g(x, y^j) - \tilde g(x) \right)^\top

later on for normalising the gradient estimator.
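Collecting the per-scenario observations f(x, y^j) and g(x, y^j) (e.g., from a routine like the dual-LP sketch above), all four sampling estimators follow from a few lines of numpy; the names here are illustrative:

```python
import numpy as np

def mc_estimators(f_obs, g_obs):
    """f_obs: (N,) array of f(x, y^j); g_obs: (N, n) array of g(x, y^j)."""
    N, n = g_obs.shape
    F = f_obs.mean()               # objective estimate F~(x)
    D2 = f_obs.var(ddof=1)         # sampling variance with 1/(N-1)
    g = g_obs.mean(axis=0)         # gradient estimate g~(x)
    Z = g_obs - g
    A = Z.T @ Z / (N - n)          # covariance matrix with 1/(N-n), as on the slide
    return F, D2, g, A
```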

Approaches to the stochastic gradient

We examine several estimators of the stochastic gradient: the analytical approach (AA); the finite-difference approach (FD); simultaneous perturbation stochastic approximation (SPSA); the likelihood-ratio approach (LR).

Analytical approach (AA)

The gradient is expressed as

\nabla_x F(x) = \mathbf{E}\, g(x, \omega),

where

g(x, \omega) = c - T^\top u^*

is given by the set of solutions of the dual problem

(h - T x)^\top u^* = \max_u \{\, (h - T x)^\top u \mid W^\top u \le q,\; u \in R^s \,\}.

Gradient search procedure

Let some initial point x^0 \in D \subset R^n be given. A random sample of a certain initial size N_0 is generated at this point, and the Monte-Carlo estimates are computed. The following iterative stochastic procedure of gradient search can then be used:

x^{t+1} = x^t - \rho\, \tilde g(x^t).
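A minimal sketch of this plain search loop; the step length rho and the iteration count are illustrative choices, not values from the lecture:

```python
import numpy as np

def gradient_search(x0, estimate_gradient, rho=0.1, max_iter=100):
    """Iterate x^{t+1} = x^t - rho * g~(x^t) with a Monte-Carlo gradient estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = x - rho * estimate_gradient(x)
    return x
```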

ε-feasible direction approach

Let us define the set of feasible directions as follows:

V(x) = \{\, g \in R^n \mid A g = 0,\; g_j = 0 \text{ if } x_j = 0,\; 1 \le j \le n \,\}.

Gradient projection

Denote by g_U the projection of the vector g onto the set U. Since the objective function is differentiable, the solution x \in D is optimal if

\left( \nabla F(x) \right)_{V(x)} = 0.

Assume a certain multiplier \hat\rho > 0 to be given. Define the function \rho_x : V(x) \to R_+ by

\rho_x(g) = \min \left( \hat\rho,\; \min_{1 \le j \le n,\; g_j > 0} \frac{x_j}{g_j} \right).

Thus x - \rho_x(g)\, g \in D for any x \in D, when g \in V(x).

Now, let a certain small value \hat\varepsilon > 0 be given. Then we introduce the function

\varepsilon_x(g) =
\begin{cases}
\min \left( \hat\varepsilon,\; \max_{1 \le j \le n,\; g_j > 0} \frac{x_j}{g_j} \right), & \text{if } \max_{1 \le j \le n} g_j > 0, \\
0, & \text{if } \max_{1 \le j \le n} g_j \le 0,
\end{cases}

and define the ε-feasible set

V_\varepsilon(x) = \{\, g \in R^n \mid A g = 0,\; g_j = 0 \text{ if } x_j \le \varepsilon_x(g),\; 1 \le j \le n \,\}.

The starting point can be obtained as the solution of the deterministic linear problem:

(x^0, y^0) = \arg\min_{x, y} \left[\, c^\top x + q^\top y \mid A x = b,\; W y + T x = h,\; y \in R^m_+,\; x \in R^n_+ \,\right].

The following iterative stochastic procedure of gradient search can then be used:

x^{t+1} = x^t - \rho^t\, \tilde G(x^t),

where \rho^t is the step-length multiplier and \tilde G(x^t) is the projection of the gradient estimator \tilde g(x^t) onto the ε-feasible set V_\varepsilon(x^t).
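The projection onto V_ε(x) is an orthogonal projection onto a linear subspace (A g = 0 together with zeroed coordinates). A minimal sketch, assuming the active index set {j : x_j ≤ ε_x(g̃)} has already been determined:

```python
import numpy as np

def project_epsilon_feasible(g, A, active):
    """Project g onto {d : A d = 0, d_j = 0 for j in active}."""
    n = g.size
    free = np.setdiff1d(np.arange(n), active)   # coordinates allowed to move
    d = np.zeros(n)
    Af = A[:, free]                             # constraints restricted to free coordinates
    gf = g[free]
    # orthogonal projector onto the null space of Af: gf - Af^T (Af Af^T)^+ Af gf
    d[free] = gf - Af.T @ (np.linalg.pinv(Af @ Af.T) @ (Af @ gf))
    return d
```

One step of the procedure is then x^{t+1} = x^t - \rho^t \cdot project_epsilon_feasible(g̃(x^t), A, active).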

Monte-Carlo sample size problem

There is no great need to compute the estimators with high accuracy at the start of the optimisation, because at that stage it suffices to evaluate only approximately the direction leading to the optimum.

Therefore, one can use rather small samples at the beginning of the search and increase the sample size later on, so as to obtain an estimate of the objective function with the desired accuracy exactly at the time of deciding that a solution to the optimisation problem has been found.

We propose the following rule for regulating the sample size in practice:

N_{t+1} = \min \left( \max \left( N_{\min},\; \frac{n \cdot \mathrm{Fish}(\gamma, n, N_t - n)}{\left( \tilde G(x^t) \right)^\top \left( A(x^t) \right)^{-1} \tilde G(x^t)} \right),\; N_{\max} \right),

where \mathrm{Fish}(\gamma, n, N_t - n) denotes the \gamma-quantile of the Fisher distribution with (n, N_t - n) degrees of freedom.
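A direct transcription of this rule, assuming scipy.stats for the Fisher quantile; N_min, N_max and γ default to the values used on the simulation slides:

```python
import numpy as np
from scipy.stats import f as fisher

def next_sample_size(g, A_cov, N, n, gamma=0.95, N_min=100, N_max=10000):
    """N_{t+1}: inversely proportional to the normalised squared gradient norm."""
    quad = g @ np.linalg.solve(A_cov, g)              # G~^T A^{-1} G~
    proposal = n * fisher.ppf(gamma, n, N - n) / quad
    return int(min(max(N_min, np.ceil(proposal)), N_max))
```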

Statistical testing of the optimality hypothesis

The optimality hypothesis can be accepted for some point x^t with significance \gamma if the following condition is satisfied:

T^2 = \frac{N_t - n}{n} \left( \tilde G(x^t) \right)^\top \left( A(x^t) \right)^{-1} \tilde G(x^t) \le \mathrm{Fish}(\gamma, n, N_t - n).

Next, we can use the asymptotic normality again and decide that the objective function is estimated with a permissible accuracy \varepsilon, if its confidence bound does not exceed this value:

\eta_\beta \sqrt{\tilde D^2(x^t) / N_t} \le \varepsilon,

where \eta_\beta is the \beta-quantile of the standard normal distribution.
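Both termination criteria combined, as a sketch; the confidence bound uses the standard normal quantile, following the slide's asymptotic-normality argument, and all parameter names are illustrative:

```python
import numpy as np
from scipy.stats import f as fisher, norm

def optimality_reached(g, A_cov, N, n, D2, eps, gamma=0.95, beta=0.95):
    """Accept optimality if the Hotelling T^2 test passes AND the confidence
    bound of the objective estimate does not exceed the admissible value eps."""
    T2 = (N - n) / n * (g @ np.linalg.solve(A_cov, g))
    gradient_ok = T2 <= fisher.ppf(gamma, n, N - n)
    accuracy_ok = norm.ppf(beta) * np.sqrt(D2 / N) <= eps
    return gradient_ok and accuracy_ok
```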

Computer simulation

We consider a two-stage stochastic linear optimisation problem. The dimensions of the task are as follows: the first stage has 10 rows and 20 variables; the second stage has 20 rows and 30 variables.

The test problem is taken from the two-stage stochastic programming collection at http://www.math.bme.hu/~deak/twostage/ l1/20x20.1/ (2006-01-20).

The estimate of the optimal value of the objective function given in the database is 182.94234 ± 0.066.

Initial data were as follows: N_0 = N_min = 100, N_max = 10000; the maximal number of iterations t_max = 100; the generation of trials was stopped when the estimated confidence interval of the objective function no longer exceeded the admissible value \varepsilon. The confidence levels were set to 0.95 and 0.99, and the admissible interval \varepsilon = 0.1; 0.2; 0.5; 1.0.

[Figure: Frequency of stopping under admissible intervals ε = 0.1; 0.2; 0.5; 1.]

[Figure: Change of the objective function under admissible intervals ε = 0.1; 0.2; 0.5; 1.]

[Figure: Change of the confidence interval under admissible intervals ε = 0.1; 0.2; 0.5; 1.]

[Figure: Change of the Monte-Carlo sample size under admissible intervals ε = 0.1; 0.2; 0.5; 1.]

[Figure: Change of the Hotelling statistics under admissible intervals ε = 0.1; 0.2; 0.5; 1.]

[Figure: Histogram of the sample-size ratio under admissible interval ε = 1.]

Wrap-Up and Conclusions

A stochastic adaptive method has been developed to solve stochastic linear problems by a finite sequence of Monte-Carlo sampling estimators.

The method is grounded in adaptive regulation of the size of the Monte-Carlo samples and a statistical termination procedure that takes the accuracy of the statistical modeling into consideration.

The proposed adjustment of the sample size, taken inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient, guarantees convergence a.s. at a linear rate.
