
EE787 Autumn 2018 Jong-Han Kim

Optimization

Jong-Han Kim

EE787 Fundamentals of machine learning

Kyung Hee University

1


Optimization problems and algorithms

2


Optimization problem

minimize f(θ)

• θ ∈ R^d is the variable or decision variable

• f : R^d → R is the objective function

• goal is to choose θ to minimize f

• θ⋆ is optimal means that for all θ, f(θ) ≥ f(θ⋆)

• f⋆ = f(θ⋆) is the optimal value of the problem

• optimization problems arise in many fields and applications, including machine learning

3


Optimality condition

[figure: a one-dimensional function with an optimal stationary point and a non-optimal stationary point]

• let's assume that f is differentiable, i.e., the partial derivatives ∂f(θ)/∂θ_i exist

• if θ⋆ is optimal, then ∇f(θ⋆) = 0

• ∇f(θ) = 0 is called the optimality condition for the problem

• there can be points that satisfy ∇f(θ) = 0 but are not optimal

• we call points that satisfy ∇f(θ) = 0 stationary points

• not all stationary points are optimal

4


Solving optimization problems

• in some cases, we can solve the problem analytically

• e.g., least squares: minimize f(θ) = ‖Xθ − y‖²

• optimality condition is ∇f(θ) = 2Xᵀ(Xθ − y) = 0

• this has the (unique) solution θ⋆ = (XᵀX)⁻¹Xᵀy = X†y (when the columns of X are linearly independent)

• in other cases, we resort to an iterative algorithm that computes a sequence θ^1, θ^2, … with, hopefully, f(θ^k) → f⋆ as k → ∞
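
As a concrete illustration of the analytic case, here is a minimal NumPy sketch on synthetic data (the sizes and data are made up); np.linalg.lstsq computes the same θ⋆ via the pseudoinverse, which is numerically preferable to forming (XᵀX)⁻¹ explicitly.

```python
import numpy as np

# synthetic least squares problem (sizes and data are made up for illustration)
rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# analytic solution: theta* = (X^T X)^{-1} X^T y, valid when the columns of X
# are linearly independent
theta_star = np.linalg.solve(X.T @ X, X.T @ y)

# equivalent but numerically preferable: theta* = X^dagger y via a least squares solver
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

# optimality condition: the gradient 2 X^T (X theta - y) is (numerically) zero at theta*
grad = 2 * X.T @ (X @ theta_star - y)
print(np.allclose(theta_star, theta_lstsq), np.linalg.norm(grad))
```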

5


Iterative algorithms

• an iterative algorithm computes a sequence θ^1, θ^2, …

• θ^k is called the kth iterate

• θ^1 is called the starting point

• many iterative algorithms are descent methods, which means

f(θ^(k+1)) < f(θ^k),   k = 1, 2, …

i.e., each iterate is better than the previous one

• this means that f(θ^k) converges, but not necessarily to f⋆

6


Stopping criterion

• in practice, we stop after a finite number K of steps

• typical stopping criterion: stop if ‖∇f(θ^k)‖ ≤ ε or k = k_max

• ε is a small positive number, the stopping tolerance

• k_max is the maximum number of iterations

• in words: we stop when θ^k is almost a stationary point

• we hope that f(θ^K) is not too much bigger than f⋆

• or, more realistically, that θ^K is at least useful for our application

7


Non-heuristic and heuristic algorithms

• in some cases we know that f(θ^k) → f⋆, for any θ^1

• in words: we'll get to a solution if we keep iterating

• called non-heuristic

• other algorithms do not guarantee that f(θ^k) → f⋆

• we can hope that even if f(θ^k) ↛ f⋆, θ^k is still useful for our application

• called heuristic

8


Convex functions

[figures: Huber penalty (convex), deadzone penalty (convex), log Huber penalty (non-convex)]

• a function f : R^d → R is convex if for any θ, θ̃, and λ with 0 ≤ λ ≤ 1,

f(λθ + (1 − λ)θ̃) ≤ λf(θ) + (1 − λ)f(θ̃)

• roughly speaking, f has 'upward curvature'

• for d = 1, same as f″(θ) ≥ 0 for all θ
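
As a quick numerical illustration (not a proof), the sketch below spot-checks the defining inequality at random points for two penalty formulas assumed here for illustration: the Huber penalty and one common log Huber form. The convex one produces no violations; the non-convex one does.

```python
import math
import random

def huber(u, m=1.0):
    # Huber penalty (convex): quadratic near zero, linear tails
    return u**2 if abs(u) <= m else m * (2 * abs(u) - m)

def log_huber(u, m=1.0):
    # one common log Huber form (an assumption here): quadratic near zero, logarithmic tails
    return u**2 if abs(u) <= m else m**2 * (1 + 2 * math.log(abs(u) / m))

random.seed(0)
for f in (huber, log_huber):
    violations = 0
    for _ in range(100_000):
        a, b = random.uniform(-5, 5), random.uniform(-5, 5)
        lam = random.random()
        # convexity requires f(lam*a + (1-lam)*b) <= lam*f(a) + (1-lam)*f(b)
        if f(lam * a + (1 - lam) * b) > lam * f(a) + (1 - lam) * f(b) + 1e-9:
            violations += 1
    print(f.__name__, "violations:", violations)  # huber: 0, log_huber: many
```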

9


Convex optimization

• the optimization problem

minimize f(θ)

is called convex if the objective function f is convex

• for a convex optimization problem, ∇f(θ) = 0 only for θ optimal, i.e., all stationary points are optimal

• algorithms for convex optimization are non-heuristic

• i.e., we can solve convex optimization problems (exactly, in principle)

10


Convex ERM problems

• regularized empirical risk function f(θ) = L(θ) + λr(θ), with λ ≥ 0,

L(θ) = (1/n) Σᵢ₌₁ⁿ p(θᵀxᵢ − yᵢ),   r(θ) = q(θ_1) + ⋯ + q(θ_d)

• f is convex if the loss penalty p and the parameter penalty q are convex functions

• convex penalties: square, absolute, tilted absolute, Huber

• non-convex penalties: log Huber, square root
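
A sketch of how such a regularized empirical risk might be assembled from a loss penalty p and a parameter penalty q; the function names, the synthetic data, and the value of λ are illustrative assumptions.

```python
import numpy as np

def square(u):
    return u**2

def absolute(u):
    return np.abs(u)

def regularized_risk(theta, X, y, p=square, q=absolute, lam=0.1):
    # f(theta) = L(theta) + lam * r(theta), with
    # L(theta) = (1/n) sum_i p(theta^T x_i - y_i) and r(theta) = sum_j q(theta_j)
    L = np.mean(p(X @ theta - y))
    r = np.sum(q(theta))
    return L + lam * r

# example: square loss penalty with an absolute-value parameter penalty
# (both convex, so f is convex); data are synthetic
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 3)), rng.normal(size=50)
print(regularized_risk(np.zeros(3), X, y))
```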

11


Gradient method

12


Gradient method

• assume f is differentiable

• at iteration k, create an affine (Taylor) approximation of f valid near θ^k:

f̂(θ; θ^k) = f(θ^k) + ∇f(θ^k)ᵀ(θ − θ^k)

• f̂(θ; θ^k) ≈ f(θ) for θ near θ^k

• choose θ^(k+1) to make f̂(θ^(k+1); θ^k) small, but with ‖θ^(k+1) − θ^k‖ not too large

• choose θ^(k+1) to minimize f̂(θ; θ^k) + (1/(2h_k))‖θ − θ^k‖²

• h_k > 0 is a trust parameter or step length or learning rate

• solution is θ^(k+1) = θ^k − h_k∇f(θ^k)

• roughly: take a step in the direction of the negative gradient

13


Gradient method update

• choose θ^(k+1) as the minimizer of

f(θ^k) + ∇f(θ^k)ᵀ(θ − θ^k) + (1/(2h_k))‖θ − θ^k‖²

• rewrite this as

f(θ^k) + (1/(2h_k))‖(θ − θ^k) + h_k∇f(θ^k)‖² − (h_k/2)‖∇f(θ^k)‖²

• the first and third terms don't depend on θ

• the middle term is minimized (made zero!) by the choice

θ = θ^k − h_k∇f(θ^k)

14


How to choose step length

• if h_k is too large, we can have f(θ^(k+1)) > f(θ^k)

• if h_k is too small, we have f(θ^(k+1)) < f(θ^k) but progress is slow

• a simple scheme:

  – if f(θ^(k+1)) > f(θ^k), set h_(k+1) = h_k/2 and θ^(k+1) = θ^k (a rejected step)

  – if f(θ^(k+1)) ≤ f(θ^k), set h_(k+1) = 1.2 h_k (an accepted step)

• reduce the step length by half if it's too long; increase it by 20% otherwise

15


Gradient method summary

choose an initial θ^1 ∈ R^d and h_1 > 0 (e.g., θ^1 = 0, h_1 = 1)

for k = 1, 2, …, k_max:

1. compute ∇f(θ^k); quit if ‖∇f(θ^k)‖ is small enough

2. form the tentative update θ^tent = θ^k − h_k∇f(θ^k)

3. if f(θ^tent) ≤ f(θ^k), set θ^(k+1) = θ^tent and h_(k+1) = 1.2 h_k

4. else set h_k := 0.5 h_k and go to step 2
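
A minimal Python sketch of this algorithm with the adaptive step-length rule above; the function names, defaults, and the least squares test problem are illustrative.

```python
import numpy as np

def gradient_method(f, grad, theta1, h1=1.0, tol=1e-6, kmax=1000):
    """Gradient method with the adaptive step-length rule sketched above."""
    theta, h = np.asarray(theta1, dtype=float), h1
    for _ in range(kmax):
        g = grad(theta)
        if np.linalg.norm(g) <= tol:      # stop when nearly a stationary point
            break
        while True:
            tent = theta - h * g          # tentative update
            if f(tent) <= f(theta):       # accepted step: keep it, grow h by 20%
                theta, h = tent, 1.2 * h
                break
            h = 0.5 * h                   # rejected step: halve h and retry
            if h < 1e-16:                 # guard against an endless inner loop
                return theta
    return theta

# usage: minimize the least squares objective ||X theta - y||^2 on synthetic data
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
f = lambda th: np.sum((X @ th - y) ** 2)
grad = lambda th: 2 * X.T @ (X @ th - y)
theta_hat = gradient_method(f, grad, np.zeros(5))
print(np.linalg.norm(grad(theta_hat)))   # should be near zero
```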

16


Gradient method convergence

• (assuming some technical conditions hold) we have

‖∇f(θ^k)‖ → 0 as k → ∞

• i.e., the gradient method always finds a stationary point

• for convex problems:

  – the gradient method is non-heuristic

  – for any starting point θ^1, f(θ^k) → f⋆ as k → ∞

• for non-convex problems:

  – the gradient method is heuristic

  – we can (and often do) have f(θ^k) ↛ f⋆

17


Example: Convex objective

[figures: gradient method iterates on the contours of f in the (θ_1, θ_2) plane; f(θ^k) versus iteration k]

• f(θ) = (1/3)(p_hub(θ_1 − 1) + p_hub(θ_2 − 1) + p_hub(θ_1 + θ_2 − 1))

• f is convex

• the optimal point is θ⋆ = (2/3, 2/3), with f⋆ = 1/9
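
A sketch that reproduces this example numerically with the step-length scheme from earlier; it assumes the Huber penalty p_hub(u) = u² for |u| ≤ 1 and 2|u| − 1 otherwise, which is consistent with the stated f⋆ = 1/9.

```python
import numpy as np

def huber(u, m=1.0):
    # assumed Huber penalty: u^2 for |u| <= m, m*(2|u| - m) otherwise
    return np.where(np.abs(u) <= m, u**2, m * (2 * np.abs(u) - m))

def huber_grad(u, m=1.0):
    return np.where(np.abs(u) <= m, 2 * u, 2 * m * np.sign(u))

def f(th):
    t1, t2 = th
    return (huber(t1 - 1) + huber(t2 - 1) + huber(t1 + t2 - 1)) / 3

def grad(th):
    t1, t2 = th
    g1 = huber_grad(t1 - 1) + huber_grad(t1 + t2 - 1)
    g2 = huber_grad(t2 - 1) + huber_grad(t1 + t2 - 1)
    return np.array([g1, g2]) / 3

theta, h = np.zeros(2), 1.0      # starting point theta^1 = 0, h_1 = 1
for k in range(200):
    g = grad(theta)
    tent = theta - h * g
    if f(tent) <= f(theta):      # accepted step
        theta, h = tent, 1.2 * h
    else:                        # rejected step: keep theta, halve h
        h = 0.5 * h

print(theta, f(theta))           # approaches theta* = (2/3, 2/3), f* = 1/9
```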

18


Example: Convex objective

[figures: f(θ^k) − f⋆ and ‖∇f(θ^k)‖ versus iteration k, on logarithmic scales]

• f(θ^k) is a decreasing function of k, (roughly) exponentially

• ‖∇f(θ^k)‖ → 0 as k → ∞

19


Example: Non-convex objective

[figures: gradient method iterates on the contours of f in the (θ_1, θ_2) plane; f(θ^k) versus iteration k]

• f(θ) = (1/3)(p_lh(θ_1 + 3) + p_lh(2θ_2 + 6) + p_lh(θ_1 + θ_2 − 1))

• f is a sum of log Huber functions, so it is not convex

• the gradient algorithm converges, but the limit depends on the initial guess

20


Example: Non-convex objective

[figures: contours of f in the (θ_1, θ_2) plane; surface plot of f over (θ_1, θ_2)]

21


Example: Non-convex objective

[figures: f(θ^k) − f⋆ and ‖∇f(θ^k)‖ versus iteration k, on logarithmic scales]

22


Gradient method for ERM

23


Gradient of empirical risk function

• empirical risk is a sum of terms, one for each data point:

L(θ) = (1/n) Σᵢ₌₁ⁿ ℓ(ŷᵢ, yᵢ) = (1/n) Σᵢ₌₁ⁿ ℓ(θᵀxᵢ, yᵢ)

• convex if the loss function ℓ is convex in its first argument

• the gradient is a sum of terms, one for each data point:

∇L(θ) = (1/n) Σᵢ₌₁ⁿ ℓ′(θᵀxᵢ, yᵢ) xᵢ

where ℓ′(ŷ, y) is the derivative of ℓ with respect to its first argument ŷ

24


Evaluating gradient of empirical risk function

• compute the n-vector ŷ^k = Xθ^k

• compute the n-vector z^k, with entries z^k_i = ℓ′(ŷ^k_i, yᵢ)

• compute the d-vector ∇L(θ^k) = (1/n) Xᵀz^k

• the first and third steps are matrix-vector multiplications, each costing 2nd flops

• the second step costs order n flops (dominated by the other two)

• total is 4nd flops
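
A sketch of these three steps in NumPy for the square loss ℓ(ŷ, y) = (ŷ − y)²; the loss choice and the synthetic data are illustrative assumptions.

```python
import numpy as np

def erm_gradient(theta, X, y, dloss):
    """Gradient of L(theta) = (1/n) sum_i loss(theta^T x_i, y_i)."""
    n = X.shape[0]
    yhat = X @ theta          # step 1: n-vector of predictions, about 2nd flops
    z = dloss(yhat, y)        # step 2: loss derivative in its first argument, about n flops
    return (X.T @ z) / n      # step 3: d-vector gradient, about 2nd flops

# square loss: loss(yhat, y) = (yhat - y)^2, so dloss(yhat, y) = 2*(yhat - y)
dloss_square = lambda yhat, y: 2 * (yhat - y)

# synthetic data, for illustration only
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 10)), rng.normal(size=1000)
print(erm_gradient(np.zeros(10), X, y, dloss_square))
```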

25


Validation

[figure: train and test loss versus iteration k, on a logarithmic scale]

• we can evaluate the empirical risk on the train and test sets while the gradient method is running

• optimization is only a surrogate for what we want

(i.e., a predictor that predicts well on unseen data)

• the predictor is often good enough well before gradient descent has converged
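
A sketch of monitoring train and test risk while the gradient method runs, using the square loss on synthetic data; all names and sizes are illustrative assumptions.

```python
import numpy as np

# synthetic train/test split, for illustration only
rng = np.random.default_rng(0)
theta_true = rng.normal(size=5)
X_train, X_test = rng.normal(size=(200, 5)), rng.normal(size=(100, 5))
y_train = X_train @ theta_true + 0.1 * rng.normal(size=200)
y_test = X_test @ theta_true + 0.1 * rng.normal(size=100)

risk = lambda th, X, y: np.mean((X @ th - y) ** 2)          # empirical risk, square loss
grad = lambda th, X, y: 2 * X.T @ (X @ th - y) / len(y)

theta, h = np.zeros(5), 1.0
history = []
for k in range(100):
    tent = theta - h * grad(theta, X_train, y_train)
    if risk(tent, X_train, y_train) <= risk(theta, X_train, y_train):
        theta, h = tent, 1.2 * h
    else:
        h = 0.5 * h
    # record train and test risk at every iterate; the test risk typically
    # flattens out well before the training risk has fully converged
    history.append((k, risk(theta, X_train, y_train), risk(theta, X_test, y_test)))

print(history[-1])
```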

26