
Coordinate Descent

Ryan Tibshirani, Convex Optimization 10-725/36-725


Last time: dual methods and ADMM

Dual methods operate on the dual of a problem of the form

$$\min_x \; f(x) \quad \text{subject to} \quad Ax = b$$

for convex $f$. The dual (sub)gradient method chooses an initial $u^{(0)}$ and repeats, for $k = 1, 2, 3, \ldots$:

$$x^{(k)} \in \operatorname*{argmin}_x \; f(x) + (u^{(k-1)})^T A x$$

$$u^{(k)} = u^{(k-1)} + t_k \big( A x^{(k)} - b \big)$$

where $t_k$ are step sizes, chosen in standard ways (a small numerical sketch of this iteration is given after the bullets below)

• Pro: decomposability in the first step. Con: poor convergence properties

• Can improve convergence by augmenting the Lagrangian, i.e., adding the term $(\rho/2)\|Ax - b\|_2^2$ to the first step. Performing blockwise minimization then gives ADMM
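As a quick refresher on how the two steps fit together, here is a minimal sketch of the dual (sub)gradient iteration for an assumed example $f(x) = \tfrac{1}{2}\|x - c\|_2^2$, whose $x$-update has a closed form. The function name, step size, and iteration count are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch of dual (sub)gradient ascent for min f(x) s.t. Ax = b,
# with the assumed objective f(x) = 0.5*||x - c||^2 (closed-form x-update).
import numpy as np

def dual_ascent(A, b, c, t=0.05, iters=500):
    m, n = A.shape
    u = np.zeros(m)                       # dual variable u^(0)
    for _ in range(iters):
        # x-update: argmin_x 0.5*||x - c||^2 + u^T A x  =>  x = c - A^T u
        x = c - A.T @ u
        # dual step: u^(k) = u^(k-1) + t*(A x^(k) - b)
        u = u + t * (A @ x - b)
    return x, u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 8)); c = rng.standard_normal(8)
    b = A @ rng.standard_normal(8)
    x, u = dual_ascent(A, b, c)
    print(np.linalg.norm(A @ x - b))      # constraint residual shrinks with iterations
```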


Outline

Today:

• Coordinate descent

• Examples

• Implementation tricks

• Coordinate descent—literally

• Screening rules


Coordinate descent

We’ve seen some pretty sophisticated methods thus far

Our focus today is a very simple technique that can be surprisingly efficient and scalable: coordinate descent, or more appropriately called coordinatewise minimization

Q: Given convex, differentiable $f : \mathbb{R}^n \to \mathbb{R}$, if we are at a point $x$ such that $f(x)$ is minimized along each coordinate axis, then have we found a global minimizer?

I.e., does $f(x + \delta e_i) \ge f(x)$ for all $\delta, i$ imply $f(x) = \min_z f(z)$?

(Here $e_i = (0, \ldots, 1, \ldots, 0) \in \mathbb{R}^n$ is the $i$th standard basis vector)


[Figure: surface plot of a smooth convex $f$ over $(x_1, x_2)$]

A: Yes! Proof:

$$\nabla f(x) = \left( \frac{\partial f}{\partial x_1}(x), \ldots, \frac{\partial f}{\partial x_n}(x) \right) = 0$$

Q: Same question, but for f convex (not differentiable) ... ?


[Figure: surface and contour plots of a convex but nondifferentiable $f$ over $(x_1, x_2)$; the contour plot shows a point where $f$ is minimized along both coordinate axes but which is not a global minimizer]

A: No! Look at the counterexample above

Q: Same question again, but now $f(x) = g(x) + \sum_{i=1}^n h_i(x_i)$, with $g$ convex and differentiable, and each $h_i$ convex ... ? (The nonsmooth part here is called separable)


[Figure: surface and contour plots of a smooth plus separable $f$ over $(x_1, x_2)$]

A: Yes! Proof: for any $y$,

$$f(y) - f(x) \;\ge\; \nabla g(x)^T (y - x) + \sum_{i=1}^n \big[ h_i(y_i) - h_i(x_i) \big] \;=\; \sum_{i=1}^n \underbrace{\big[ \nabla_i g(x)(y_i - x_i) + h_i(y_i) - h_i(x_i) \big]}_{\ge 0} \;\ge\; 0$$

Each bracketed term is nonnegative because $x_i$ minimizes $f$ over its coordinate, so $-\nabla_i g(x) \in \partial h_i(x_i)$, and the subgradient inequality gives $h_i(y_i) - h_i(x_i) \ge -\nabla_i g(x)(y_i - x_i)$


Coordinate descent

This suggests that for $f(x) = g(x) + \sum_{i=1}^n h_i(x_i)$ (with $g$ convex and differentiable, and each $h_i$ convex) we can use coordinate descent to find a minimizer: start with some initial guess $x^{(0)}$, and repeat

$$x_1^{(k)} \in \operatorname*{argmin}_{x_1} \; f\big(x_1, x_2^{(k-1)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$

$$x_2^{(k)} \in \operatorname*{argmin}_{x_2} \; f\big(x_1^{(k)}, x_2, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$

$$x_3^{(k)} \in \operatorname*{argmin}_{x_3} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3, \ldots, x_n^{(k-1)}\big)$$

$$\vdots$$

$$x_n^{(k)} \in \operatorname*{argmin}_{x_n} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}, \ldots, x_n\big)$$

for $k = 1, 2, 3, \ldots$

Note: after we solve for $x_i^{(k)}$, we use its new value from then on!
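To make the update pattern concrete, here is a minimal sketch of cyclic coordinatewise minimization in Python. The example objective, the use of scipy's scalar minimizer for each one-dimensional subproblem, and the fixed number of cycles are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch: cyclic coordinatewise minimization for a smooth + separable f.
# Each pass sweeps the coordinates in order, reusing the freshest values.
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_descent(f, x0, n_cycles=50):
    x = np.array(x0, dtype=float)
    for _ in range(n_cycles):
        for i in range(len(x)):
            # 1-D subproblem: minimize f over coordinate i, all others held fixed
            def f_i(z, i=i):
                x_trial = x.copy()
                x_trial[i] = z
                return f(x_trial)
            x[i] = minimize_scalar(f_i).x   # new value is used immediately
    return x

# Example (assumed): a lasso-type objective, which is smooth + separable
rng = np.random.default_rng(0)
X, y, lam = rng.standard_normal((30, 5)), rng.standard_normal(30), 0.1
f = lambda x: 0.5 * np.sum((y - X @ x) ** 2) + lam * np.sum(np.abs(x))
print(coordinate_descent(f, np.zeros(5)))
```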


Tseng (2001) proves that for such $f$ (provided $f$ is continuous on the compact set $\{x : f(x) \le f(x^{(0)})\}$ and $f$ attains its minimum), any limit point of $x^{(k)}$, $k = 1, 2, 3, \ldots$ is a minimizer of $f$ [1]

Notes:

• Order of the cycle through coordinates is arbitrary; can use any permutation of $\{1, 2, \ldots, n\}$

• Can everywhere replace individual coordinates with blocks of coordinates

• The "one-at-a-time" update scheme is critical, and the "all-at-once" scheme does not necessarily converge

• For solving linear systems, recall this is exactly the difference between the Gauss-Seidel and Jacobi methods

[1] Using real analysis, we know that $x^{(k)}$ has a subsequence converging to $x^\star$ (Bolzano-Weierstrass), and $f(x^{(k)})$ converges to $f^\star$ (monotone convergence)


Example: linear regression

Consider linear regression

$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2} \|y - X\beta\|_2^2$$

where $y \in \mathbb{R}^n$, and $X \in \mathbb{R}^{n \times p}$ with columns $X_1, \ldots, X_p$

Minimizing over $\beta_i$, with all $\beta_j$, $j \ne i$ fixed:

$$0 = \nabla_i f(\beta) = X_i^T (X\beta - y) = X_i^T (X_i \beta_i + X_{-i} \beta_{-i} - y)$$

i.e., we take

$$\beta_i = \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i}$$

Coordinate descent repeats this update for $i = 1, 2, \ldots, p, 1, 2, \ldots$


[Figure: coordinate descent vs. gradient descent for linear regression, 100 instances ($n = 100$, $p = 20$); $f(k) - f^\star$ versus iteration $k$ on a log scale, for GD and CD]

Is it fair to compare 1 cycle of coordinate descent to 1 iteration of gradient descent? Yes, if we're clever:

$$\beta_i \leftarrow \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i} = \frac{X_i^T r}{\|X_i\|_2^2} + \beta_i$$

where $r = y - X\beta$. Therefore each coordinate update takes $O(n)$ operations: $O(n)$ to update $r$, and $O(n)$ to compute $X_i^T r$. One full cycle thus requires $O(np)$ operations, just like gradient descent
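A minimal sketch of this residual-update trick, assuming a NumPy setup; the variable names and the fixed number of cycles are illustrative.

```python
# Sketch: coordinate descent for least squares, maintaining r = y - X @ beta
# so that each coordinate update costs O(n) rather than O(np).
import numpy as np

def cd_least_squares(X, y, n_cycles=40):
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                      # residual, updated incrementally
    col_sq = np.sum(X ** 2, axis=0)       # ||X_i||_2^2, precomputed once
    for _ in range(n_cycles):
        for i in range(p):
            beta_i_new = X[:, i] @ r / col_sq[i] + beta[i]   # O(n)
            r -= (beta_i_new - beta[i]) * X[:, i]            # O(n) residual update
            beta[i] = beta_i_new
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)); y = rng.standard_normal(100)
# difference from the least squares solution shrinks with more cycles
print(np.max(np.abs(cd_least_squares(X, y) - np.linalg.lstsq(X, y, rcond=None)[0])))
```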


[Figure: same example, but now with accelerated gradient descent for comparison; $f(k) - f^\star$ versus $k$ on a log scale, for GD, CD, and accelerated GD]

Is this contradicting the optimality of accelerated gradient descent? I.e., is coordinate descent a first-order method?

No. It uses much more than first-order information


Example: lasso regression

Now consider the lasso problem

$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1$$

Note that the nonsmooth part is separable: $\|\beta\|_1 = \sum_{i=1}^p |\beta_i|$

Minimizing over $\beta_i$, with $\beta_j$, $j \ne i$ fixed:

$$0 = X_i^T X_i \beta_i + X_i^T (X_{-i} \beta_{-i} - y) + \lambda s_i$$

where $s_i \in \partial |\beta_i|$. The solution is simply given by soft-thresholding:

$$\beta_i = S_{\lambda / \|X_i\|_2^2} \left( \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i} \right)$$

Repeat this for $i = 1, 2, \ldots, p, 1, 2, \ldots$
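Here is a rough sketch of this update in code, reusing the residual trick from the linear regression example; the soft-thresholding helper and the fixed number of cycles are illustrative assumptions.

```python
# Sketch: coordinate descent for the lasso, one soft-thresholded update per coordinate.
import numpy as np

def soft_threshold(z, t):
    # S_t(z): shrink z toward zero by t, setting it to zero if |z| <= t
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_lasso(X, y, lam, n_cycles=100):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                          # residual y - X @ beta (beta starts at 0)
    col_sq = np.sum(X ** 2, axis=0)       # ||X_i||_2^2
    for _ in range(n_cycles):
        for i in range(p):
            z = X[:, i] @ r / col_sq[i] + beta[i]       # unpenalized coordinate solution
            beta_i_new = soft_threshold(z, lam / col_sq[i])
            r -= (beta_i_new - beta[i]) * X[:, i]
            beta[i] = beta_i_new
    return beta
```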


Example: box-constrained regression

Consider box-constrained linear regression

$$\min_{\beta \in \mathbb{R}^p} \; \frac{1}{2} \|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_\infty \le s$$

Note this fits our framework, as $1\{\|\beta\|_\infty \le s\} = \sum_{i=1}^p 1\{|\beta_i| \le s\}$

Minimizing over $\beta_i$ with all $\beta_j$, $j \ne i$ fixed: the same basic steps give

$$\beta_i = T_s \left( \frac{X_i^T (y - X_{-i} \beta_{-i})}{X_i^T X_i} \right)$$

where $T_s$ is the truncation operator:

$$T_s(u) = \begin{cases} s & \text{if } u > s \\ u & \text{if } -s \le u \le s \\ -s & \text{if } u < -s \end{cases}$$
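The only change from the lasso sketch is swapping soft-thresholding for truncation; a rough sketch, with the clipping done via NumPy (illustrative, not from the slides):

```python
# Sketch: coordinate descent for box-constrained least squares, ||beta||_inf <= s.
import numpy as np

def cd_box_constrained(X, y, s, n_cycles=100):
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                          # residual y - X @ beta
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_cycles):
        for i in range(p):
            z = X[:, i] @ r / col_sq[i] + beta[i]
            beta_i_new = np.clip(z, -s, s)            # truncation operator T_s
            r -= (beta_i_new - beta[i]) * X[:, i]
            beta[i] = beta_i_new
    return beta
```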


Example: support vector machines

A coordinate descent strategy can be applied to the SVM dual:

$$\min_{\alpha \in \mathbb{R}^n} \; \frac{1}{2} \alpha^T \tilde{X} \tilde{X}^T \alpha - 1^T \alpha \quad \text{subject to} \quad 0 \le \alpha \le C1, \;\; \alpha^T y = 0$$

Sequential minimal optimization, or SMO (Platt 1998), is basically blockwise coordinate descent in blocks of 2. Instead of cycling, it chooses the next block greedily

Recall the complementary slackness conditions

$$\alpha_i \big( 1 - \xi_i - (\tilde{X}\beta)_i - y_i \beta_0 \big) = 0, \quad i = 1, \ldots, n \qquad (1)$$

$$(C - \alpha_i)\, \xi_i = 0, \quad i = 1, \ldots, n \qquad (2)$$

where $\beta, \beta_0, \xi$ are the primal coefficients, intercept, and slacks. Recall that $\beta = \tilde{X}^T \alpha$, $\beta_0$ is computed from (1) using any $i$ such that $0 < \alpha_i < C$, and $\xi$ is computed from (1), (2)


SMO repeats the following two steps:

• Choose $\alpha_i, \alpha_j$ that do not satisfy complementary slackness, greedily (using heuristics)

• Minimize over $\alpha_i, \alpha_j$ exactly, keeping all other variables fixed

Using the equality constraint, this reduces to minimizing a univariate quadratic over an interval (figure from Platt 1998)

Note this does not meet the separability assumptions for convergence from Tseng (2001), and a different treatment is required

Many further developments on coordinate descent for SVMs have been made; e.g., a recent one is Hsieh et al. (2008)


Coordinate descent in statistics and ML

History in statistics:

• The idea appeared in Fu (1998), and again in Daubechies et al. (2004), but was inexplicably ignored

• Three papers around 2007, especially Friedman et al. (2007), really sparked interest in the statistics and ML communities

Why is it used?

• Very simple and easy to implement

• Careful implementations can be near state-of-the-art

• Scalable, e.g., don’t need to keep full data in memory

Examples: lasso regression, lasso GLMs (under proximal Newton), SVMs, group lasso, graphical lasso (applied to the dual), additive modeling, matrix completion, regression with nonconvex penalties


Pathwise coordinate descent for lasso

Here is the basic outline of pathwise coordinate descent for the lasso, from Friedman et al. (2007), Friedman et al. (2009)

Outer loop (pathwise strategy):

• Compute the solution over a sequence $\lambda_1 > \lambda_2 > \ldots > \lambda_r$ of tuning parameter values

• For tuning parameter value $\lambda_k$, initialize the coordinate descent algorithm at the computed solution for $\lambda_{k-1}$ (warm start)

Inner loop (active set strategy):

• Perform one coordinate cycle (or a small number of cycles), and record the active set $A$ of coefficients that are nonzero

• Cycle over coefficients in $A$ until convergence

• Check KKT conditions over all coefficients; if not all are satisfied, add the offending coefficients to $A$, and go back one step

(A rough sketch of this pathwise, active set strategy is given below)
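As flagged above, here is a rough sketch of how the pathwise and active set strategies might fit around the lasso coordinate update. The helper cd_lasso_cycle, the KKT tolerance, and the $\lambda$ grid construction are illustrative assumptions, not the glmnet implementation.

```python
# Sketch: pathwise coordinate descent for the lasso with warm starts and an active set.
import numpy as np

def cd_lasso_cycle(X, y, beta, lam, idx):
    # One coordinate cycle over the indices in idx (soft-thresholded updates).
    col_sq = np.sum(X ** 2, axis=0)
    for i in idx:
        r = y - X @ beta + X[:, i] * beta[i]          # partial residual
        z = X[:, i] @ r / col_sq[i]
        beta[i] = np.sign(z) * max(abs(z) - lam / col_sq[i], 0.0)
    return beta

def pathwise_lasso(X, y, n_lambdas=20, n_inner=50, tol=1e-6):
    p = X.shape[1]
    lam_max = np.max(np.abs(X.T @ y))                 # smallest lambda with beta = 0
    lambdas = np.geomspace(lam_max, 0.01 * lam_max, n_lambdas)
    beta = np.zeros(p)
    path = []
    for lam in lambdas:                               # warm start: reuse previous beta
        for _ in range(100):                          # outer passes (bounded for safety)
            beta = cd_lasso_cycle(X, y, beta, lam, range(p))   # one full cycle
            active = np.flatnonzero(beta != 0)
            for _ in range(n_inner):                  # iterate over the active set
                beta = cd_lasso_cycle(X, y, beta, lam, active)
            # KKT check: inactive coordinates must satisfy |X_i^T (y - X beta)| <= lam
            grad = X.T @ (y - X @ beta)
            if not np.any((np.abs(grad) > lam + tol) & (beta == 0)):
                break                                 # no offending coefficients
        path.append((lam, beta.copy()))
    return path
```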


Even when the solution is only desired at one value of $\lambda$, the pathwise strategy ($\lambda_1 > \lambda_2 > \ldots > \lambda_r = \lambda$) is typically much more efficient than directly performing coordinate descent at $\lambda$

The active set strategy takes advantage of sparsity; e.g., for very large problems, coordinate descent for the lasso is much faster than it is for ridge regression

With these strategies in place (and a few more tricks), coordinate descent can be competitive with the fastest algorithms for $\ell_1$ penalized minimization problems

Freely available via the glmnet package in MATLAB or R


What’s in a name?

The name coordinate descent is confusing. For a smooth function $f$, the method that repeats

$$x_1^{(k)} = x_1^{(k-1)} - t_{k,1} \cdot \nabla_1 f\big(x_1^{(k-1)}, x_2^{(k-1)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$

$$x_2^{(k)} = x_2^{(k-1)} - t_{k,2} \cdot \nabla_2 f\big(x_1^{(k)}, x_2^{(k-1)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$

$$x_3^{(k)} = x_3^{(k-1)} - t_{k,3} \cdot \nabla_3 f\big(x_1^{(k)}, x_2^{(k)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$

$$\vdots$$

$$x_n^{(k)} = x_n^{(k-1)} - t_{k,n} \cdot \nabla_n f\big(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}, \ldots, x_n^{(k-1)}\big)$$

for $k = 1, 2, 3, \ldots$ is also (rightfully) called coordinate descent. When $f = g + h$, where $g$ is smooth and $h$ is separable, the proximal version of the above is also called coordinate descent

These versions are often easier to apply than exact coordinatewise minimization, but the latter makes more progress per step
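For contrast with the exact coordinatewise minimization sketched earlier, here is a minimal sketch of the coordinate gradient descent variant above, assuming a fixed step size shared across coordinates; the step size, iteration count, and example quadratic are illustrative.

```python
# Sketch: cyclic coordinate *gradient* descent for a smooth f, one partial-derivative
# step per coordinate, always using the freshest values of the other coordinates.
import numpy as np

def coordinate_gradient_descent(grad_i, x0, t=0.2, n_cycles=500):
    # grad_i(x, i) should return the i-th partial derivative of f at x.
    x = np.array(x0, dtype=float)
    for _ in range(n_cycles):
        for i in range(len(x)):
            x[i] = x[i] - t * grad_i(x, i)
    return x

# Example (assumed): f(x) = 0.5 * x^T Q x - b^T x, so the i-th partial is Q[i,:] @ x - b[i]
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); Q = A.T @ A / 5 + np.eye(5); b = rng.standard_normal(5)
x = coordinate_gradient_descent(lambda x, i: Q[i, :] @ x - b[i], np.zeros(5))
print(np.max(np.abs(x - np.linalg.solve(Q, b))))    # error should be near zero
```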


Convergence analyses

Theory for coordinate descent moves quickly. The list given below is incomplete (may not be the latest and greatest). Warning: some references below treat coordinatewise minimization, some do not

• Convergence of coordinatewise minimization for solving linear systems, the Gauss-Seidel method, is a classic topic. E.g., see Golub and van Loan (1996), or Ramdas (2014) for a modern twist that looks at randomized coordinate descent

• Nesterov (2010) considers randomized coordinate descent for smooth functions and shows that it achieves a rate $O(1/\epsilon)$ under a Lipschitz gradient condition, and a rate $O(\log(1/\epsilon))$ under strong convexity

• Richtarik and Takac (2011) extend and simplify these results, considering smooth plus separable functions, where now each coordinate descent update applies a prox operation


• Saha and Tewari (2013) consider minimizing $\ell_1$ regularized functions of the form $g(\beta) + \lambda \|\beta\|_1$, for smooth $g$, and study both cyclic coordinate descent and cyclic coordinatewise minimization. Under (very strange) conditions on $g$, they show both methods dominate proximal gradient descent in iteration progress

• Beck and Tetruashvili (2013) study cyclic coordinate descent for smooth functions in general. They show that it achieves a rate $O(1/\epsilon)$ under a Lipschitz gradient condition, and a rate $O(\log(1/\epsilon))$ under strong convexity. They also extend these results to a constrained setting with projections

• The general case of a smooth plus separable function is not well understood with respect to cyclic coordinate descent or cyclic coordinatewise minimization. It is also a question as to whether these two should behave similarly, and whether the aforementioned results are tight ...


Screening rules

In some problems, screening rules can be used in combination with coordinate descent to further whittle down the active set. Screening rules themselves have amassed a sizeable literature recently. Here is an example, the SAFE rule for the lasso [2]:

$$|X_i^T y| < \lambda - \|X_i\|_2 \|y\|_2 \, \frac{\lambda_{\max} - \lambda}{\lambda_{\max}} \;\;\Longrightarrow\;\; \beta_i = 0, \quad \text{for all } i = 1, \ldots, p$$

where $\lambda_{\max} = \|X^T y\|_\infty$ (the smallest value of $\lambda$ such that $\beta = 0$)

Note: this is not an if and only if statement! But it does give us a way of eliminating features a priori, without solving the lasso

(There have been many advances in screening rules for the lasso, but SAFE is the simplest, and was the first)

[2] El Ghaoui et al. (2010), "Safe feature elimination in sparse learning"
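A small sketch of applying this rule before running coordinate descent; the function name and the returned boolean mask are illustrative.

```python
# Sketch: SAFE screening for the lasso. Returns a mask of features that can be
# discarded (their lasso coefficients are guaranteed to be zero at this lambda).
import numpy as np

def safe_rule_discard(X, y, lam):
    lam_max = np.max(np.abs(X.T @ y))            # smallest lambda with beta = 0
    col_norms = np.linalg.norm(X, axis=0)        # ||X_i||_2
    threshold = lam - col_norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
    return np.abs(X.T @ y) < threshold           # True => beta_i = 0, drop feature i

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200)); y = rng.standard_normal(50)
lam = 0.9 * np.max(np.abs(X.T @ y))
drop = safe_rule_discard(X, y, lam)
print(drop.sum(), "of", X.shape[1], "features screened out before solving the lasso")
```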


Why is the SAFE rule true? Construction comes from lasso dual:

$$\max_{u \in \mathbb{R}^n} \; g(u) \quad \text{subject to} \quad \|X^T u\|_\infty \le \lambda$$

where $g(u) = \frac{1}{2}\|y\|_2^2 - \frac{1}{2}\|y - u\|_2^2$. Suppose that $u_0$ is dual feasible (e.g., take $u_0 = y \cdot \lambda / \lambda_{\max}$). Then $\gamma = g(u_0)$ is a lower bound on the dual optimal value, so the dual problem is equivalent to

$$\max_{u \in \mathbb{R}^n} \; g(u) \quad \text{subject to} \quad \|X^T u\|_\infty \le \lambda, \;\; g(u) \ge \gamma$$

Now consider computing

$$m_i = \max_{u \in \mathbb{R}^n} \; |X_i^T u| \quad \text{subject to} \quad g(u) \ge \gamma, \quad \text{for } i = 1, \ldots, p$$

Then we would have

$$m_i < \lambda \;\Longrightarrow\; |X_i^T u| < \lambda \;\Longrightarrow\; \beta_i = 0, \quad i = 1, \ldots, p$$

The last implication comes from the KKT conditions


[Figure from El Ghaoui et al. 2010]


Another dual argument shows that

$$\max_{u \in \mathbb{R}^n} \; X_i^T u \;\; \text{subject to} \;\; g(u) \ge \gamma
\;=\; \min_{\mu > 0} \; -\gamma\mu + \frac{1}{2\mu} \|\mu y + X_i\|_2^2
\;=\; \|X_i\|_2 \sqrt{\|y\|_2^2 - 2\gamma} + X_i^T y$$

where the last equality comes from direct calculation

Thus $m_i$ is given by the maximum of the above quantity over $\pm X_i$:

$$m_i = \|X_i\|_2 \sqrt{\|y\|_2^2 - 2\gamma} + |X_i^T y|, \quad i = 1, \ldots, p$$

Lastly, substitute $\gamma = g(y \cdot \lambda / \lambda_{\max})$. Then $m_i < \lambda$ is precisely the SAFE rule given earlier
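For completeness, here is the substitution worked out (this arithmetic follows from the definitions above and is not spelled out on the slides): with $u_0 = y \cdot \lambda/\lambda_{\max}$,

$$\gamma = g(u_0) = \tfrac{1}{2}\|y\|_2^2 - \tfrac{1}{2}\big\|y - y\,\lambda/\lambda_{\max}\big\|_2^2 = \tfrac{1}{2}\|y\|_2^2 \Big( 1 - \tfrac{(\lambda_{\max}-\lambda)^2}{\lambda_{\max}^2} \Big), \qquad \sqrt{\|y\|_2^2 - 2\gamma} = \|y\|_2 \, \frac{\lambda_{\max}-\lambda}{\lambda_{\max}}$$

so $m_i < \lambda$ reads $|X_i^T y| + \|X_i\|_2 \|y\|_2 (\lambda_{\max}-\lambda)/\lambda_{\max} < \lambda$, i.e., $|X_i^T y| < \lambda - \|X_i\|_2 \|y\|_2 (\lambda_{\max}-\lambda)/\lambda_{\max}$, which is exactly the SAFE rule stated earlier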


References

Early coordinate descent references in statistics and ML:

• I. Daubechies and M. Defrise and C. De Mol (2004), "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint"

• J. Friedman and T. Hastie and H. Hoefling and R. Tibshirani (2007), "Pathwise coordinate optimization"

• W. Fu (1998), "Penalized regressions: the bridge versus the lasso"

• T. Wu and K. Lange (2008), "Coordinate descent algorithms for lasso penalized regression"

• A. van der Kooij (2007), "Prediction accuracy and stability of regression with optimal scaling transformations"


Applications of coordinate descent:

• O. Banerjee and L. El Ghaoui and A. d'Aspremont (2007), "Model selection through sparse maximum likelihood estimation"

• J. Friedman and T. Hastie and R. Tibshirani (2007), "Sparse inverse covariance estimation with the graphical lasso"

• J. Friedman and T. Hastie and R. Tibshirani (2009), "Regularization paths for generalized linear models via coordinate descent"

• C.J. Hsieh and K.W. Chang and C.J. Lin and S. Keerthi and S. Sundararajan (2008), "A dual coordinate descent method for large-scale linear SVM"

• R. Mazumder and J. Friedman and T. Hastie (2011), "SparseNet: coordinate descent with non-convex penalties"

• J. Platt (1998), "Sequential minimal optimization: a fast algorithm for training support vector machines"


Theory for coordinate descent:

• A. Beck and L. Tetruashvili (2013), "On the convergence of block coordinate descent type methods"

• Y. Nesterov (2010), "Efficiency of coordinate descent methods on huge-scale optimization problems"

• A. Ramdas (2014), "Rows vs. columns for linear systems of equations - randomized Kaczmarz or coordinate descent?"

• P. Richtarik and M. Takac (2011), "Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function"

• A. Saha and A. Tewari (2013), "On the nonasymptotic convergence of cyclic coordinate descent methods"

• P. Tseng (2001), "Convergence of a block coordinate descent method for nondifferentiable minimization"
