
Page 1

Applied Statistics and Machine Learning

Theory: Stability, CLT, and Sparse Modeling

Bin Yu, IMA, June 26, 2013

Page 2

Today's plan

1. Reproducibility and statistical stability

2. In the classical world:
   - CLT – a stability result
   - Asymptotic normality and the score function for parametric inference
   - Valid bootstrap

3. In the modern world:
   - L2 estimation rate of the Lasso and extensions
   - Model selection consistency of the Lasso
   - Sample-to-sample variability meets heavy-tailed noise

Page 3

2013: Information technology and data

“Although scientists have always comforted themselves with the thought that science is self-correcting, the immediacy and rapidity with which knowledge disseminates today means that incorrect information can have a profound impact before any corrective process can take place”.

— “Reforming Science: Methodological and Cultural Reforms,” editorial in Infection and Immunity (2012), by Arturo Casadevall (Editor in Chief, mBio) and Ferric C. Fang (Editor in Chief, Infection and Immunity)

Page 4

2013: IT and data

“A recent study* analyzed the cause of retraction for 788 retracted papers and found that error and fraud were responsible for 545 (69%) and 197 (25%) cases, respectively, while the cause was unknown in 46 (5.8%) cases (31).” -- Casadevall and Fang (2012)

* R Grant Steen (2011), J. Med. Ethics

Casadevall and Fang called for “Enhanced training in probability and statistics”

Page 5

Scientific Reproducibility

Recent papers on reproducibility: Ioannidis, 2005; Kraft et al, 2009; Nosek et al, 2012; Naik, 2011; Booth, 2012; Donoho et al, 2009, Fanio et al, 2012…

Statistical stability is a minimum requirement for scientific reproducibility. Statistical stability: statistical conclusions should be stable to appropriate perturbations to data and/or models. Stable statistical results are more reliable.

Page 6

Scientific reproducibility is our responsibility

• Stability is a minimum requirement for reproducibility

• Statistical stability: statistical conclusions should be robust to appropriate perturbations to data

Page 7

Data perturbation has a long history

Data perturbation schemes are now routinely employed to estimate bias, variance, and sampling distribution of an estimator.

Jackknife: Quenouille (1949, 1956), Tukey (1958), Mosteller and Tukey (1977), Efron and Stein (1981), Wu (1986), Carlstein (1986), Wu (1990), …

Sub-sampling: Mahalanobis (1949), Hartigan (1969, 1970), Politis and Romano (1992, 94), Bickel, Götze, and van Zwet (1997), …

Cross-validation: Allen (1974), Stone (1974, 1977), Hall (1983), Li (1985), …

Bootstrap: Efron (1979), Bickel and Freedman (1981), Beran (1984), …
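As a small illustration (my own, not from the slides) of how such perturbation schemes are used in practice, jackknife and bootstrap estimates of the standard error of a simple statistic can be computed as follows; the data and the choice of statistic are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100)      # arbitrary illustrative data
    n = x.size
    stat = np.mean                    # the estimator being perturbed (a toy choice)

    # Jackknife: recompute the statistic leaving one observation out at a time
    jack = np.array([stat(np.delete(x, i)) for i in range(n)])
    se_jack = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))

    # Bootstrap: recompute the statistic on samples drawn with replacement
    boot = np.array([stat(rng.choice(x, size=n, replace=True)) for _ in range(2000)])
    se_boot = boot.std(ddof=1)

    # Both approximate the usual standard error of the mean
    print(se_jack, se_boot, x.std(ddof=1) / np.sqrt(n))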

Page 8

Stability argument is central to limiting law results

One proof of CLT (bedrock of classical stat theory):

1. Universality of a limiting law for a normalized sum of iid random variables through a stability argument, or Lindeberg's swapping trick: a perturbation of a (normalized) sum by a random variable with matching first and second moments does not change the distribution of the (normalized) sum in the limit.

2. Finding the limit law via an ODE.

cf. Lecture notes of Terence Tao at
http://terrytao.wordpress.com/2010/01/05/254a-notes-2-the-central-limit-theorem/

Page 9

CLT

A proof of CLT via Lindeberg's swapping trick: see the sketch below.
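A minimal sketch of the swapping step (my reconstruction, under the simplifying assumptions of iid $X_i$ with mean zero, unit variance, and finite third moments): let $Z_1,\dots,Z_n$ be iid $N(0,1)$, independent of the $X_i$, and define the hybrid sums
\[
S_{n,i} \;=\; \frac{X_1+\cdots+X_i+Z_{i+1}+\cdots+Z_n}{\sqrt{n}}, \qquad i=0,1,\dots,n,
\]
so that $S_{n,0} \sim N(0,1)$ exactly and $S_{n,n}$ is the normalized sum of interest. For a smooth test function $f$ with bounded third derivative, swapping $Z_i$ for $X_i$ changes one summand at a time, and a third-order Taylor expansion around the sum with the $i$-th term removed gives
\[
\big|\,\mathbb{E} f(S_{n,i}) - \mathbb{E} f(S_{n,i-1})\,\big| \;\le\; \frac{C\,\big(\mathbb{E}|X_i|^3 + \mathbb{E}|Z_i|^3\big)}{n^{3/2}},
\]
since the first and second moments of $X_i$ and $Z_i$ match. Summing over $i$ yields $|\mathbb{E} f(S_{n,n}) - \mathbb{E} f(S_{n,0})| \le C'/\sqrt{n} \to 0$, which is exactly the stability statement above.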

Page 10

CLT

Page 11

CLT

Summing the one-term swaps then gives the CLT. For a complete proof, see
http://terrytao.wordpress.com/2010/01/05/254a-notes-2-the-central-limit-theorem/

Page 12

Asymptotic normality of MLE

Via Taylor expansion, for a nice parametric family $f(x,\theta)$, the MLE $\hat\theta_n$ from $n$ iid samples $X_1,\dots,X_n$ has the following approximation:
\[
\sqrt{n}\,(\hat\theta_n - \theta) \;\approx\; \frac{Y_1+\cdots+Y_n}{\sqrt{n}},
\]
where $Y_i = U_\theta(X_i)$ and $U_\theta$ is the score function, with first moment (expected value) zero:
\[
U_\theta(x) \;=\; \frac{d}{d\theta}\,\log f(x,\theta).
\]
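As a concrete instance (a standard example, not spelled out on the slide): for the $N(\theta,1)$ family, $\log f(x,\theta) = -\tfrac12(x-\theta)^2 + \text{const}$, so the score is $U_\theta(x) = x - \theta$ and the MLE is the sample mean; here $\sqrt{n}(\hat\theta_n - \theta) = (Y_1+\cdots+Y_n)/\sqrt{n}$ holds exactly, and the CLT for the $Y_i$ gives asymptotic normality directly.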

Page 13

The Lindeberg swapping-trick proof of the CLT shows why the MLE is asymptotically normal: we can swap an independent normal random variable (of mean zero and variance the Fisher information) into the sum of score functions without changing the sum much – there is stability.

Asymptotic normality of MLE comes from stability

Page 14

Why does the bootstrap work?

Page 15

A sketch of proof:

A similar proof can be carried out using Lindeberg's swapping trick, with the minor modification that the second moments are matched only to order $1/\sqrt{n}$. That is, let
\[
S_{n,i} \;=\; \frac{(X_1-\mu_n) + \cdots + (X_i-\mu_n) + (X^*_{i+1}-\mu_n) + \cdots + (X^*_n-\mu_n)}{\sqrt{n}},
\]
\[
S_{n,i-1} \;=\; \frac{(X_1-\mu_n) + \cdots + (X_{i-1}-\mu_n) + (X^*_{i}-\mu_n) + \cdots + (X^*_n-\mu_n)}{\sqrt{n}}.
\]
Then the summands in the two sums above have the same first moment 0, their second moments differ by $O(1/\sqrt{n})$, and their third moments are finite. Going through Tao's proof, the second-moment mismatch contributes another term of order $O(n^{-3/2})$, which is ok.

Page 16

Stability argument is central

Recent generalizations to obtain other universal limiting distributions, e.g. the Wigner law under non-Gaussian assumptions and last passage percolation (Chatterjee, 2006, Suidan, 2006, …)

Concentration results also assume stability-type conditions.

In learning theory, stability is closely related to good generalization performance (Devroye and Wagner, 1979, Kearns and Ron, 1999, Bousquet and Elisseeff, 2002, Kutin and Niyogi, 2002, Mukherjee et al, 2006, …)

Page 17

Lasso and/or Compressed Sensing: much theory lately

Compressed sensing emphasizes the choice of the design matrix (often iid Gaussian entries).

Theoretical studies: much recent work on the Lasso in terms of
• L2 error of the parameter
• model selection consistency
• L2 prediction error (not covered in this lecture)

Page 18

Regularized M-estimation including Lasso

Estimation: minimize a loss function plus a regularization term,
\[
\hat\theta_{\lambda_n} \;\in\; \arg\min_{\theta\in\mathbb{R}^p} \Big\{ \underbrace{\mathcal{L}_n(\theta; X_1^n)}_{\text{loss function}} \;+\; \lambda_n \underbrace{r(\theta)}_{\text{regularizer}} \Big\}.
\]

Goal: for some error metric $d$ (e.g. L2 error), bound $d(\hat\theta_{\lambda_n} - \theta^*)$ under high-dimensional scaling as $(n, p)$ tends to infinity.

Page 19

Example 1: Lasso (sparse linear model)

Set-up: noisy observations $y = X\theta^* + w$ with sparse $\theta^*$, where $X \in \mathbb{R}^{n\times p}$ with $n \ll p$, the true support $S$ is small, and $S^c$ indexes the zero coefficients.

Estimator: Lasso program
\[
\hat\theta_{\lambda_n} \;\in\; \arg\min_{\theta}\; \frac{1}{n}\sum_{i=1}^n (y_i - x_i^T\theta)^2 \;+\; \lambda_n \sum_{j=1}^p |\theta_j|.
\]

Some past work: Tropp, 2004; Fuchs, 2000; Meinshausen/Buhlmann, 2006; Candes/Tao, 2005; Donoho, 2005; Zhao & Yu, 2006; Zou, 2006; Wainwright, 2009; Koltchinskii, 2007; Tsybakov et al., 2007; van de Geer, 2007; Zhang and Huang, 2008; Meinshausen and Yu, 2009; Bickel et al., 2008; Negahban et al., 2012.
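For concreteness, a small numerical sketch (my own, not from the slides) of the Lasso program above, solved by proximal gradient descent (ISTA); the step size, penalty level, and simulated data are illustrative choices.

    import numpy as np

    def soft_threshold(v, t):
        # Elementwise soft-thresholding: prox of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        # Minimize (1/n) * ||y - X theta||_2^2 + lam * ||theta||_1
        n, p = X.shape
        theta = np.zeros(p)
        step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = -2.0 / n * X.T @ (y - X @ theta)
            theta = soft_threshold(theta - step * grad, step * lam)
        return theta

    # Illustrative data: n = 100, p = 200, k = 5 nonzero coefficients
    rng = np.random.default_rng(0)
    n, p, k, sigma = 100, 200, 5, 0.5
    X = rng.standard_normal((n, p))
    theta_star = np.zeros(p); theta_star[:k] = 1.0
    y = X @ theta_star + sigma * rng.standard_normal(n)
    lam = 2 * sigma * np.sqrt(2 * np.log(p) / n)   # ~ 2 sigma sqrt(2 log p / n), as in the corollary on Page 27
    theta_hat = lasso_ista(X, y, lam)
    print(np.linalg.norm(theta_hat - theta_star))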

Page 20

Example 2: Low-rank matrix approximation

Set-up: matrix $\Theta^* \in \mathbb{R}^{k\times m}$ with rank $r \ll \min\{k, m\}$, so that $\Theta^* = U D V^T$ with $U$ of size $k\times r$, $D$ of size $r\times r$, and $V^T$ of size $r\times m$.

Estimator:
\[
\hat\Theta \;\in\; \arg\min_{\Theta}\; \frac{1}{n}\sum_{i=1}^n \big(y_i - \langle\!\langle X_i, \Theta\rangle\!\rangle\big)^2 \;+\; \lambda_n \sum_{j=1}^{\min\{k,m\}} \sigma_j(\Theta),
\]
i.e. least squares with a nuclear-norm penalty (the sum of the singular values $\sigma_j(\Theta)$).

Some past work: Frieze et al, 1998; Achlioptas & McSherry, 2001; Fazel & Boyd, 2001; Srebro et al, 2004; Drineas et al, 2005; Rudelson & Vershynin, 2006; Recht et al, 2007; Bach, 2008; Candes and Tao, 2009; Halko et al, 2009; Keshavan et al, 2009; Negahban & Wainwright, 2009; Tsybakov???
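A small sketch (my own, not from the slides) of the key computational primitive behind nuclear-norm-penalized estimators like the one above: the proximal operator of the nuclear norm, which soft-thresholds the singular values. The data and threshold are illustrative.

    import numpy as np

    def svd_soft_threshold(M, tau):
        # Prox of tau * (nuclear norm): soft-threshold the singular values of M
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)
        return (U * s_shrunk) @ Vt

    # Illustrative use: denoise a noisy low-rank matrix
    rng = np.random.default_rng(1)
    k, m, r = 50, 40, 3
    Theta_star = rng.standard_normal((k, r)) @ rng.standard_normal((r, m))
    Y = Theta_star + 0.5 * rng.standard_normal((k, m))
    Theta_hat = svd_soft_threshold(Y, tau=5.0)
    print(np.linalg.matrix_rank(Theta_hat),
          np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star))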

Page 21

Example 3: Structured (inverse) covariance estimation

(Figure: zero pattern of the inverse covariance matrix and the corresponding 5-node graph.)

Set-up: samples from a random vector with sparse covariance $\Sigma$ or sparse inverse covariance $\Theta^* \in \mathbb{R}^{p\times p}$.

Estimator:
\[
\hat\Theta \;\in\; \arg\min_{\Theta}\; \Big\langle\!\!\Big\langle \frac{1}{n}\sum_{i=1}^n x_i x_i^T,\; \Theta \Big\rangle\!\!\Big\rangle \;-\; \log\det(\Theta) \;+\; \lambda_n \sum_{i\neq j} |\Theta_{ij}|.
\]

Some past work: Yuan and Lin, 2006; d'Aspremont et al, 2007; Bickel & Levina, 2007; El Karoui, 2007; Rothman et al, 2007; Zhou et al, 2007; Friedman et al, 2008; Ravikumar et al, 2008; Lam and Fan, 2009; Cai and Zhou, 2009.
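As a usage sketch (my own; it assumes scikit-learn's GraphicalLasso, which fits this L1-penalized Gaussian log-likelihood, with alpha playing the role of λ_n), using an illustrative chain-graph precision matrix:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(2)
    p = 5
    # Sparse precision matrix: a chain graph
    Theta_star = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    Sigma_star = np.linalg.inv(Theta_star)
    X = rng.multivariate_normal(np.zeros(p), Sigma_star, size=500)

    model = GraphicalLasso(alpha=0.05).fit(X)
    print(np.round(model.precision_, 2))   # estimated sparse inverse covariance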

Page 22

Unified Analysis

S. Negahban, P. Ravikumar, M. J. Wainwright and B. Yu (2012). A unified framework for the analysis of regularized M-estimators. Statistical Science, 27(4): 538-557.

Many high-dimensional models and associated results have been developed case by case. Is there a core set of ideas underlying these analyses? We discovered that two key conditions are needed for a unified error bound:
• decomposability of the regularizer r (e.g. the L1 norm);
• restricted strong convexity of the loss function (e.g. the least squares loss).

Page 23

Why can we estimate parameters?

Strong convexity of the cost (curvature, captured by the Fisher information when p is fixed):

(Figure: two loss curves illustrating (a) high curvature: easy to estimate; (b) low curvature: harder.)

Asymptotic analysis of the maximum likelihood estimator or OLS:
1. Fisher information (Hessian matrix) non-degenerate, i.e. strong convexity in all directions.
2. Sampling noise disappears as the sample size gets large.

Page 24

In high-dim and when r corresponds to structure (e.g. sparsity)

1. Restricted strong convexity (RSC) (courtesy of high-dim):
• loss functions are often flat in many directions in high dim;
• "curvature" is needed only for directions $\Delta \in C$ in high dim;
• the loss function $\mathcal{L}_n(\theta) := \mathcal{L}_n(\theta; X_1^n)$ satisfies
\[
\underbrace{\mathcal{L}_n(\theta^* + \Delta) - \mathcal{L}_n(\theta^*)}_{\text{excess loss}} \;-\; \underbrace{\langle \nabla\mathcal{L}_n(\theta^*), \Delta\rangle}_{\text{score function}} \;\ge\; \underbrace{\kappa(\mathcal{L})\, d^2(\Delta)}_{\text{squared error}} \qquad \text{for all } \Delta \in C.
\]

2. Decomposability of regularizers makes C small in high-dim:
• for subspace pairs $(A, B^\perp)$, where A represents the model constraints:
\[
r(u+v) = r(u) + r(v) \quad \text{for all } u \in A \text{ and } v \in B^\perp;
\]
• this forces the error $\Delta = \hat\theta_{\lambda_n} - \theta^*$ into C.
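A concrete instance (standard, not spelled out on the slide): for the $\ell_1$ norm and a model that is sparse on a support set $S$, take $A = B = \{\theta : \theta_{S^c} = 0\}$, so $B^\perp = \{\theta : \theta_S = 0\}$. Then for any $u$ supported on $S$ and $v$ supported on $S^c$,
\[
\|u+v\|_1 = \|u\|_1 + \|v\|_1,
\]
which is exactly the decomposability required above.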

Page 25

In high-dim and when r corresponds to structure (e.g. sparsity)

When p > n, as in the fMRI problem, LS is flat in many directions, i.e. it is impossible to have strong convexity in all directions; regularization is needed. When $\lambda_n$ is large enough to overcome the sampling noise, we have a deterministic situation:
• The decomposable regularization norm forces the estimation difference $\hat\theta_{\lambda_n} - \theta^*$ into a constraint or "cone" set.
• This constraint set is small relative to $\mathbb{R}^p$ when p is large.
• Strong convexity is needed only over this constraint set.

When the predictors are random and Gaussian (dependence ok), strong convexity does hold for LS over the $\ell_1$-norm-induced constraint set.

Page 26

Main Result (Negahban et al, 2012)

Theorem (Negahban, Ravikumar, Wainwright & Yu, 2009). Say the regularizer decomposes across the pair $(A, B^\perp)$ with $A \subseteq B$, and restricted strong convexity holds for $(A, B^\perp)$ over C. With the regularization constant chosen such that $\lambda_n \ge 2\, r^*\!\big(\nabla\mathcal{L}(\theta^*; X_1^n)\big)$, any solution $\hat\theta_{\lambda_n}$ satisfies
\[
d\big(\hat\theta_{\lambda_n} - \theta^*\big) \;\le\; \underbrace{\frac{1}{\kappa(\mathcal{L})}\,\Psi(B)\,\lambda_n}_{\text{estimation error}} \;+\; \underbrace{\sqrt{\frac{2\lambda_n}{\kappa(\mathcal{L})}\, r\big(\theta^*_{A^\perp}\big)}}_{\text{approximation error}}.
\]

Quantities that control the rates:
• restricted strong convexity parameter: $\kappa(\mathcal{L})$
• dual norm of the regularizer: $r^*(v) := \sup_{r(u)=1} \langle v, u\rangle$
• optimal subspace constant: $\Psi(B) = \sup_{\theta\in B\setminus\{0\}} r(\theta)/d(\theta)$

More work is required for each case: verify restricted strong convexity and find $\lambda_n$ to overcome the sampling noise (concentration inequalities).

Page 27

Recovering an existing result in Bickel et al 08

Example: Linear regression (exact sparsity)

Lasso program:
\[
\min_{\theta\in\mathbb{R}^p}\; \Big\{ \frac{1}{n}\|y - X\theta\|_2^2 + \lambda_n \|\theta\|_1 \Big\}
\]
• RSC reduces to a lower bound on restricted singular values of $X \in \mathbb{R}^{n\times p}$.
• For a k-sparse vector, $\|\theta\|_1 \le \sqrt{k}\,\|\theta\|_2$.

Corollary. Suppose that the true parameter $\theta^*$ is exactly k-sparse. Under RSC and with $\lambda_n \ge 2\,\|X^T w / n\|_\infty$, any Lasso solution satisfies
\[
\|\hat\theta_{\lambda_n} - \theta^*\|_2 \;\le\; \frac{1}{\kappa(\mathcal{L})}\,\sqrt{k}\,\lambda_n.
\]

Some stochastic instances: recover known results.
• Compressed sensing: $X_{ij} \sim N(0,1)$ and bounded noise $\|w\|_2 \le \sigma\sqrt{n}$.
• Deterministic design: X with bounded columns and $w_i \sim N(0,\sigma^2)$:
\[
\Big\|\frac{X^T w}{n}\Big\|_\infty \;\le\; \sqrt{\frac{2\sigma^2\log p}{n}} \ \text{ w.h.p.} \;\Longrightarrow\; \|\hat\theta_{\lambda_n} - \theta^*\|_2 \;\le\; \frac{8\sigma}{\kappa(\mathcal{L})}\,\sqrt{\frac{k\log p}{n}}.
\]

(e.g., Candes & Tao, 2007; Meinshausen/Yu, 2008; Zhang and Huang, 2008; Bickel et al., 2009)
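An illustrative simulation (my own, with arbitrary constants) of the $\sqrt{k\log p / n}$ scaling in the corollary; note that scikit-learn's Lasso minimizes $\frac{1}{2n}\|y-X\theta\|_2^2 + \alpha\|\theta\|_1$, so its alpha corresponds to $\lambda_n/2$ in the notation above.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    p, k, sigma = 500, 10, 1.0
    for n in [200, 400, 800, 1600]:
        X = rng.standard_normal((n, p))
        theta_star = np.zeros(p); theta_star[:k] = 1.0
        y = X @ theta_star + sigma * rng.standard_normal(n)
        lam = 2 * sigma * np.sqrt(2 * np.log(p) / n)          # theoretical choice of lambda_n
        theta_hat = Lasso(alpha=lam / 2, fit_intercept=False).fit(X, y).coef_
        err = np.linalg.norm(theta_hat - theta_star)
        # err should scale roughly like sqrt(k log p / n), so this ratio should be roughly constant
        print(n, err, err / np.sqrt(k * np.log(p) / n))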

Page 28

Obtaining a new result

Example: Linear regression (weak sparsity)

For some $q \in [0,1]$, say $\theta^*$ belongs to the $\ell_q$-"ball"
\[
B_q(R_q) := \Big\{ \theta \in \mathbb{R}^p \;:\; \sum_{j=1}^p |\theta_j|^q \le R_q \Big\}.
\]

Corollary. For $\theta^* \in B_q(R_q)$, any Lasso solution satisfies (w.h.p.)
\[
\|\hat\theta_{\lambda_n} - \theta^*\|_2^2 \;\le\; O\!\left(\sigma^2 R_q \Big(\frac{\log p}{n}\Big)^{1-q/2}\right).
\]

This rate is known to be minimax optimal (Raskutti et al., 2009).

Page 29

Effective sample size n/log(p)

We lose at least a factor of log(p) for having to search over a set of p predictors for a small number of relevant predictors.

For the static image data, we lose a factor of log(10921) = 9.28, so the effective sample size is not n = 1750, but 1750/9.28 = 188.

For the movie data, we lose at least a factor of log(26000) = 10.16, so the effective sample size is not n = 7200, but 7200/10.16 = 708.

The log p factor is very much a consequence of the light-tail Gaussian (or sub-Gaussian) noise assumption. We will lose more if the noise term has a heavier tail.
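The arithmetic above in a couple of lines (natural logarithm, as used on the slide):

    import numpy as np

    def effective_sample_size(n, p):
        # Effective sample size n / log(p), reflecting the log(p) search penalty
        return n / np.log(p)

    print(effective_sample_size(1750, 10921))   # ~188 for the static image data
    print(effective_sample_size(7200, 26000))   # ~708 for the movie data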

Page 30

Summary of unified analysis

• Recovered existing results:
  - sparse linear regression with the Lasso
  - multivariate group Lasso
  - sparse inverse covariance estimation

• Derived new results:
  - weakly sparse linear models with the Lasso
  - low-rank matrix estimation
  - generalized linear models

Page 31

An MSE result for L2Boosting

Peter Buhlmann and Bin Yu (2003). Boosting with the L2 Loss: Regression and Classification. J. Amer. Statist. Assoc. 98, 324-340.

Note that the variance term is a measure of complexity; it increases by an exponentially diminishing amount as the iteration number m increases, and it is always bounded.

Page 32

Model Selection Consistency of Lasso

Zhao and Yu (2006), On model selection consistency of Lasso, JMLR, 7, 2541-2563.

Set-up: linear regression model with n observations and p predictors.

Under assumption (A), Knight and Fu (2000) showed L2 estimation consistency for fixed p.

Page 33

Model Selection Consistency of Lasso

• p small, n large (Zhao and Yu, 2006): assume (A) and the (population version) irrepresentable condition, a condition on a 1 by (p−s) matrix,
\[
\big|\,\mathrm{sign}\big((\beta_1,\dots,\beta_s)\big)\,(X_S^\top X_S)^{-1} X_S^\top X_{S^c}\,\big| \;<\; 1 \quad \text{(entrywise)}.
\]
Then, roughly*, the Lasso is model selection consistent.

* Some ambiguity when equality holds.

• Related work: Tropp (06), Meinshausen and Buhlmann (06), Zou (06), Wainwright (09)

Page 34

Irrepresentable condition (s=2, p=3): geometry

(Figure: geometric illustration of the condition for correlation r = 0.4 and r = 0.6.)

Page 35

Model Selection Consistency of Lasso

• Consistency also holds for s and p growing with n, assuming:
  - the irrepresentable condition;
  - bounds on the maximum and minimum eigenvalues of the design matrix;
  - the smallest nonzero coefficient bounded away from zero.

• Gaussian noise (Wainwright, 09); finite 2k-th moment noise (Zhao & Yu, 06).

Page 36

Consistency of Lasso for Model Selection

• Interpretation of the condition: regress the irrelevant predictors on the relevant predictors. If the L1 norm of the regression coefficients (*) is:
  - larger than 1, the Lasso cannot distinguish the irrelevant predictor from the relevant predictors for some parameter values;
  - smaller than 1, the Lasso can distinguish the irrelevant predictor from the relevant predictors.

• Sufficient conditions (verifiable):
  - constant correlation
  - power-decay correlation
  - bounded correlation*

Page 37

Bootstrap and Lasso+mLS or Lasso+Ridge

Liu and Yu (2013), http://arxiv.org/abs/1306.5505

Because the Lasso is biased, it is common to use it to select variables and then correct the bias with a modified OLS (mLS) or a Ridge fit with a very small smoothing parameter. A residual bootstrap can then be carried out to get confidence intervals for the parameters. Under the irrepresentable condition (IC), Lasso+mLS and Lasso+Ridge are shown to have the parametric MSE rate of k/n (no log p term). The residual bootstrap is shown to work as well. Even when the IC does not hold, simulations show that this bootstrap seems to work better than the comparison methods in the paper. See Liu and Yu (2013) for more details.
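A rough sketch (my own simplification of the idea described above, with arbitrary tuning constants; the exact Lasso+mLS procedure is in Liu and Yu (2013)):

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(5)
    n, p, k = 200, 50, 5
    X = rng.standard_normal((n, p))
    beta_star = np.zeros(p); beta_star[:k] = 1.0
    y = X @ beta_star + 0.5 * rng.standard_normal(n)

    # Step 1: Lasso selects a support
    lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y)
    S = np.flatnonzero(lasso.coef_)

    # Step 2: bias-corrected refit by least squares on the selected predictors
    ols = LinearRegression(fit_intercept=False).fit(X[:, S], y)
    resid = y - X[:, S] @ ols.coef_

    # Step 3: residual bootstrap confidence intervals for the refitted coefficients
    B = 1000
    boot = np.empty((B, S.size))
    for b in range(B):
        y_b = X[:, S] @ ols.coef_ + rng.choice(resid, size=n, replace=True)
        boot[b] = LinearRegression(fit_intercept=False).fit(X[:, S], y_b).coef_
    ci = np.percentile(boot, [2.5, 97.5], axis=0)
    print(dict(zip(S.tolist(), ci.T.round(2).tolist())))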

Page 38

L1 penalized log Gaussian Likelihood

Given n iid observations of X, the L1-penalized Gaussian log-likelihood estimator (as on Page 21) can be computed; Banerjee, El Ghaoui and d'Aspremont (08) do so by a block descent algorithm.

Page 39

Model selection consistency

Ravikumar, Wainwright, Raskutti and Yu (2011) give sufficient conditions for model selection consistency, stated in terms of the Hessian and a "model complexity" quantity K.

Page 40

Model selection consistency (Ravikumar et al, 11)

Assume the irrepresentable condition holds, and either
1. X is sub-Gaussian (with a suitable parameter) and the effective sample size is large enough, or
2. X has finite 4m-th moments and the corresponding sample size requirement holds.
Then, with high probability as n tends to infinity, the correct model is chosen.

Page 41

Success prob's dependence on n and p (Gaussian)

• Edge covariances set to 0.1; each point is an average over 100 trials.

• Curves stack up in the second plot, so (n/log p) controls model selection.

Page 42

Success prob's dependence on "model complexity" K and n

• Curves from left to right have increasing values of K.

• Models with larger K thus require more samples n for the same probability of success.

Chain graph with p = 120 nodes.

Page 43

For model selection consistency results for gradient and/or backward-forward algorithms, see works by T. Zhang:

• Tong Zhang. Sparse Recovery with Orthogonal Matching Pursuit under RIP. IEEE Trans. Info. Th., 57: 6215-6221, 2011.

• Tong Zhang. Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations. IEEE Trans. Info. Th., 57: 4689-4708, 2011.

Page 44

Back to stability: robust statistics also aims at stability

Mean functions are fitted with the L2 loss. What if the "errors" have heavier tails? The L1 loss is commonly used in robust statistics to deal with heavy-tailed errors in regression. Will the L1 loss add more stability?

Page 45

Model perturbation is used in Robust Statistics

Tukey (1958), Huber (1964, …), Hampel (1968, …), Bickel (1976, …), Rousseeuw (1979, …), Portnoy (1979, …), …

"Overall, and in analogy with, for example, the stability aspects of differential equations or of numerical computations, robustness theories can be viewed as stability theories of statistical inference.”

- p. 8, Hampel, Rousseeuw, Ronchetti and Stahel (1986)

Page 46

Seeking insights through analytical work

For high-dim data such as ours, removing some data units could change the outcomes of our model because of feature dependence.

This phenomenon is also seen in simulated data from Gaussian linear models in high-dim.

How does sample to sample variability interact with heavy tail errors?

Page 47

Sample stability meets robust statistics in high-dim (El Karoui, Bean, Bickel, Lim and Yu, 2012)

Set-up: linear regression model. For i = 1, …, n:
\[
Y_{n\times 1} = X_{n\times p}\,\beta_{p\times 1} + \varepsilon_{n\times 1}, \qquad X_i \sim N(0, \Sigma_p), \quad \varepsilon_i \ \text{iid}, \ \mathbb{E}\varepsilon_i = 0,
\]
and the M-estimator
\[
\hat\beta = \arg\min_{\beta\in\mathbb{R}^p} \sum_i \rho(Y_i - X_i'\beta).
\]

We consider the random-matrix regime $p/n \to \kappa \in (0,1)$.

Due to invariance, WLOG, assume $\Sigma_p = I_p$, $\beta = 0$.

Page 48

Sample stability meets robust statistics in high-dim (El Karoui, Bean, Bickel, Lim and Yu, 2012)

RESULT (in an important special case):

1. Let $r_\rho(p,n) = \|\hat\beta\|$; then $\hat\beta$ is distributed as $r_\rho(p,n)\,U$, where $U \sim \mathrm{uniform}(S^{p-1})$.

2. $r_\rho(p,n) \to r_\rho(\kappa)$. Let $\hat z_\varepsilon := \varepsilon + r_\rho(\kappa)\,Z$, with $Z \sim N(0,1)$ independent of $\varepsilon$. Then $r_\rho(\kappa)$ satisfies
\[
\mathbb{E}\big\{[\mathrm{prox}_c(\rho)]'(\hat z_\varepsilon)\big\} = 1 - \kappa,
\qquad
\mathbb{E}\big\{[\hat z_\varepsilon - \mathrm{prox}_c(\rho)(\hat z_\varepsilon)]^2\big\} = \kappa\, r_\rho^2(\kappa),
\]
where
\[
\mathrm{prox}_c(\rho)(x) = \arg\min_{y\in\mathbb{R}}\Big[\rho(y) + \frac{(x-y)^2}{2c}\Big].
\]
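Two standard prox computations (not on the slide) that make the system above concrete:
\[
\rho(y) = \tfrac12 y^2 \ \Rightarrow\ \mathrm{prox}_c(\rho)(x) = \frac{x}{1+c},
\qquad
\rho(y) = |y| \ \Rightarrow\ \mathrm{prox}_c(\rho)(x) = \mathrm{sign}(x)\,\max(|x| - c,\, 0).
\]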

Page 49

Sample stability meets robust statistics in high-dim (continued)

Limiting results: the normalization constant stabilizes.

Sketch of proof:
• invariance;
• a "leave-one-out" trick both ways (reminiscent of the "swapping trick" for the CLT);
• analytical derivations (prox functions), reminiscent of proving normality in the limit for the CLT.

Page 50

Sample stability meets robust statistics in high-dim (continued)

Corollary of the result:

When $\kappa = p/n > 0.3$, L2 loss fitting (OLS) is better than L1 loss fitting (LAD), even when the error is double-exponential.

(Figure: ratio of Var(LAD) to Var(OLS); blue – simulated, magenta – analytic.)
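An illustrative simulation of the corollary (my own; it assumes statsmodels for the LAD fit): with p/n = 0.5 and double-exponential errors, OLS typically has smaller error than LAD.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n, p = 200, 100                              # kappa = p/n = 0.5 > 0.3
    reps, err_ols, err_lad = 20, [], []
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        eps = rng.laplace(scale=1.0, size=n)     # double-exponential errors
        y = eps                                  # beta = 0 WLOG, as on Page 47
        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        beta_lad = sm.QuantReg(y, X).fit(q=0.5).params
        err_ols.append(np.sum(beta_ols ** 2))
        err_lad.append(np.sum(beta_lad ** 2))
    print("mean ||beta_OLS||^2:", np.mean(err_ols))
    print("mean ||beta_LAD||^2:", np.mean(err_lad))   # typically larger than OLS in this regime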

Page 51

Sample stability meets robust statistics in high-dim (continued)

Remarks:

• The MLE doesn't work here for a different reason than in the cases where penalized MLE works better than MLE: we have unbiased estimators and a non-sparse situation, and the question is about variance.

• The optimal loss function can be calculated (Bean et al, 2012).

• A simulated model with a design matrix from the fMRI data and double-exponential errors shows the same phenomenon: for p/n > 0.3, OLS is better than LAD. This gives some insurance for using the L2 loss function in the fMRI project.

• The results hold for more general settings.

Page 52

What of the future?

"The future of data analysis can involve great progress, the overcoming of real difficulties, and the provision of a great service to all fields of science and technology. Will it? That remains to us, to our willingness to take up the rocky road of real problems in preference to the smooth road of unreal assumptions, arbitrary criteria, and abstract results without real attachments. Who is for the challenge?"

--- John W. Tukey (1962), "The Future of Data Analysis"

John W. Tukey June 16, 1915 – July 26, 2000

Page 53

Thank you for your sustained enthusiasm and questions!