Correlated variable selection and online learning

Alan Qi, Purdue CS & Statistics

Joint work with F. Yan, S. Zhe, T.P. Minka, and R. Xiang


Page 1:

Correlated variable selection and online learning

Alan Qi, Purdue CS & Statistics

Joint work with F. Yan, S. Zhe, T.P. Minka, and R. Xiang

Page 2:

Outline

[Diagrams: a wide data matrix (n samples by p variables or features) for the p >> n setting, and a tall one for the n >> p setting]

‣ EigenNet and NaNOS: Selecting correlated variables (p >> n)

‣ Virtual Vector Machine: Bayesian online learning (n >> p)


Page 4:

Correlated variable selection

[Diagram: data matrix with p variables or features (columns) and n samples (rows)]

Many variables (p >> n)

Page 5:

Uncorrelated variables

Lasso (Tibshirani 1994) works well when variables are uncorrelated.

[Plot: Lasso solution separating class 1 from class 2 on two uncorrelated variables x1 and x2]

Page 6:

Correlated variables

Problem: Lasso selects only one out of two strongly correlated variables.

[Plot: Lasso solution on two strongly correlated variables x1 and x2]

Page 7:

Sparsity regularizers

• Lasso: variable selection by l1 regularization

• Elastic net (Zou & Hastie 2005): selection of groups of variables by a composite l1/l2 regularization

[Figure: regularization geometry showing the feasible set, the point of highest data likelihood, and the resulting solution]

Page 8:

Sparsity regularizers

• Lasso: variable selection by l1 regularization

• Elastic net (Zou & Hastie 2005): selection of groups of variables by a composite l1/l2 regularization

• And many more: SCAD, etc.
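To see the behavioral difference concretely, here is a small illustrative sketch (not from the talk) using scikit-learn's Lasso and ElasticNet on a toy regression problem with two nearly duplicated features; the data, penalty strengths, and seed are all assumptions, and exact coefficients will vary, but Lasso typically concentrates its weight on one of the correlated pair while the elastic net spreads it over both.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n = 200

# Two strongly correlated features plus one independent feature.
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly a copy of x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Response depends on the correlated pair and on the independent feature.
y = x1 + x2 + 0.5 * x3 + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print("Lasso coefficients:      ", np.round(lasso.coef_, 3))
print("Elastic net coefficients:", np.round(enet.coef_, 3))
# Typically Lasso drives one of the first two coefficients toward zero,
# while the elastic net assigns similar weight to both correlated features.
```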

Page 9:

Both Lasso and Elastic net ignore valuable correlation information embedded in data.

Page 10:

EigenNet

• Guide variable selection by the data's eigen-structure

• Select an eigen-subspace based on the label information

Page 11:

Sparse conditional model

$p(y_i \mid \mathbf{x}_i, \mathbf{w}) = \frac{1}{1 + \exp(-y_i \mathbf{w}^\top \mathbf{x}_i)}, \quad i = 1, \ldots, n$

$p(\mathbf{w}) \propto \exp\Big(-\lambda_1 \sum_d |w_d|\Big)$

[Graphical model: the weight vector w, with hyperparameter λ1, generates each label y_i from input x_i, for i = 1, ..., n]
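Under this model, MAP estimation minimizes the logistic loss plus an l1 penalty. The following sketch (my own, not code from the talk) writes out that negative log posterior, up to an additive constant:

```python
import numpy as np

def neg_log_posterior(w, X, y, lam1):
    """Negative log posterior of the sparse conditional model (up to a constant):
    the logistic loss sum_i log(1 + exp(-y_i w^T x_i)) plus the Laplace-prior
    term lam1 * sum_d |w_d|.  X has shape (n, p); y takes values in {-1, +1}."""
    margins = y * (X @ w)
    logistic_loss = np.sum(np.logaddexp(0.0, -margins))  # stable log(1 + e^{-m})
    l1_penalty = lam1 * np.sum(np.abs(w))
    return logistic_loss + l1_penalty
```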

Page 12:

Generative model

[Graphical model: eigenvectors v_j and associated variables s_j, with hyperparameter λ2, for j = 1, ..., m]

$V = [\mathbf{v}_1, \ldots, \mathbf{v}_m]$, where $\mathbf{v}_j$ is an eigenvector of $X^\top X$.
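The eigen-structure used by the generative component is just the eigendecomposition of X^T X, i.e., the right singular vectors of the data matrix. A minimal sketch of computing the top-m eigenvectors, assuming X has shape (n, p):

```python
import numpy as np

def data_eigenvectors(X, m):
    """Return the top-m eigenvectors of X^T X as the columns of a (p, m) matrix.
    Computed via the SVD of X: if X = U S Vt, then the rows of Vt are the
    eigenvectors of X^T X, with eigenvalues S**2."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:m].T
```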

Page 13:

Integration of models

[Graphical model: the conditional classification component (w, λ1; y_i, x_i for i = 1, ..., n) coupled through w̃ and λ3 with the generative eigen-component (v_j, s_j, λ2 for j = 1, ..., m)]

Page 14:

When variables are uncorrelated

[Figure: data likelihood contours, the feasible set, and eigenvector v]

Page 15:

When variables are correlated

[Figure: the eigenvector v attracts the classifier.]

Page 16:

Adaptive composite regularization

[Figure: the feasible set when variables are uncorrelated (left) vs. when variables are correlated (right), with eigenvector v]
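As a purely illustrative aside (this is not the EigenNet regularizer from the talk or the paper), one way such an adaptive composite penalty could look is an l1 term plus an l1 term on the component of w orthogonal to a unit-norm eigenvector v, so that weight placed along v is penalized less when the data are correlated; the functional form and the names below are my assumptions.

```python
import numpy as np

def illustrative_composite_penalty(w, v, lam1, lam3):
    """Illustrative composite penalty (NOT the EigenNet objective):
    lam1 * ||w||_1 + lam3 * ||w - (v^T w) v||_1, with v a unit-norm eigenvector.
    The second term penalizes the part of w orthogonal to v, so solutions
    aligned with v are favored when lam3 is large relative to lam1."""
    w_along_v = (v @ w) * v          # projection of w onto v
    return lam1 * np.sum(np.abs(w)) + lam3 * np.sum(np.abs(w - w_along_v))
```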

Page 17:

EigenNet vs. Lasso

Both select the relevant uncorrelated variable.

[Plot: EigenNet and Lasso solutions on two variables x1 and x2]

Page 18:

EigenNet vs. Lasso

EigenNet selects both correlated variables.

[Plot: EigenNet and Lasso solutions on two correlated variables x1 and x2]

Page 19:

Prediction with uncorrelated variables

• n: 10-80
• p: 40
• 8 variables are relevant to the labels

[Plot: test error rate vs. number of training examples for Lasso, Elastic net, Bayesian lasso, and EigenNet]

Results averaged over 10 runs

Page 20:

Prediction with correlated variables

• n: 10-80
• p: 40
• 8 variables are relevant
• Two groups of 4 correlated variables

[Plot: test error rate vs. number of training examples for Lasso, Elastic net, Bayesian lasso, and EigenNet]

Results averaged over 10 runs
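The slide does not give the simulation protocol; the sketch below shows one plausible way (an assumption on my part) to generate data of this shape: p = 40 features, 8 relevant ones forming two groups of 4 correlated variables, and binary labels from a logistic model.

```python
import numpy as np

def make_correlated_data(n, p=40, noise=0.1, seed=0):
    """Synthetic data with two groups of 4 correlated variables (features 0-3
    and 4-7), which are also the 8 relevant features; the remaining features
    are independent noise.  One plausible protocol, not the one from the talk."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    for cols in (range(0, 4), range(4, 8)):
        factor = rng.normal(size=n)                    # shared latent factor
        for j in cols:
            X[:, j] = factor + noise * rng.normal(size=n)
    w_true = np.zeros(p)
    w_true[:8] = 1.0                                   # 8 relevant variables
    logits = X @ w_true
    y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-logits)), 1, -1)
    return X, y, w_true
```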

Page 21:

True and estimated classifiers

[Plot: weights of the true and estimated classifiers for Lasso, Elastic net, and EigenNet, with the two groups of correlated variables marked Group 1 and Group 2]

Page 22:

SNP-based AD classification

[Bar chart: classification error rate for Lasso, Elastic net, ARD, and EigenNet]

374 subjects, 2000 SNPs

Page 23:

Predicting ADAS-Cog score

[Bar chart: root-mean-square error for Lasso, Elastic net, and EigenNet]

726 subjects, 14 imaging features

Page 24:

Joint graph and node selection

[Diagram: a graph over the input variables, connected to the labels]

Constraints: e.g., biological pathways, structural symmetry

Page 25:

New sparse Bayesian model

NaNOS (Zhe et al. 2013):

- Encode constraints (e.g., pathways) via graph Laplacians in a generalized spike and slab prior

- Automatically determine which constraints and features are useful
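As a reminder of the building block this relies on, a graph Laplacian is constructed directly from a pathway's adjacency structure. A minimal sketch (mine, not the NaNOS code), assuming a symmetric binary adjacency matrix A for one pathway:

```python
import numpy as np

def graph_laplacian(A, normalized=True):
    """Graph Laplacian of an undirected graph with adjacency matrix A:
    L = D - A, or the symmetric normalized form I - D^{-1/2} A D^{-1/2}.
    Such a Laplacian can encode pathway structure as a smoothness penalty
    w^T L w over the weights of the genes in that pathway."""
    d = A.sum(axis=1)
    if not normalized:
        return np.diag(d) - A
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(A.shape[0]) - (d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :])
```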

Page 26:

Results on synthetic data

[Fig. 4: Prediction errors and F1 scores for gene selection in Experiment 3. Panels: (a) Regression: PMSE; (b) Regression: F1; (c) Classification: error rate (%); (d) Classification: F1]

[Fig. 5: F1 scores for pathway selection in EXP1-D1-D2, EXP2-D1-D2, and EXP3. Panels: (a) Regression: F1; (b) Classification: F1. "EXP" stands for "Experiment" and "D" stands for "Data model".]

For the binary response, we followed the same procedure as for the continuous response to generate the expression profiles X and the parameters w. Then we sampled t from (2).

For each of the settings, we simulated 100 samples for training and 100 samples for testing. We repeated the simulation 50 times. To evaluate the predictive performance, we calculated the prediction mean-squared error (PMSE) for regression and the error rate for classification. To examine the accuracy of gene and pathway selection, we also computed sensitivity and specificity and summarized them in the F1 score,

$F_1 = \frac{2\,(\text{sensitivity} \times \text{specificity})}{\text{sensitivity} + \text{specificity}}.$

The bigger the F1 score, the higher the selection accuracy.
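As a quick reference, this selection score is a one-liner once sensitivity and specificity are in hand (a trivial sketch, with inputs assumed to be fractions in [0, 1]):

```python
def selection_f1(sensitivity, specificity):
    """Harmonic-mean-style F1 score of sensitivity and specificity,
    as used above for gene and pathway selection accuracy."""
    return 2.0 * sensitivity * specificity / (sensitivity + specificity)
```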

All the results are summarized in Figure 2, in which the error bars represent the standard errors. For all the settings, NaNOS gives smaller errors and higher F1 scores for gene selection than the other methods, except that, for classification of the samples from the second data model, NaNOS and group lasso obtain comparable F1 scores. All the improvements are significant under the two-sample t-test (p < 0.05).

We also show the accuracy of group lasso, GSEA, and NaNOS for pathway selection in Figure 5. Again, NaNOS achieves significantly higher selection accuracy. Because the LL approach was developed for regression, we did not have its classification results. While the LL approach uses the topological information of all the pathways, they are merged together into a global network for regularization. In contrast, using a sparse prior over individual pathways, NaNOS can explicitly select pathways relevant to the response, guiding the gene selection. This may contribute to its improved performance.

Experiment 2. For the second experiment, we did not require all genes in relevant pathways to have an effect on the response. Specifically, we simulated expression levels of 100 transcription factors (TFs), each regulating 21 genes in a simple regulatory network. We sampled the expression levels of the TFs, the regulated genes, and their response in the same way as in Experiment 1, except that we set

$\rho = \big[1, \underbrace{\tfrac{1}{\sqrt{21}}, \ldots, \tfrac{1}{\sqrt{21}}}_{10}, \underbrace{0, \ldots, 0}_{11}\big]$

for the first data generation model and

$\rho = \big[1, \tfrac{-1}{\sqrt{21}}, \tfrac{-1}{\sqrt{21}}, \tfrac{-1}{\sqrt{21}}, \underbrace{\tfrac{1}{\sqrt{21}}, \ldots, \tfrac{1}{\sqrt{21}}}_{7}, \underbrace{0, \ldots, 0}_{11}\big] \qquad (24)$

for the second data generation model. Note that the last eleven zero elements in ρ indicate that the corresponding genes have no effect on the response t, even in the four relevant pathways.

The results for both the continuous and binary responses are summarized in Figures 3 and 5. For regression based on the first data model, NaNOS and LL obtain comparable F1 scores; for all the other cases, NaNOS significantly outperforms the alternative methods in terms of both prediction and selection accuracy (p < 0.05).

Experiment 3. Finally, we simulated the data as in Experiment 2, except that we replaced the √21 in the denominators in (24) with 21, to obtain a weaker regulatory effect of the TF. Again, as shown in Figures 4 and 5, NaNOS outperforms the competing methods significantly.

4.2 Application to expression data

Now we demonstrate the proposed method by analyzing gene expression datasets for the cancer studies of diffuse large B cell lymphoma (DLBCL) (Rosenwald et al., 2002), colorectal cancer (CRC) (Ancona et al., 2006), and pancreatic ductal adenocarcinoma (PDAC) (Badea et al., 2008). We used the probeset-to-gene mapping provided in these studies. For the CRC and PDAC datasets, in which multiple probes were mapped to the same genes, we took the average expression level of these probes. We used the pathway information from the KEGG pathway database (www.genome.jp/kegg/pathway.html) by mapping genes from the cancer studies into the database, particularly in the categories of Environmental Information Processing, Cellular Processes, and Organismal Systems.

Diffuse large B cell lymphoma. We used gene expression profiles of 240 DLBCL patients from an uncensored study


NaNOS: lower prediction error, higher selection accuracy.

Page 27:

Results on real data

[Fig. 6: Predictive performance on three gene expression studies of cancer. Panels: (a) Diffuse large B cell lymphoma (PMSE); (b) Colorectal cancer (error rate, %); (c) Pancreatic ductal adenocarcinoma (error rate, %)]

[Fig. 7: Examples of parts of the identified pathways. (a) The antigen processing and presentation pathway for DLBCL; (b) the cell cycle pathway for CRC; (c) the TGF-β signaling pathway for PDAC. Red and black boxes indicate selected and not-selected genes, respectively.]

in the Lymphoma and Leukemia Molecular Profiling Project (Rosenwald et al., 2002). From 7399 probes, we found 752 genes and 46 pathways in the KEGG dataset. The median survival time of the patients is 2.8 years after diagnosis and chemotherapy. We used the logarithm of the survival times of the patients as the response variable in our analysis.

We randomly split the dataset into 120 training and 120 test samples 100 times and ran all the competing methods on each partition. The test performance is visualized in Figure 6(a). NaNOS significantly outperforms lasso, elastic net, and group lasso. Although the results of the LL approach can contain connected sub-networks, these sub-networks do not necessarily correspond to (part of) a biological pathway. For instance, they may consist of components from multiple overlapping pathways. In contrast, NaNOS explicitly selects relevant pathways. Four pathways had selection posterior probabilities larger than 0.95 and were consistently chosen in all 100 splits. Two of these pathways are discussed below.

First, NaNOS selected the antigen processing and presentation pathway. The part of this pathway containing selected genes is visualized in Figure 7(a). A selected regulator, CIITA, was shown to regulate two classes of antigens, MHC I and II, in DLBCL (Cycon et al., 2009). The loss of MHC II on lymphoma cells, including the selected HLA-DMB, -DQB1, -DMA, -DRA, -DRB1, -DPA1, -DPB1, and -DQA1, was shown to be related to poor prognosis and reduced survival in DLBCL patients (Rosenwald et al., 2002). The selected MHC I genes (e.g., HLA-A, -B, -C, -G) were reported to be absent from the cell surface, allowing the escape from immunosurveillance of lymphoma (Amiot et al., 1998). And the selected Ii/CD74 and HLA-DRB were proposed to be monoclonal antibody targets for DLBCL drug design (Dupire and Coiffier, 2010).

Second, NaNOS chose cell adhesion molecules (CAMs). Adhesive interactions between lymphocytes and the extracellular matrix are essential for lymphocytes' migration and homing. For example, the selected CD99 is known to be over-expressed in DLBCL and correlated with survival times (Lee et al., 2011), and LFA-1 (ITGB2/ITGAL) can bind to ICAM on the cell surface and further lead to the invasion of lymphoma cells into hepatocytes (Terol et al., 1999).

Colorectal cancer. We applied our model to a colorectal cancer dataset (Ancona et al., 2006). It contains gene expression profiles from 22 normal and 25 tumor tissues. We mapped 2455 genes from 22,283 probes into 67 KEGG pathways. The goal was to predict whether a tissue has colorectal cancer or not and to select relevant pathways and genes.

We randomly split the dataset into 23 training and 24 test samples 50 times and ran all the methods on each partition.

Page 28:

Predicting conversion from MCI to AD

[Bar chart: classification error rate (%) for ARD vs. NaNOS, using the symmetry constraint]

Page 29:

Ongoing research: sparse multiview learning

Related work:

- Sparse latent factor models (West 2003, Carvalho et al. 2008, Bhattacharya and Dunson 2011)

- Canonical correlation analysis

Page 30:

Outline

[Diagrams: a wide data matrix (p >> n) and a tall data matrix (n >> p)]

‣ EigenNet: Selecting correlated variables (p >> n)

‣ Virtual Vector Machine: Bayesian online learning (n >> p)

Page 31:

Ubiquitous data streams

Page 32:

How to handle massive data or data stream?

Online learning

Page 33:

Bayesian treatment

• Linear dynamic systems: Kalman filtering

• Nonlinear dynamic systems: Extended Kalman filtering, unscented Kalman filtering, particle filtering, assumed density filtering, online variational inference, etc.

Page 34:

Previous online learning work

• Stochastic gradient: Perceptron (Rosenblatt 1962)

• Natural gradient (Le Roux et al., 2007)

• Online Gaussian process classification (Csató & Opper, 2002)

• Passive-Aggressive algorithm (Crammer et al. 2006)

Page 35:

What has been ignored?

Previous data points

Page 36:

Idea

Summarize all previous samples by representative virtual points & dynamically update them

Page 37:

Intuition

• Many data points are not important to classification - Prune them

• Many data points are similar - Group them

Page 38:

Apply this intuition in a Bayesian framework

q(w): approximate posterior of the classifier; b_i: a virtual point in a small buffer; r(w): Gaussian residual.

Virtual vector machine, with virtual data likelihoods:

$q(\mathbf{w}) \propto r(\mathbf{w}) \prod_i f(\mathbf{b}_i; \mathbf{w}) \qquad (7)$

$r(\mathbf{w}) \sim \mathcal{N}(\mathbf{m}_r, \mathbf{V}_r) \qquad (8)$

where b_i is the i-th virtual data point and f(b_i; w) has the form of the original likelihood factors. The Gaussian r(w) is called the "residual"; it represents information from the real data that is not included in the virtual points. Because the likelihood terms f are step functions, the resulting distribution on w is a Gaussian modulated by a piecewise constant function. If ε = 0, then q(w) is a truncated multivariate Gaussian.

From this augmented representation, we can extract a fully Gaussian approximation simply by running EP over the virtual data points with prior r(w). The result will be an approximation to q(w) of the form q̃(w) ∼ N(m_w, V_w), where

$\tilde{q}(\mathbf{w}) = r(\mathbf{w}) \prod_i \tilde{f}_i(\mathbf{w}) \qquad (9)$

and f̃_i(w) is Gaussian. The Gaussian q̃ is useful as a surrogate for computations on q. For example, to classify a test point x, we use sign(m_w^T x).

To compute q(w) from a stream of data, we apply Bayesian online learning. That is, at each timestep we have a q(w) computed from the data points seen until now, and we want to update q(w) in light of the new point. Let q_new(w) be the new approximation that we are trying to find. In the spirit of assumed-density filtering, we should try to minimize a distance measure such as:

$D = \mathrm{KL}\big(q^{+}(\mathbf{w}) \,\|\, q_{\mathrm{new}}(\mathbf{w})\big) \qquad (10)$

$\text{where } q^{+}(\mathbf{w}) = q(\mathbf{w})\, f(\mathbf{x}_t; \mathbf{w}) \qquad (11)$

Note that q+ is also of the form (7), but with one additional data point. So the problem reduces to taking a distribution of the form (7) and approximating it with a distribution of the same form, having one fewer virtual point. An exact minimization of D over the parameters (B, m_r, V_r) would be too costly. Instead we apply a heuristic method to do the reduction. The first step is to run EP on q+ to get a fully Gaussian q̃+ with approximate factors f̃_i. Then we consider either evicting a virtual point or merging two points, as described below. Each of these possibilities is scored and the one with the best score is chosen to be q_new.

To score a candidate q_new, we use an easy-to-compute surrogate for D. Intuitively, we want q_new to discard the factors which can be well approximated by Gaussians, and keep the non-Gaussian information contained in q+. For example, a sharp truncation near the middle of r(w) should be kept, while a truncation in the tail of r(w) could be discarded. This motivates maximizing the non-Gaussianity of q_new. Non-Gaussianity could be measured by the divergence between q_new and q̃_new:

$D_2 = \mathrm{KL}\big(q_{\mathrm{new}}(\mathbf{w}) \,\|\, \tilde{q}_{\mathrm{new}}(\mathbf{w})\big) \qquad (12)$

This is difficult to compute exactly, but it can be approximated well by adding up the term-by-term KL divergences:

$E = \sum_i \mathrm{KL}\big( f(\mathbf{b}_i; \mathbf{w})\, \tilde{q}_{\mathrm{new}}(\mathbf{w}) / \tilde{f}_i(\mathbf{w}) \,\|\, \tilde{q}_{\mathrm{new}}(\mathbf{w}) \big) \qquad (13)$

See Appendix A for details. Because the changes we make to q+ are sparse, we can efficiently compute E for each q_new by precomputing all of the divergences from q+ to q̃+ and then recomputing the divergences only for the factors that changed. Note that this error measure assumes that the proposed q_new does not add any new information to q+. We ensure this by considering only two kinds of changes.

The VVM algorithm is summarized in Algorithm 1.

Algorithm 1: Virtual Vector Machine
1. Initialize r(w) as the prior p(w).
2. Initialize the virtual point set B to be the first B incoming data points.
3. For each incoming training data point x_t:
   a. Add x_t into B.
   b. Run EP on B and r(w) to obtain q̃+(w).
   c. Score all possible evictions of virtual points.
   d. Find the k closest pairs of virtual points. Test merging each of these pairs. For each merge, find the residual g(w) by solving an inverse ADF projection.

3.1 Eviction

In the simplest version of the algorithm, to reduce q+ you take one of the likelihoods f(b_j; w) and replace it with the Gaussian f̃_j(w) computed earlier by EP. The net effect is to remove b_j from the cache and to update the residual as follows:

$r_{\mathrm{new}}(\mathbf{w}) = r(\mathbf{w})\, \tilde{f}_j(\mathbf{w}) \qquad (14)$

An interesting property of this eviction rule is that if we run EP on q_new to get q̃_new, there will be a fixed point where all f̃_i^new(w) = f̃_i(w). To see this, note that q̃_new(w) = r_new(w) ∏_{i≠j} f̃_i(w) = q̃(w).

[Graphical model: the posterior q(w) over w, represented by the virtual points b_i together with the Gaussian residual r(w)]
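Because both the residual r(w) and the EP site approximation f̃_j(w) are Gaussian, the eviction update (14) amounts to a product of Gaussians, which is easiest to carry out in natural parameters. The sketch below is my own illustration of that bookkeeping, not the authors' code, and it assumes the resulting precision matrix stays positive definite:

```python
import numpy as np

def evict_virtual_point(m_r, V_r, site_precision, site_precision_mean):
    """Eviction step r_new(w) = r(w) * f_tilde_j(w) for Gaussian factors,
    done in natural parameters: precisions add, and so do the
    precision-weighted means.  site_precision / site_precision_mean are the
    natural parameters of the EP site approximation f_tilde_j (which may be
    low rank; here we assume the product remains a proper Gaussian)."""
    Lambda_r = np.linalg.inv(V_r)
    eta_r = Lambda_r @ m_r
    Lambda_new = Lambda_r + site_precision
    eta_new = eta_r + site_precision_mean
    V_new = np.linalg.inv(Lambda_new)
    m_new = V_new @ eta_new
    return m_new, V_new
```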

Page 39:

Dynamic update

Two operations to maintain a fixed buffer size:

• Eviction

• Merging

Page 40:

Eviction

Remove point with smallest impact on classifier posterior distribution

Page 41:

Eviction

Remove point with smallest impact on classifier posterior distribution

Point with biggest “Bayesian margin”

$\mathbf{m}_w^\top \mathbf{x} \,/\, \sqrt{\mathbf{x}^\top \mathbf{V}_w \mathbf{x}}$
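Given the Gaussian surrogate q̃(w) ∼ N(m_w, V_w), this margin is a one-line computation; a small illustrative sketch (not the original implementation):

```python
import numpy as np

def bayesian_margin(m_w, V_w, x):
    """Bayesian margin m_w^T x / sqrt(x^T V_w x) under the Gaussian
    approximation N(m_w, V_w): the posterior-mean score scaled by its
    predictive standard deviation.  Multiply by the label y in {-1, +1}
    to get the signed margin of a training point."""
    return float(m_w @ x) / np.sqrt(float(x @ V_w @ x))
```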

Page 42:

Version space (posterior space)

The subset of all hypotheses that are consistent with the observed training examples

Four data points: hyperplanes; version space: brown area

Page 43:

Deleting an unimportant point

Version space: brown area; EP approximation: red ellipse; four data points: hyperplanes

Version space with three points, after deleting the point with the largest distance to the version space

Page 44:

Merging

Merge similar points in the way that has the smallest impact on the classifier posterior distribution.


Page 46:

Merging similar points

Version space: brown area; EP approximation: red ellipse; four data points: hyperplanes

Version space with three points, after merging two similar points

Page 47:

New algorithm

• Assumed density filtering (ADF):

- Project a posterior distribution onto the exponential family given a data point

• Inverse ADF:

- Find a virtual point b′ that will lead to a desired projection obtained from two real points
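For the forward direction, the ADF (moment-matching) projection for a step-function likelihood has a standard closed form, as used in Bayes point machines. The sketch below shows that standard update, not the inverse-ADF routine of the talk; m, V, x, y are the Gaussian posterior parameters, an input point, and its label:

```python
import numpy as np
from scipy.stats import norm

def adf_step_update(m, V, x, y):
    """One ADF (moment-matching) update of a Gaussian posterior N(m, V) over
    classifier weights w, for a step-function likelihood 1[y * w^T x >= 0].
    Standard Bayes-point-machine-style update; assumes norm.cdf(z) is not
    vanishingly small."""
    Vx = V @ x
    s = float(x @ Vx)                  # predictive variance of w^T x
    z = y * float(m @ x) / np.sqrt(s)
    alpha = norm.pdf(z) / norm.cdf(z)
    m_new = m + (y * alpha / np.sqrt(s)) * Vx
    V_new = V - (alpha * (alpha + z) / s) * np.outer(Vx, Vx)
    return m_new, V_new
```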

Page 48:

Estimation accuracy

• VVM achieves a smooth trade-off between computation cost and estimation accuracy.

• ADF uses a buffer of size one and EP uses the whole sequence of 300 data points.

Page 49:

Thyroid classification

Results averaged over 10 random permutations of data

Page 50:

Email spam classification

Results averaged over 10 random permutations.
Buffer sizes: VVM 30, SOGP 143, PA 80

Page 51:

Page 52:

Conclusions

Page 53:

Summary

‣ Correlated variable selection (p >> n):

- Capture correlation between variables by eigenvectors or graph constraints

- Select both eigen-subspaces/graphs and features

- Handle both regression and classification

‣ Bayesian online learning (n >> p):

- Combine parametric approximation with online data summarization

- Lead to efficient computation and accurate estimation for nonlinear dynamic systems

Page 54:

Acknowledgment

• My group: Z. Xu, S. Zhe, Y. Guo, S. A. Naqvi, D. Runyan, X. Tan, H. Peng, Y. Yang, Y. Han, F. Yan

• Collaborators: T. P. Minka (Microsoft), S. Pyne (Harvard Medical School), G. Ceder (MIT), L. Shen (IU Medical School), A. Saykin (IU Medical School), J. P. Robinson (Purdue), K. Lee (U. of Toronto), F. Garip (Harvard), J. Wang (Eli Lilly), P. Yu (Eli Lilly), H. Neven (Google)

• Funding agencies:

- NSF: CAREER, Robust Intelligence, CDI, STC

- Microsoft, IBM, Eli Lilly

- Showalter Foundation

- Indiana Clinical & Translational Science Institute

- Purdue