CMU-Q 15-381, Lecture 24: Supervised Learning 2
Teacher: Gianni A. Di Caro

SUPERVISED LEARNING
Given a collection of input features and outputs (𝒙^(i), y^(i)), i = 1, …, m, and a hypothesis
function h_𝜽, find parameter values 𝜽 that minimize the average empirical error:

 minimize_𝜽 (1/m) ∑_{i=1}^m ℓ(h_𝜽(𝒙^(i)), y^(i))

We need to specify:
1. The hypothesis class ℋ, h_𝜽 ∈ ℋ
2. The loss function ℓ
3. The algorithm for solving the optimization problem (often approximately)
4. A complete ML design: from data processing to learning to validation and testing
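The setup above can be sketched in a few lines of Python. This is a minimal illustration, not part of the course material: the toy data, the linear hypothesis, and the learning rate are all assumptions made here for demonstration.

```python
import numpy as np

# Hypothetical toy data: m = 50 points generated as y = 3x + 1 plus noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.standard_normal(50)

# Hypothesis h_theta(x) = theta0 + theta1 * x; loss = squared error.
Xb = np.column_stack([np.ones(len(X)), X])   # prepend a constant bias feature
theta = np.zeros(2)

for _ in range(2000):                         # plain batch gradient descent
    residual = Xb @ theta - y                 # h_theta(x^(i)) - y^(i)
    grad = 2.0 / len(y) * Xb.T @ residual     # gradient of the average loss
    theta -= 0.1 * grad

mse = np.mean((Xb @ theta - y) ** 2)          # average empirical error at the minimum
```

After training, `theta` is close to the generating parameters (1, 3) and the average loss is near the noise level.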
[Diagram: labeled data (given) → hypothesis function h_𝜽, chosen from the hypotheses space ℋ → errors / performance criteria]
CLASSIFICATION AND REGRESSION

Features: Width, Lightness

Classification: (Width, Lightness) → {Salmon, Sea bass} (discrete)
 h_𝜽(𝒙): X ⊆ ℝ² → Y = {0, 1}

Regression: (Width, Lightness) → Weight (continuous)
 h_𝜽(𝒙): X ⊆ ℝ² → Y ⊆ ℝ

Which hypothesis class ℋ? [Figure: complex boundaries / relations]
PROBABILISTIC MODELS: DISCRIMINATIVE VS. GENERATIVE

Regression and classification problems can be stated in probabilistic terms (later). The mapping y = h_𝜽(𝒙) that we are learning can be naturally interpreted as the probability of the output being y given the input data 𝒙 (under the selected hypothesis h and the learned parameter vector 𝜽).

Discriminative models:
 Directly learn p(y | 𝒙)
 Parametric hypothesis
 Allow discriminating between classes / predicted outputs

Generative models / probability distributions:
 Learn p(𝒙, y), the probabilistic model that describes the data, then use Bayes' rule:
  p(y | 𝒙) = p(𝒙 | y) p(y) / p(𝒙) = p(𝒙, y) / p(𝒙)
 Allow generating any relevant data

[Scatter plot in (x₁, x₂) feature space: one marker for salmon, another for sea bass]
GENERATIVE MODELS

A generative approach would proceed as follows:
1. By looking at the feature data about salmons, build a model of a salmon
2. By looking at the feature data about sea basses, build a model of a sea bass
3. To classify a new fish based on its features 𝒙, match it against the salmon and the sea bass models, to see whether it looks more like the salmons or more like the sea basses we had seen in the training set

Steps 1, 2, 3 are equivalent to modeling p(𝒙 | y), where y ∈ {ω₁, ω₂}: the conditional probability that the observed features 𝒙 are those of a salmon or a sea bass.

A discriminative model, which learns p(y | 𝒙; 𝜽), can be used to label the data, to discriminate the data, but not to generate the data:
 o E.g., a discriminative approach tries to find out which (linear, in this case) decision boundary allows for the best classification based on the training data, and takes decisions accordingly
 o Direct learning of the mapping from X to Y
GENERATIVE MODELS

Bayes' rule:
 p(y | 𝒙) = p(𝒙 | y) p(y) / p(𝒙) = p(𝒙, y) / p(𝒙)

 p(𝒙 | y = ω₁) models the distribution of the salmon's features
 p(𝒙 | y = ω₂) models the distribution of the sea bass' features
 p(y) can be derived from the dataset or from other sources
 o E.g., p(ω₁) = ratio of salmons in the dataset, p(ω₂) = ratio of sea basses
 p(𝒙) = p(𝒙 | y = ω₁) p(y = ω₁) + p(𝒙 | y = ω₂) p(y = ω₂)

To make a prediction:
 argmax_y p(y | 𝒙) = argmax_y p(𝒙 | y) p(y) / p(𝒙) = argmax_y p(𝒙 | y) p(y)

 posterior = (likelihood × prior) / evidence

Equivalent to: decide ω₁ if p(ω₁ | 𝒙) > p(ω₂ | 𝒙), otherwise decide ω₂
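The prediction rule above can be sketched concretely. This is an illustrative example only, with assumed 1-D Gaussian class-conditional models and made-up class means (salmon lightness around 3, sea bass around 6); nothing here comes from the actual fish dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 1-D "lightness" samples for each class.
x_salmon = rng.normal(3.0, 1.0, 200)
x_bass = rng.normal(6.0, 1.0, 100)

# Priors p(y) from class frequencies in the dataset.
p1 = len(x_salmon) / 300.0
p2 = len(x_bass) / 300.0

def gauss(x, mu, s):
    """Gaussian density, used as the class-conditional model p(x|y)."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Fit each class-conditional by its sample mean and std.
mu1, s1 = x_salmon.mean(), x_salmon.std()
mu2, s2 = x_bass.mean(), x_bass.std()

def predict(x):
    # argmax_y p(x|y)p(y): decide omega1 iff p(x|w1)p(w1) > p(x|w2)p(w2)
    return 1 if gauss(x, mu1, s1) * p1 > gauss(x, mu2, s2) * p2 else 2
```

A dark fish (`predict(2.5)`) is classified as salmon (class 1), a light one (`predict(7.0)`) as sea bass (class 2).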
GENERATIVE MODELS AND BAYES DECISION RULE

Decide ω₁ if p(𝒙 | ω₁) p(ω₁) > p(𝒙 | ω₂) p(ω₂), otherwise decide ω₂

Likelihood ratio form:
 Decide ω₁ if p(𝒙 | ω₁) / p(𝒙 | ω₂) > p(ω₂) / p(ω₁), otherwise decide ω₂

[Figure: class-conditional densities and decision regions; two disconnected regions for class 2]
GENERATIVE MODELS

Given the joint distribution, we can derive any conditional or marginal probability.

Sample from p(𝒙, y) to obtain labeled data points:
 Given the priors p(y), sample a class or a predictor value
 Given the class y, sample instance data from p(𝒙 | y) for that class, or, given a predictor variable, sample an expected output

Downside: higher complexity, more parameters to learn.

Density estimation problem:
 Parametric (e.g., Gaussian densities)
 Non-parametric (full density estimation)
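The two-step sampling procedure above (prior first, then class-conditional) can be sketched as follows. The priors and the Gaussian class-conditional parameters are assumptions chosen for illustration, not values from any real dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed generative model: priors p(y) and Gaussian class-conditionals p(x|y).
priors = {"salmon": 0.7, "sea_bass": 0.3}
cond = {"salmon": (3.0, 1.0), "sea_bass": (6.0, 1.0)}  # (mean, std) per class

def sample_labeled_point():
    # 1) sample a class y from the prior p(y)
    y = rng.choice(list(priors), p=list(priors.values()))
    # 2) sample features x from the class-conditional p(x|y)
    mu, s = cond[y]
    return rng.normal(mu, s), y

# Sampling from p(x, y) yields labeled data points.
data = [sample_labeled_point() for _ in range(1000)]
frac_salmon = sum(1 for _, y in data if y == "salmon") / len(data)
```

The empirical class frequencies of the generated data match the priors, as expected.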
LET'S GO BACK TO LINEAR REGRESSION…

Linear model as hypothesis:
 y = h(𝒙; 𝒘) = w₀ + w₁x₁ + w₂x₂ + ⋯ + w_d x_d = 𝒘ᵀ𝒙, with 𝒙 = (1, x₁, x₂, ⋯, x_d)

Find 𝒘 that minimizes the deviation from the desired answers: y^(i) ≈ h(𝒙^(i)), ∀i in the dataset.

Loss function: mean squared error (MSE)
 ℓ = (1/m) ∑_{i=1}^m (y^(i) − h(𝒙^(i)))²

The model does not try to explain the variation in the observed y's for the data.
STATISTICAL MODEL FOR LINEAR REGRESSION

A statistical model of linear regression: y = 𝒘ᵀ𝒙 + ε

The model does explain the variation in the observed y's for the data, in terms of white Gaussian noise:
 ε ~ N(0, σ²)
 y ~ N(𝒘ᵀ𝒙, σ²), E[y | 𝒙] = 𝒘ᵀ𝒙

The conditional distribution of y given 𝒙:
 p(y | 𝒙; 𝒘, σ) = 1/(σ√(2π)) exp(−(y − 𝒘ᵀ𝒙)² / (2σ²))

Probability of the output being y given the predictor 𝒙.
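The conditional density above is easy to evaluate numerically. A small sketch, with assumed parameter values (`w`, `x`, `sigma` are made up for illustration), verifying that it is a proper density over y:

```python
import numpy as np

def p_y_given_x(y, x, w, sigma):
    """Gaussian conditional density N(w^T x, sigma^2) evaluated at y."""
    mu = w @ x
    return np.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Assumed parameters for illustration.
w = np.array([1.0, 2.0])       # includes the bias weight w0 on x0 = 1
x = np.array([1.0, 0.5])       # x0 = 1, x1 = 0.5  ->  mean w^T x = 2.0
sigma = 0.5

# The density peaks at the mean w^T x and integrates to 1 over y.
ys = np.linspace(-3.0, 7.0, 20001)
dy = ys[1] - ys[0]
area = np.sum(p_y_given_x(ys, x, w, sigma)) * dy
```

The numerical integral over y is ≈ 1, and the density at the mean exceeds the density away from it.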
STATISTICAL MODEL FOR LINEAR REGRESSION

Let's consider the entire dataset 𝔇, and let's assume that all samples are independent and identically distributed (i.i.d.) random variables.

What is the joint probability of all training data? That is, the probability of observing all the outputs y in 𝔇 given 𝒘 and σ?
 p(y^(1), y^(2), ⋯, y^(m) | 𝒙^(1), 𝒙^(2), ⋯, 𝒙^(m); 𝒘, σ)

By i.i.d.:
 p(y^(1), ⋯, y^(m) | 𝒙^(1), ⋯, 𝒙^(m); 𝒘, σ) = ∏_{i=1}^m p(y^(i) | 𝒙^(i); 𝒘, σ)

Likelihood function of the predictions, the probability of observing the outputs y in 𝔇 given 𝒘 and σ:
 L(𝔇, 𝒘, σ) = ∏_{i=1}^m p(y^(i) | 𝒙^(i); 𝒘, σ)

Maximum likelihood estimation of the parameters 𝒘: the parameter values maximizing the likelihood of the predictions, i.e., the values such that the probability of observing the data in 𝔇 is maximized:
 𝒘* = argmax_𝒘 L(𝔇, 𝒘, σ)
STATISTICAL MODEL FOR LINEAR REGRESSION

Log-likelihood:
 l(𝔇, 𝒘, σ) = log L(𝔇, 𝒘, σ) = log ∏_{i=1}^m p(y^(i) | 𝒙^(i); 𝒘, σ) = ∑_{i=1}^m log p(y^(i) | 𝒙^(i); 𝒘, σ)

Using the conditional density p(y | 𝒙; 𝒘, σ) = 1/(σ√(2π)) exp(−(y − 𝒘ᵀ𝒙)² / (2σ²)):

 l(𝔇, 𝒘, σ) = ∑_{i=1}^m log [ 1/(σ√(2π)) exp(−(y^(i) − 𝒘ᵀ𝒙^(i))² / (2σ²)) ]
  = ∑_{i=1}^m [ −(1/(2σ²)) (y^(i) − 𝒘ᵀ𝒙^(i))² ] + c(σ)
  = −(1/(2σ²)) ∑_{i=1}^m (y^(i) − 𝒘ᵀ𝒙^(i))² + c(σ)

where c(σ) = −m log(σ√(2π)) does not depend on 𝒘.

Does it look familiar? Maximizing the predictive log-likelihood with respect to 𝒘 is equivalent to minimizing the MSE loss function:
 max_𝒘 l(𝔇, 𝒘, σ) ~ min_𝒘 MSE

More generally, a least-squares linear fit under Gaussian noise corresponds to the maximum likelihood estimator of the data.
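The equivalence can be checked numerically: fit 𝒘 by least squares and verify that no perturbation of it gives a higher Gaussian log-likelihood. The synthetic data and noise level below are assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 100
X = np.column_stack([np.ones(m), rng.uniform(-1, 1, m)])  # x0 = 1 bias column
w_true = np.array([1.0, 2.0])
sigma = 0.3
y = X @ w_true + sigma * rng.standard_normal(m)           # y = w^T x + eps

# Least-squares fit: minimizes the sum of squared residuals (i.e., MSE).
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

def log_likelihood(w):
    """Gaussian log-likelihood of the outputs given w and sigma."""
    r = y - X @ w
    return -m * np.log(sigma * np.sqrt(2 * np.pi)) - np.sum(r ** 2) / (2 * sigma ** 2)

ll_ls = log_likelihood(w_ls)
```

Any move away from the least-squares solution increases the sum of squares and therefore strictly decreases the log-likelihood, as the derivation above predicts.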
NON-LINEAR, ADDITIVE REGRESSION MODELS

NON-LINEAR PROBLEMS?

 Design a non-linear regressor / classifier
 Modify the input data to make the problem linear
MAP DATA INTO HIGHER-DIMENSIONAL FEATURE SPACES

Because the SVM solution is expressed in terms of dot products between feature vectors, it is easy to define a kernel function that implicitly performs the desired transformation, allowing us to keep using linear classifiers.

The hyperplane is found in 𝒛-space, then projected back into 𝒙-space, where it is an ellipse.
NON-LINEAR, ADDITIVE REGRESSION MODELS

Main idea to model nonlinearities: replace the inputs to the linear units with b feature (basis) functions φ_j(𝒙), j = 1, ⋯, b, where φ_j(𝒙) is an arbitrary function of 𝒙:

 y = h(𝒙; 𝒘) = w₀ + w₁φ₁(𝒙) + w₂φ₂(𝒙) + ⋯ + w_b φ_b(𝒙) = 𝒘ᵀ𝝓(𝒙)

[Diagram: original feature input 𝒙 → b basis functions φ₁, …, φ_b → new input to a linear model]
EXAMPLES OF FEATURE FUNCTIONS

Higher-order polynomial with one-dimensional input, 𝒙 = (x):
 φ₁(𝒙) = x, φ₂(𝒙) = x², φ₃(𝒙) = x³, ⋯

Quadratic polynomial with two-dimensional inputs, 𝒙 = (x₁, x₂):
 φ₁(𝒙) = x₁, φ₂(𝒙) = x₁², φ₃(𝒙) = x₂, φ₄(𝒙) = x₂², φ₅(𝒙) = x₁x₂

Transcendental functions:
 φ₁(𝒙) = sin(x), φ₂(𝒙) = cos(x)
 …
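The quadratic basis for two-dimensional inputs can be written out directly. A minimal sketch; the function name `quadratic_features` is just a label chosen here:

```python
import numpy as np

def quadratic_features(x):
    """Quadratic basis for a 2-D input x = (x1, x2):
    phi(x) = (x1, x1^2, x2, x2^2, x1*x2)."""
    x1, x2 = x
    return np.array([x1, x1 ** 2, x2, x2 ** 2, x1 * x2])

phi = quadratic_features((2.0, 3.0))
```

For the input (2, 3) this yields the feature vector (2, 4, 3, 9, 6).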
SOLUTION USING FEATURE FUNCTIONS

The same techniques (analytical gradient + system of equations, or gradient descent) used for the plain linear case with MSE as the loss function apply, with:
 𝝓(𝒙^(i)) = (1, φ₁(𝒙^(i)), φ₂(𝒙^(i)), ⋯, φ_b(𝒙^(i)))
 h(𝒙^(i); 𝒘) = w₀ + w₁φ₁(𝒙^(i)) + w₂φ₂(𝒙^(i)) + ⋯ + w_b φ_b(𝒙^(i)) = 𝒘ᵀ𝝓(𝒙^(i))
 ℓ = (1/m) ∑_{i=1}^m (y^(i) − h(𝒙^(i)))²

To find min_𝒘 ℓ we have to look where ∇_𝒘 ℓ = 0:
 ∇_𝒘 ℓ = −(2/m) ∑_{i=1}^m (y^(i) − h(𝒙^(i))) 𝝓(𝒙^(i)) = 𝟎

This results in a system of b + 1 linear equations, one per weight (with φ₀(𝒙) = 1):
 w₀ ∑_{i=1}^m φ₀(𝒙^(i)) φ_j(𝒙^(i)) + w₁ ∑_{i=1}^m φ₁(𝒙^(i)) φ_j(𝒙^(i)) + ⋯ + w_b ∑_{i=1}^m φ_b(𝒙^(i)) φ_j(𝒙^(i)) = ∑_{i=1}^m y^(i) φ_j(𝒙^(i)), ∀j = 0, ⋯, b
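In matrix form, the system above is the normal equations Φᵀ Φ 𝒘 = Φᵀ 𝒚, where row i of the design matrix Φ is 𝝓(𝒙^(i)). A small sketch with assumed synthetic data and a polynomial basis φ_j(x) = xʲ:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1, 1, 60)
# Assumed cubic ground truth with small noise.
y = 1.0 - 2.0 * x + 0.5 * x ** 3 + 0.05 * rng.standard_normal(60)

# Design matrix: Phi[i, j] = phi_j(x^(i)), with phi_0 = 1 and phi_j(x) = x^j.
b = 3
Phi = np.column_stack([x ** j for j in range(b + 1)])

# Normal equations Phi^T Phi w = Phi^T y: solves all b+1 weights at once.
w = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
mse = np.mean((y - Phi @ w) ** 2)
```

The fitted weights recover the generating coefficients and the MSE drops to roughly the noise level.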
EXAMPLE OF SGD WITH FEATURE FUNCTIONS

One-dimensional feature vectors and a high-order polynomial: 𝒙 = (x), φ_i(𝒙) = xⁱ
 h(𝒙; 𝒘) = w₀ + w₁φ₁(𝒙) + w₂φ₂(𝒙) + ⋯ + w_b φ_b(𝒙) = w₀ + ∑_{i=1}^b w_i xⁱ

On-line, single-sample (𝒙^(i), y^(i)) gradient update, ∀j = 1, ⋯, b:
 w_j ← w_j + α (y^(i) − h(𝒙^(i))) φ_j(𝒙^(i))

Same form as in the linear regression model, with x_j^(i) → φ_j(𝒙^(i))
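The single-sample update rule above can be sketched for a quadratic basis. The data, degree, learning rate, and epoch count are assumptions chosen for this toy run:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 200)
y = 0.5 + 1.5 * x - x ** 2 + 0.05 * rng.standard_normal(200)  # assumed quadratic truth

b = 2                       # basis size: phi_j(x) = x^j for j = 0..b
w = np.zeros(b + 1)
alpha = 0.05                # learning rate

def phi(xi):
    return np.array([xi ** j for j in range(b + 1)])  # (1, x, x^2)

for _ in range(200):                     # epochs over the dataset
    for xi, yi in zip(x, y):
        p = phi(xi)
        # single-sample update: w_j <- w_j + alpha * (y - h(x)) * phi_j(x)
        w += alpha * (yi - w @ p) * p

mse = np.mean([(yi - w @ phi(xi)) ** 2 for xi, yi in zip(x, y)])
```

The online updates drive the weights close to the generating coefficients (0.5, 1.5, −1), without ever forming or solving the full system of equations.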
ELECTRICITY EXAMPLE

New data: it doesn't look linear anymore.
NEW HYPOTHESIS

The complexity of the model grows: one parameter for each feature transformed according to a polynomial of order 2 (at least 3 parameters vs. the 2 of the original hypothesis).

NEW HYPOTHESIS

At least 5 parameters (if we had multiple predicting features, all their order-d products should be considered, resulting in a number of additional parameters).

NEW HYPOTHESIS

The number of parameters is now larger than the number of data points, so that the polynomial can fit the data almost exactly → Overfitting
SELECTING MODEL COMPLEXITY

Dataset with 10 points, 1-D features: which hypothesis class should we use?
 Linear regression: y = h(x; 𝒘) = w₀ + w₁x
 Polynomial regression, cubic: y = h(x; 𝒘) = w₀ + w₁x + w₂x² + w₃x³
 MSE for the loss function

Which model would give the smaller error in terms of MSE / least-squares fit?

SELECTING MODEL COMPLEXITY

Cubic regression provides a better fit to the data, and a smaller MSE.
 Should we stick with the hypothesis h(x; 𝒘) = w₀ + w₁x + w₂x² + w₃x³?
 Since a higher-order polynomial seems to provide a better fit, why don't we use a polynomial of order higher than 3?
 What is the highest order that makes sense for the given problem?
SELECTING MODEL COMPLEXITY

For 10 data points, a degree-9 polynomial gives a perfect fit (Lagrange interpolation). The training error is zero.
 Is it always good to minimize (even reduce to zero) the training error?
 Related (and more important) question: how will we perform on new, unseen data?
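The perfect-fit claim is easy to verify: a degree-9 polynomial has 10 coefficients, exactly enough to interpolate 10 points. The underlying sine-plus-noise data is an assumption made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(-1, 1, 10)                                  # 10 sample points
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(10)   # noisy targets

# Degree-9 polynomial: 10 coefficients for 10 points -> exact interpolation.
coeffs = np.polyfit(x, y, 9)
train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
```

The training MSE is zero up to numerical precision; the interpolant, however, oscillates wildly between the sample points, which is exactly the overfitting discussed next.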
OVERFITTING

The degree-9 polynomial model totally fails the prediction for the new point!

Overfitting: the situation where the training error is low and the generalization error is high. Causes of the phenomenon:
 A highly complex hypothesis model, with a large number of parameters (degrees of freedom)
 A small data size (compared to the complexity of the model)
The learned function has enough degrees of freedom to (over)fit all data perfectly.
OVERFITTING

Empirical loss vs. generalization loss

TRAINING AND VALIDATION LOSS

SPLITTING THE DATASET IN TWO

PERFORMANCE ON VALIDATION SET

PERFORMANCE ON VALIDATION SET
INCREASING MODEL COMPLEXITY
In this case, the small size of the dataset favors an easy overfitting by
increasing the degree of the polynomial (i.e., hypothesis complexity). For
a large multi-dimensional dataset this effect is less strong / evident
35
TRAINING VS. VALIDATION LOSS
36
MODEL SELECTION AND EVALUATION PROCESS
1. Break all available data into training and testing sets (e.g., 70% / 30%)
2. Break training set into training and validation sets (e.g., 70% / 30%)
3. Loop:
i. Set a hyperparameter value (e.g., degree of polynomial → model complexity)
ii. Train the model using training sets
iii. Validate the model using validation sets
iv. Exit loop if (validation errors keep growing && training errors go to zero)
4. Choose hyperparameters using validation set results: hyperparameter values
corresponding to lowest validation errors
5. (Optional) With the selected hyperparameters, retrain the model using all training
data sets
6. Evaluate (generalization) performance on the testing sets
(more on this next time)
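The steps above can be sketched end to end for polynomial degree selection. The dataset (a noisy sine), the split sizes, and the degree range are assumptions for this illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)

# Steps 1-2: 70/30 train/test split, then split training into internal-train/validation.
idx = rng.permutation(200)
test, train = idx[:60], idx[60:]
val, inner = train[:42], train[42:]

def fit(deg, i):
    """Least-squares polynomial fit on the points indexed by i."""
    return np.polyfit(x[i], y[i], deg)

def mse(c, i):
    return np.mean((np.polyval(c, x[i]) - y[i]) ** 2)

# Steps 3-4: sweep the hyperparameter (degree), pick the lowest validation error.
val_err = {d: mse(fit(d, inner), val) for d in range(1, 10)}
best = min(val_err, key=val_err.get)

# Steps 5-6: retrain on all training data, evaluate on the held-out test set.
test_err = mse(fit(best, train), test)
```

A linear fit underfits sin(2x), so the validation sweep settles on a higher-degree model, and the test error stays near the noise level.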
MODEL SELECTION AND EVALUATION PROCESS

[Diagram: Dataset → split into Training set and Testing set; Training set → split into Internal training set and Validation set; Models 1, 2, …, n are each learned on the internal training set (Learn 1, …, Learn n) and validated on the validation set (Validate 1, …, Validate n); the best model is selected (Model *) and relearned (Learn *) on the full training set]