A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data
Author: Gustavo E. A. Batista
Presenter: Hui Li
University of Ottawa


Page 1: A Study of the Behavior of Several Methods for Balancing Machine

A Study of the Behavior of Several Methods for Balancing Machine Learning Training Data

Author: Gustavo E. A. Batista
Presenter: Hui Li

University of Ottawa

Page 2: A Study of the Behavior of Several Methods for Balancing Machine

Contents

• Introduction
• Why Learning from Imbalanced Data Sets might be difficult?
• 10 Methods
• Experimental Evaluation
• Conclusion

Page 3: A Study of the Behavior of Several Methods for Balancing Machine

Class Imbalance Problem

• Problem
  – Class imbalance: examples in the training data belonging to one class heavily outnumber the examples in the other class.
  – Most learning systems assume the training set to be balanced.
• Result
  – Class imbalance influences the performance achieved by existing learning systems.
  – The learning system may have difficulty learning the concept related to the minority class.

Page 4: A Study of the Behavior of Several Methods for Balancing Machine

Why Learning from Imbalanced Data Sets might be difficult?

• Domains
  – Medical record databases regarding a rare disease
  – Continuous fault-monitoring tasks
• The ML community seems to agree on the hypothesis that class imbalance is the main obstacle to inducing good classifiers in imbalanced domains.

Page 5: A Study of the Behavior of Several Methods for Balancing Machine

Why Learning from Imbalanced Data Sets might be difficult? (Con’t)

• However, standard ML algorithms are capable of inducing good classifiers even on heavily imbalanced training sets, for example on the sick data set.
  – Class imbalance is not the only problem responsible for the decrease in performance of learning algorithms.
  – So, what are the other factors?

Page 6: A Study of the Behavior of Several Methods for Balancing Machine

Why Learning from Imbalanced Data Sets might be difficult? (Con’t)

• Sparse cases from the minority class may confuse a classifier such as k-nearest neighbor (k-NN)

Page 7: A Study of the Behavior of Several Methods for Balancing Machine

Motivation and Methods

• Answer: the degree of data overlapping among the classes

• Motivation
  – Balance the training data
  – Remove noisy examples lying on the wrong side of the decision border
• Methods
  – Over-sampling method: Smote
  – Data cleaning methods: Tomek links and the Edited Nearest Neighbor rule

Page 8: A Study of the Behavior of Several Methods for Balancing Machine

Methods

• Baseline methods
  – Random over-sampling
  – Random under-sampling
• Under-sampling methods
  – Tomek links
  – Condensed Nearest Neighbor rule (CNN)
  – One-sided selection
  – CNN + Tomek links
  – Neighborhood Cleaning Rule
• Over-sampling methods
  – Smote
• Combinations of an over-sampling method with an under-sampling method
  – Smote + Tomek links
  – Smote + ENN

Page 9: A Study of the Behavior of Several Methods for Balancing Machine

Baseline Methods

• Random over-sampling
  – Random replication of minority class examples
  – Can increase the likelihood of overfitting
• Random under-sampling
  – Random elimination of majority class examples
  – Can discard potentially useful data that could be important for the induction process (both baselines are sketched below)
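As an illustration of the two baselines, here is a minimal NumPy sketch; it is not from the slides, and the function names and the 1:1 balancing target are assumptions for the example.

import numpy as np

def random_oversample(X, y, minority_label, seed=None):
    # Replicate randomly chosen minority examples until the classes are balanced.
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == minority_label)
    maj_idx = np.flatnonzero(y != minority_label)
    extra = rng.choice(min_idx, size=len(maj_idx) - len(min_idx), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])   # replication can favor overfitting
    return X[keep], y[keep]

def random_undersample(X, y, minority_label, seed=None):
    # Randomly discard majority examples until the classes are balanced.
    rng = np.random.default_rng(seed)
    min_idx = np.flatnonzero(y == minority_label)
    maj_idx = np.flatnonzero(y != minority_label)
    keep_maj = rng.choice(maj_idx, size=len(min_idx), replace=False)  # may drop useful data
    keep = np.concatenate([min_idx, keep_maj])
    return X[keep], y[keep]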

Page 10: A Study of the Behavior of Several Methods for Balancing Machine

Four Groups of Negative Examples

• Noise examples
• Borderline examples
  – Borderline examples are unsafe, since a small amount of noise can make them fall on the wrong side of the decision border.
• Redundant examples
• Safe examples

Page 11: A Study of the Behavior of Several Methods for Balancing Machine

Tomek links [1]

• Goal: remove both noise and borderline examples
• Tomek link
  – Let Ei and Ej be examples belonging to different classes and d(Ei, Ej) the distance between them.
  – The pair (Ei, Ej) is called a Tomek link if there is no example El such that d(Ei, El) < d(Ei, Ej) or d(Ej, El) < d(Ei, Ej).
  – A sketch of how Tomek links can be computed is given below.
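A minimal sketch of the definition above, assuming Euclidean distance and a small data set X with labels y (the function name and the brute-force approach are my own choices). Since no El may be closer to either end of the pair, a Tomek link is simply a pair of mutual nearest neighbors with different labels:

import numpy as np

def tomek_links(X, y):
    # Pairwise Euclidean distances; infinity on the diagonal so a point
    # is never its own nearest neighbor.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)            # index of each example's 1-nearest neighbor
    links = []
    for i, j in enumerate(nn):
        # Mutual nearest neighbors with different class labels form a Tomek link.
        if nn[j] == i and y[i] != y[j] and i < j:
            links.append((i, j))
    return links

Cleaning then amounts to dropping, for each link, the majority-class example (under-sampling) or both examples (data cleaning).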

Page 12: A Study of the Behavior of Several Methods for Balancing Machine

Tomek links

Page 13: A Study of the Behavior of Several Methods for Balancing Machine

Condensed Nearest Neighbor Rule (CNN rule) [2]

• Goal: pick out points near the boundary between the classes by finding a consistent subset of the examples.
  – A subset E′ ⊆ E is consistent with E if, using a 1-nearest neighbor classifier, E′ correctly classifies the examples in E.
• Algorithm
  – Let E be the original training set.
  – Let E′ contain all positive examples from E and one randomly selected negative example.
  – Classify E with the 1-NN rule using the examples in E′.
  – Move every misclassified example from E to E′.
• Sensitive to noise: noisy examples are likely to be misclassified, so many of them will be added to the consistent subset (see the sketch below).
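A minimal single-pass sketch of the algorithm above, assuming Euclidean distance; "positive" denotes the minority class, and the function name is my own.

import numpy as np

def condensed_nn(X, y, positive_label, seed=None):
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == positive_label)
    neg = np.flatnonzero(y != positive_label)
    # E' starts with every positive example plus one random negative example.
    subset = list(pos) + [int(rng.choice(neg))]
    for i in neg:
        # Classify example i with 1-NN over the current subset E'.
        dist = np.linalg.norm(X[subset] - X[i], axis=1)
        nearest = subset[int(dist.argmin())]
        if y[nearest] != y[i]:
            subset.append(int(i))    # misclassified -> move it into E'
    return X[subset], y[subset]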

Page 14: A Study of the Behavior of Several Methods for Balancing Machine

Condensed Nearest Neighbor Rule (CNN rule)

Page 15: A Study of the Behavior of Several Methods for Balancing Machine

One-sided selection [3] vs CNN+Tomek links

• One-sided selection
  – Tomek links followed by CNN
• CNN + Tomek links
  – Proposed by the author
  – Finding Tomek links is computationally demanding, so it is cheaper to search for them on the reduced data set produced by CNN.

Page 16: A Study of the Behavior of Several Methods for Balancing Machine

Neighborhood Cleaning Rule [4]

• Goal: remove majority class examples
• Unlike OSS, it emphasizes data cleaning more than data reduction
• Algorithm (sketched below):
  – Find the three nearest neighbors of each example Ei in the training set.
  – If Ei belongs to the majority class and its three nearest neighbors classify it as minority class, remove Ei.
  – If Ei belongs to the minority class and its three nearest neighbors classify it as majority class, remove the neighbors that belong to the majority class.
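A minimal sketch of the rule as stated on this slide (Laurikkala's original formulation has additional conditions, e.g. a class-size threshold, which are omitted here; names are assumptions):

import numpy as np

def neighborhood_cleaning(X, y, minority_label):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    to_remove = set()
    for i in range(len(y)):
        nn3 = np.argsort(d[i])[:3]                       # three nearest neighbors of Ei
        vote_minority = (y[nn3] == minority_label).sum() >= 2
        if y[i] != minority_label and vote_minority:
            to_remove.add(i)                             # majority example contradicted by its neighbors
        elif y[i] == minority_label and not vote_minority:
            # Drop the majority-class neighbors that misclassify the minority example.
            to_remove.update(int(j) for j in nn3 if y[j] != minority_label)
    keep = [i for i in range(len(y)) if i not in to_remove]
    return X[keep], y[keep]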

Page 17: A Study of the Behavior of Several Methods for Balancing Machine

Smote: Synthetic Minority Over-sampling Technique [6]

• Goal: form new minority class examples by interpolating between minority class examples that lie close together.
• Operates in “feature space” rather than “data space”.
• Algorithm: for each minority class example, introduce synthetic examples along the line segments joining it to any or all of its k nearest minority class neighbors.
• Note: depending on the amount of over-sampling required, neighbors are randomly chosen from the k nearest neighbors.
• Example: with 5 nearest neighbors and 200% over-sampling, only two of the five nearest neighbors are chosen, and one sample is generated in the direction of each.

Page 18: A Study of the Behavior of Several Methods for Balancing Machine

Smote: Synthetic Minority Over-sampling Technique

• Synthetic samples are generated in the following way (a sketch follows below):
  – Take the difference between the feature vector (sample) under consideration and its nearest neighbor.
  – Multiply this difference by a random number between 0 and 1.
  – Add it to the feature vector under consideration.

Worked example: consider the sample (6, 4) and let (4, 3) be its nearest neighbor.

(6, 4) is the sample for which the k nearest neighbors are being identified; (4, 3) is one of those neighbors.

Let f1_1 = 6 and f1_2 = 4 be the features of the sample, and f2_1 = 4 and f2_2 = 3 the features of the neighbor. Then:

f2_1 - f1_1 = -2
f2_2 - f1_2 = -1

The new samples will be generated as

(f1', f2') = (6, 4) + rand(0-1) * (-2, -1)

where rand(0-1) generates a random number between 0 and 1.
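A minimal sketch of this interpolation step, reproducing the worked example above (variable names are my own):

import numpy as np

rng = np.random.default_rng(0)

sample   = np.array([6.0, 4.0])    # minority example under consideration
neighbor = np.array([4.0, 3.0])    # one of its k nearest minority neighbors

gap = rng.random()                              # rand(0-1)
synthetic = sample + gap * (neighbor - sample)  # (6,4) + gap * (-2,-1)
print(synthetic)   # always lies on the line segment between (6,4) and (4,3)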

Page 19: A Study of the Behavior of Several Methods for Balancing Machine

Smote + Tomek links

• Problem with Smote: it might introduce artificial minority class examples too deeply into the majority class space.
• Tomek links are then applied as a data cleaning step.
• Instead of removing only the majority class examples that form Tomek links, examples from both classes are removed (a usage sketch follows below).
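Assuming the third-party imbalanced-learn package is available, the combination can be exercised roughly as follows; this is an illustrative usage sketch, not part of the original slides.

from collections import Counter

from sklearn.datasets import make_classification
from imblearn.combine import SMOTETomek

# Build an artificial 95%/5% two-class problem, then over-sample with SMOTE
# and remove every example taking part in a Tomek link.
X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(Counter(y), Counter(y_res))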

Page 20: A Study of the Behavior of Several Methods for Balancing Machine

Smote + Tomek links

Page 21: A Study of the Behavior of Several Methods for Balancing Machine

Smote + ENN

• ENN removes any example whose class label differs from the class of at least two of its three nearest neighbors.
• ENN removes more examples than Tomek links do.
• ENN removes examples from both classes (a sketch of ENN is given below).
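A minimal sketch of ENN as described above, assuming three nearest neighbors under Euclidean distance (the function name is my own):

import numpy as np

def edited_nn(X, y):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    keep = []
    for i in range(len(y)):
        nn3 = np.argsort(d[i])[:3]
        # Keep the example only if it agrees with at least two of its three neighbors.
        if (y[nn3] == y[i]).sum() >= 2:
            keep.append(i)
    return X[keep], y[keep]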

Page 22: A Study of the Behavior of Several Methods for Balancing Machine

Experimental Evaluation

• 10 methods
• 13 UCI data sets with different degrees of imbalance

Page 23: A Study of the Behavior of Several Methods for Balancing Machine

First Stage

• Ran C4.5 over the original imbalanced data sets using 10-fold cross-validation

Page 24: A Study of the Behavior of Several Methods for Balancing Machine

First Stage

• Facts
  – In spite of a large degree of imbalance, the data sets Letter-a and Nursery obtained almost 100% AUC.
• Conclusions
  – Domains with non-overlapping classes do not seem to be problematic for learning, no matter the degree of imbalance.
  – But when allied to highly overlapped classes, imbalance can significantly decrease the number of minority class examples correctly classified.
  – The relationship between training set size and performance:
    • For small imbalanced data sets, when a large degree of class overlapping exists and the class is further divided into subclusters, the minority class is poorly represented by an excessively reduced number of examples.
    • For large data sets, the effect of these complicating factors seems to be reduced, since the minority class is better represented by a larger number of examples.

Page 25: A Study of the Behavior of Several Methods for Balancing Machine

Second Stage

• Applied the over-sampling and under-sampling methods to the original data sets.
• Facts
  – Pruning rarely leads to an improvement in AUC for the original and balanced data sets.
  – All best results (results in bold) were obtained by the over-sampling methods.
  – Over-sampling methods are better ranked than the under-sampling methods.
  – Random over-sampling in particular is well ranked among the remaining methods.
  – Two of the proposed methods, Smote+Tomek and Smote+ENN, are generally ranked among the best for data sets with a small number of positive examples.
• Explanation
  – The loss of performance is directly related to the lack of minority class examples in conjunction with other complicating factors.
  – Over-sampling is the approach that most directly attacks the lack of minority class examples.

Page 26: A Study of the Behavior of Several Methods for Balancing Machine

Second Stage- results for the original and over-sampled data sets

Page 27: A Study of the Behavior of Several Methods for Balancing Machine

Second Stage- results for the original and under-sampled data sets

Page 28: A Study of the Behavior of Several Methods for Balancing Machine

Second Stage- performance ranking for original and balanced data sets for pruned decision trees

• Light gray color: results obtained with over-sampling methods
• Dark gray color: results obtained with the original data sets
• Methods marked with an asterisk obtained statistically inferior results compared to the top-ranked method

Page 29: A Study of the Behavior of Several Methods for Balancing Machine

Second Stage- performance ranking for original and balanced data sets for unpruned decision trees

Page 30: A Study of the Behavior of Several Methods for Balancing Machine

Third Stage

• Goal: measure the syntactic complexity of the induced models.
• Syntactic complexity is given by two main parameters:
  – the mean number of induced rules
  – the mean number of conditions per rule
• Facts
  – Over-sampling leads to an increase in the number of induced rules compared to those induced with the original data sets.
  – Random over-sampling and Smote+ENN provide a smaller increase in the mean number of rules.
  – Smote+ENN provides a smaller increase in the mean number of conditions per rule.
• Explanation
  – Over-sampling increases the total number of training examples, which usually generates larger decision trees.

Page 31: A Study of the Behavior of Several Methods for Balancing Machine

Third Stage- mean number of induced rules

• Best results are shown in bold.
• The best results obtained by an over-sampling method are highlighted in light gray.

Page 32: A Study of the Behavior of Several Methods for Balancing Machine

Third Stage- mean number of conditions per rule

Page 33: A Study of the Behavior of Several Methods for Balancing Machine

Conclusion

• Class imbalance does not systematically hinder the performance of learning systems.
• Besides class imbalance, the degree of data overlapping among the classes is another factor that leads to the decrease in performance of learning algorithms.
• Experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve.
• Random over-sampling is very competitive with more complex over-sampling methods.
• Random over-sampling usually produced the smallest increase in the mean number of induced rules among the over-sampling methods.
• Smote+ENN produced the smallest increase in the mean number of conditions per rule among the over-sampling methods.

Page 34: A Study of the Behavior of Several Methods for Balancing Machine

References

• [1] Tomek, I. Two Modifications of CNN. IEEE Transactions on Systems, Man, and Cybernetics SMC-6 (1976), pp. 769-772.
• [2] Hart, P. E. The Condensed Nearest Neighbor Rule. IEEE Transactions on Information Theory IT-14 (1968), pp. 515-516.
• [3] Kubat, M., and Matwin, S. Addressing the Curse of Imbalanced Training Sets: One-sided Selection. In ICML (1997), pp. 179-186.
• [4] Laurikkala, J. Improving Identification of Difficult Small Classes by Balancing Class Distribution. Tech. Rep. A-2001-2, University of Tampere, 2001.
• [5] Wilson, D. L. Asymptotic Properties of Nearest Neighbor Rules Using Edited Data. IEEE Transactions on Systems, Man, and Cybernetics 2, 3 (1972), pp. 408-421.
• [6] Chawla, N. V., Bowyer, K. W., Hall, L. O., and Kegelmeyer, W. P. SMOTE: Synthetic Minority Over-sampling Technique. JAIR 16 (2002), pp. 321-357.

Page 35: A Study of the Behavior of Several Methods for Balancing Machine

• Thanks
• Questions?