

Journal of Machine Learning Research 22 (2021) 1-66 Submitted 6/19; Revised 8/20; Published 1/21
A Unified Sample Selection Framework for Output Noise Filtering: An Error-Bound Perspective
Gaoxia Jiang [email protected]
School of Computer and Information Technology
Shanxi University
Taiyuan, 030006, PR China

Wenjian Wang* [email protected]
School of Computer and Information Technology, Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education
Shanxi University
Taiyuan, 030006, PR China

Yuhua Qian [email protected]
Institute of Big Data Science and Industry, Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education
Shanxi University
Taiyuan, 030006, PR China

Jiye Liang [email protected]
Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education
Shanxi University
Taiyuan, 030006, PR China

Editor: Isabelle Guyon
Abstract
The existence of output noise brings difficulties to supervised learning. Noise filtering, aiming to detect and remove polluted samples, is one of the main ways to deal with noise on outputs. However, most filters are heuristic and cannot explain the filtering influence on the generalization error (GE) bound. The hyper-parameters in various filters are specified manually or empirically, and they are usually unable to adapt to the data environment. A filter with an improper hyper-parameter may overclean, leading to a weak generalization ability. This paper proposes a unified framework of optimal sample selection (OSS) for output noise filtering from the perspective of the error bound. The covering distance filter (CDF) under the framework is presented to deal with noisy outputs in regression and ordinal classification problems. Firstly, two necessary and sufficient conditions for a fixed goodness of fit in regression are deduced from the perspective of the GE bound. They provide the unified theoretical framework for determining the filtering effectiveness and optimizing the size of removed samples. The optimal sample size adapts to environmental changes in the sample size, the noise ratio, and the noise variance. It offers a choice for tuning the hyper-parameter and can prevent filters from overcleansing. Meanwhile, the OSS framework can be integrated with any noise estimator and produces a new filter. Then the covering interval is proposed to separate low-noise and high-noise samples, and its effectiveness is proved in regression. The covering distance is introduced as an unbiased estimator of high noises. Further, the CDF algorithm is designed by integrating the covering distance with the OSS framework. Finally, it is verified that the CDF not only recognizes noisy labels correctly but also brings down the prediction errors on a real apparent age data set. Experimental results on benchmark regression and ordinal classification data sets demonstrate that the CDF outperforms state-of-the-art filters in terms of prediction ability, noise recognition, and efficiency.

*. Corresponding author.

©2021 Gaoxia Jiang, Wenjian Wang, Yuhua Qian and Jiye Liang.

License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v22/19-498.html.
Keywords: output noise, generalization error bound, optimal sample selection, covering distance filtering, supervised learning
1. Introduction
In real-world applications, the performance of a learning model highly depends on data quality, and output noise has a negative impact on data quality. It might result in an unreasonable model and inaccurate predictions if the noise is ignored. From the perspective of statistical learning theory, the bound of the generalization error (GE), or risk, can be expressed as a function that increases with the empirical risk (Bartlett and Mendelson, 2007; Rigollet, 2007; Oneto et al., 2015; Zhang et al., 2017). However, the true empirical risk might be underestimated due to noisy outputs or labels in supervised learning, and then the GE bound will be raised to some extent.
The outputs are usually polluted due to insufficient information, inexperienced labelers, errors in encoding or communication processes, etc. (Wang et al., 2018; Han et al., 2019). The presence of noise may lead to various consequences, such as weakening the generalization ability of models (Tian and Zhu, 2015; Han et al., 2019), increasing the complexity of models (Sluban et al., 2014; Segata et al., 2010), and interfering with feature selection (Shanab et al., 2012; Gerlach and Stamey, 2007). The noise on the output is considered to be more important than that on the input because there are many features but only one output (or few outputs in multi-label learning), which has a decisive impact on learning (Frenay and Verleysen, 2014). The output noise denotes mislabeling in classification; in regression, it means that the observed output deviates from the true output.
Noise-robust models and noise filtering are the dominant techniques for dealing with output noise. Models that are robust to label noise can be constructed by robust losses, sample weighting, and ensemble methods (Patrini et al., 2017; Shu et al., 2019; Sabzevari et al., 2018). Deep learning and transfer learning have also been verified to be effective in learning with label noise (Li et al., 2018; Lee et al., 2018). Research indicates that many losses are not completely robust to label noise, and the performance of noise-robust models is still influenced by output noise (Nettleton et al., 2010; Yao et al., 2018).
Noise filtering is a popular way to deal with output noise by removing noisy samples (Frenay and Verleysen, 2014). From the perspective of the output type, filtering algorithms are mainly designed for output noise in classification, and some of them are extended to the regression scenario. From the perspective of design, most filters are based on nearest neighbors or ensemble learning. The main idea of a nearest neighbor-based filter is that a label is likely to be noisy if it differs from its neighbors' (Cao et al., 2012; Saez et al., 2013). To obtain a more reliable detection result, ensemble-based filters employ base models to vote for each sample according to whether their predictions are consistent with
the real label (Khoshgoftaar and Rebours, 2007; Sluban et al., 2010; Yuan et al., 2018). In addition, there are a few filters based on model complexity (Gamberger et al., 1999; Sluban et al., 2014).
Although various filters have been proposed to detect noisy samples, they may also make wrong recognitions and remove many noise-free samples (Frenay and Verleysen, 2014; Garcia et al., 2015). Then the original data distribution may be destroyed, and the prediction ability of models trained on the filtered set might be weakened to some extent. Besides, most of the filters are heuristic and lack theoretical foundations. Moreover, filters for regression have not yet been widely studied because the noise problem is more complex than that in classification. Specifically, the number of values to be predicted in classification is usually very low, whereas the output variable in regression is continuous, such that the number of possible values to predict is unlimited (Kordos and Blachnik, 2012; Kordos et al., 2013).
1.1 Related Work
Noise filtering approaches can be classified into nearest neighbor-based filters, ensemble-based filters, and model complexity-based filters.
As the k-nearest neighbor (kNN) classifier is sensitive to label noise (Garcia et al., 2012), various nearest neighbor-based filters, such as the edited nearest neighbor (ENN), all kNN (ANN) (Cao et al., 2012), and mutual nearest neighbors (MNN), were presented (Barandela and Gasca, 2000; Liu and Zhang, 2012). Samples misclassified by kNN are regarded as noises in ENN, and the noise recognition procedure is repeated for k = 1, ..., K in ANN. A sample x_1 is a mutual nearest neighbor of x_2 if x_1 belongs to the k nearest neighbors of x_2 and, at the same time, x_2 is one of the k nearest neighbors of x_1. All samples with an empty MNN set are deleted from the original data set.
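As an illustration of the MNN criterion described above, the following sketch keeps only samples whose mutual-nearest-neighbor set is non-empty. It uses scikit-learn's NearestNeighbors; the function name and the choice k = 5 are illustrative assumptions rather than the configuration of the cited works.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def mnn_filter(X, y, k=5):
        """Keep only samples whose mutual-nearest-neighbor (MNN) set is non-empty."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        # kneighbors returns each query point itself in the first column (assuming
        # distinct samples), so it is dropped to obtain the k nearest neighbors.
        neighbors = nn.kneighbors(X, return_distance=False)[:, 1:]
        keep = []
        for i in range(len(X)):
            # x_j is a mutual nearest neighbor of x_i if each is in the other's kNN set.
            mutual = [j for j in neighbors[i] if i in neighbors[j]]
            if mutual:                      # empty MNN set -> sample is removed
                keep.append(i)
        keep = np.asarray(keep)
        return X[keep], y[keep]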
Ensemble-based filters employ various classifiers to vote for samples. The removal criterion has two choices: a majority vote and a consensus vote (Frenay and Verleysen, 2014). The majority vote classifies a sample as mislabeled if a majority of the classifiers misclassify it, while the consensus vote requires that all classifiers misclassify it. Classification filtering (CF) may be the simplest among them: a cross-validation procedure is executed on the data set for a given model, and misclassified samples are recognized as noises by CF. The noise is determined by multiple classifiers in the majority voting filter (MVF) (Brodley and Friedl, 1999). The iterative-partitioning filter (IPF) partitions a data set into several subsets and removes samples by the ensemble criterion (Khoshgoftaar and Rebours, 2007); this process is repeated on the filtered data set until the number of samples removed in three consecutive iterations is less than a threshold. Samples are recognized as noises by the high agreement random forest (HARF) if the predictions of all decision trees do not reach a high agreement level (Sluban et al., 2010). In order to improve the prediction performance of the base classifier, the iterative noise filter based on the fusion of classifiers (INFFC) builds base models on a data set with fewer noises and assigns scores to all misclassified samples (Saez et al., 2016); samples with scores beyond a threshold are deleted by INFFC. In the probabilistic sampling filter, clean samples have more chances of being selected than noisy ones, and the degree of cleanliness is measured by the label confidence (Yuan et al., 2018).
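A minimal sketch of the classification filtering (CF) idea: samples misclassified under cross-validation are flagged as noisy. The use of scikit-learn's cross_val_predict with a decision tree as the base model is an illustrative assumption, not the setup of the cited works.

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.tree import DecisionTreeClassifier

    def classification_filter(X, y, n_folds=5):
        """Classification filtering (CF): flag cross-validation misclassifications as noise."""
        preds = cross_val_predict(DecisionTreeClassifier(), X, y, cv=n_folds)
        noisy = preds != y                  # misclassified samples are treated as noisy
        return X[~noisy], y[~noisy], np.where(noisy)[0]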
It is known that the complexity of a model is likely to rise if noise is added to the training set. Conversely, it might become lower if some noisy samples are removed.
The saturation-based filter (SF) exhaustively looks for examples whose removal yields the maximum reduction of complexity (Gamberger et al., 1999). However, it takes a lot of time to complete the SF procedure because of the huge search space and the complicated calculation of model complexity. The prune saturation filter simplifies SF by replacing the complexity indicator with the number of nodes of a random forest, and all decision trees are pruned before saturation filtering (Sluban et al., 2014).
The above filters have been empirically compared in relevant studies. The results of Saez et al. (2013) indicate that there is a notable relationship between the characteristics of the data and the efficacy of filters. Li et al. (2016) show that filters can significantly reduce the noise ratio and enhance the model's generalization ability, and confirm that CF, MVF, and IPF outperform ENN and ANN. Sluban et al. (2014) and Garcia et al. (2016) examined the performance of ensemble filters. The former's results show that ensemble filters usually have better precision than elementary filters. The latter studied many possible ensembles and found that the use of ensembles increases the predictive performance in noise identification, especially for the ensemble of HARF and MVF. However, ensemble filters are prone to overcleansing and have a higher computational cost than elementary filters.
Filtering for regression has received less attention than filtering for classification. Inspired by feature selection, a filter based on mutual information (MI) was presented to decide which samples should belong to the training data set (Guillen et al., 2010), but its performance is not satisfactory and it takes a huge amount of time. The ENN evolved into a new version, ENN for regression (RegENN), which removes a sample if its neighbors' outputs are far away from the prediction of a model trained on all samples except the sample itself (Kordos et al., 2013). Another evolution is named the discrete ENN (DiscENN) (Arnaiz-Gonzalez et al., 2016): it transforms the continuous output into classes by discretizing the numerical output variable, and then a filter for classification is applied to the transformed data set.
Existing filters for classification or regression focus on the noise (value or probability) estimation by means of various data partitions, models, and ensemble strategies. However, two basic problems are often ignored or solved only intuitively. (1) The influence of filtering on the GE bound. Many filters may be good at estimating the noise, but they do not explain the influence of filtering on the generalization ability; there is no theoretical guarantee that a filter can reduce the GE bound. (2) The adaptive sample selection problem. It mainly refers to how to regulate the number of kept or removed samples according to the noise environment. Specifically, the filtering thresholds are empirically selected from simulated results in many filters, including HARF, INFFC, MI, and RegENN, so filtering with the suggested threshold may be unsuitable for a new noisy data set.
1.2 Summary of Contributions
From the perspective of generalization error (GE) bound, three essential problems need to be answered in output noise filtering.
1. Does a filter work, that is, can it reduce the GE bound?
2. How many samples should be removed so as to obtain the least GE bound?
3. Which samples should be removed?
The first two problems are often ignored or discussed only empirically in related work. We propose a unified framework of optimal sample selection (OSS) for effectiveness determination and optimal sample selection in output noise filtering from the perspective of the GE bound. A noise estimator and a novel filter are presented to deal with noisy outputs in regression and ordinal classification problems. In summary, the main contributions of this paper are as follows.
• For Problem 1, the decision rule is presented to decide whether a filter could reduce the GE bound. It implies that not all filters are beneficial to the generalization ability although they could reduce the noise level. The decision rule applies to any filter when the noise estimation and model errors are prepared.
• For Problem 2, the proposed OSS framework provides the optimal sample size for filtering so as to obtain the least GE bound. It implies that not all noisy samples need to be removed, even when the noises have been estimated almost accurately. Essentially, the optimal sample size is the result of a comprehensive balance of multiple factors, including the sample size, model errors, and the noise level. Hence the optimal sample size has the adaptability to environmental changes in the sample size, the noise ratio, and the noise variance. Meanwhile, the OSS framework can be integrated with any noise estimator and then produces a new filter. It offers a choice for tuning the filtering threshold and can prevent filters from overcleansing.
• For Problem 3, the proposed OSS framework suggests that samples with larger noises should be removed first. Further, high-noise and low-noise samples can be broadly separated by the covering interval, and the covering distance (CD) provides a practical estimation of symmetric output noise. The characteristics of the CD estimator, including unbiasedness, absolute and relative deviations, are explored under popular noise distributions. Then the covering distance filtering (CDF) approach is proposed by integrating the CD with the OSS framework.
• It is empirically verified that the covering interval and the CD estimator are also applicable to asymmetric mixed noise. Experimental results indicate that the proposed CDF filter is effective in regression and ordinal classification problems, and it outperforms the state-of-the-art filters in terms of noise recognition, prediction ability, and efficiency. In the real problem of apparent age estimation, the CDF algorithm not only recognizes noisy age labels correctly but also brings down the prediction errors.
1.3 Organization
The rest of this paper is organized as follows. Section 2 describes the unified filtering framework in determining the effectiveness and optimizing the sample size from the perspective of the GE bound in regression. The properties and applications are analyzed subsequently. In Section 3, the covering interval is introduced to separate low-noise samples from high-noise ones. The covering distance is proposed for estimating the output noise, and it is integrated with the OSS framework, generating the CDF filtering algorithm. All the proofs of Section 2 and Section 3 can be found in the appendix. In Section 4, the CDF filter is applied to a real apparent age data set in order to identify noisy age labels and improve the prediction
ability. Section 5 shows the results of numerical experiments on benchmark data sets in regression and ordinal classification, and Section 6 concludes.
2. Optimal Sample Selection Framework for Noise Filtering
A general structural regression problem can be denoted by D = {(x_i, y_i), i = 1, 2, ..., n}, where x_i, y_i are the input and output of the i-th sample or instance, respectively. If output noise exists in the problem, y_i may be unequal to the true output y_i^0. Let y = m(x) be a model trained on data set D.
Definition 1 (Noise) The noise on the i-th sample
e_i = y_i − y_i^0.   (1)
Definition 2 (Error) The model error (or residual) on the i-th sample
r_i = m(x_i) − y_i.   (2)
Let D_F be the filtered data set from D. The size n_F of the filtered data set is less than n. y = m_F(x) denotes the model trained on data set D_F.
Definition 3 (Relative size) The relative size of the filtered data set to the initial one is defined as
ρ = n_F / n.   (3)
If all noises have been well estimated, they can be sorted by their absolute values. An intuitive idea is to remove samples whose noises exceed a threshold. Indeed, the threshold corresponds to a relative size: in other words, the filtered data set with relative size ρ consists of nρ samples with (estimated) noises less than the threshold.
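The correspondence between a noise threshold and the relative size ρ can be sketched as follows; est_noise stands for any vector of estimated noises (a hypothetical input, produced for example by the covering distance of Section 3).

    import numpy as np

    def select_by_relative_size(est_noise, rho):
        """Keep the n * rho samples with the smallest estimated absolute noise.

        est_noise : array of estimated noises e_i for all n samples
        rho       : relative size of the filtered set, 0 < rho <= 1
        Returns the indices of the kept samples (the filtered set D_F).
        """
        n = len(est_noise)
        n_keep = int(round(n * rho))
        order = np.argsort(np.abs(est_noise))   # ascending: low-noise samples first
        return np.sort(order[:n_keep])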
2.1 Main Results for Sample Selection
2.1.1 Effective Noise Filtering
This subsection analyzes the necessary and sufficient condition of effective noise filtering which brings down the generalization error (GE) bound by removing noisy samples.
Lemma 1 (Cherkassky et al., 1999) For regression problems with squared loss, the following GE bound holds with probability 1 − η:

R(m, D) ≤ R_emp(m, D) · ε(h, n, η),   (4)

where R(m, D) is the prediction risk of learner m trained on data set D, the empirical risk is

R_emp(m, D) = (1/n) Σ_{i=1}^{n} [m(x_i) − y_i]^2,   (5)

and the penalty factor ε(h, n, η) defined in (6) is a function of the VC-dimension h, the sample size n, and the probability η.
Let ε(D) = ε(h, n, η) and ε(DF ) = ε(h, nF , η). The following theorem provides a necessary and sufficient condition of effective noise filtering, and it can serve as a decision rule for effective noise filtering.
Theorem 1 (A necessary and sufficient condition for effective noise filtering) The model trained on D_F has a lower GE bound than that on D if and only if

E_{D_F}(e_i^2) / E_D(e_i^2) < [ε(D) / ε(D_F)] (1 + C) − C   (7)

holds for a fixed goodness of fit (1 − Σ_i [m(x_i) − y_i]^2 / Σ_i (y_i − Σ_i y_i / n)^2), where the coefficient

C = E_D(r_i^2) / E_D(e_i^2).   (8)

It means that the ratio of noise levels E_{D_F}(e_i^2) / E_D(e_i^2) is required to be under a corresponding bound for effective noise filtering. Conversely, the model trained on the filtered data set will have a higher risk. The proof of Theorem 1 can be found in Appendix A.1.
For simplicity, two definitions are given based on (7).
Definition 4 (Ratio of noise levels and its bound) The ratio of noise levels with respect to the relative size ρ is defined as

T(ρ) = E_{D_F}(e_i^2) / E_D(e_i^2),   (10)

and the (upper) bound of T(ρ) is defined by

B_T(ρ) = [ε(D) / ε(D_F)] (1 + C) − C,   (11)

where ρ = #{D_F} / #{D} = n_F / n.
According to Theorem 1, the relationship between the ratio of noise levels and its bound decides whether a filter works or whether it could reduce the GE bound.
2.1.2 Optimal Sample Selection
This subsection proposes a necessary and sufficient condition for the optimal sample selection, which minimizes the GE bound of the model trained on the filtered data set.
Theorem 2 (Optimal sample selection for effective noise filtering) For a fixed goodness of fit in regression,

min R_emp(m_F, D_F) · ε(D_F) ⇔ max [B_T(ρ) − T(ρ)] · ε(D_F),   (12)
where the three components are defined in (9), (10) and (11).
It indicates that the GE bound after filtering depends on the margin between T(ρ) and its bound B_T(ρ), and on the coefficient ε(D_F). The proof of Theorem 2 can be found in Appendix A.2. Note that the error bound in Lemma 1 holds for an arbitrary probability η, and so the conclusions in Theorems 1 and 2 hold for any value of η. By (12), the objective function of effective noise filtering becomes
F(ρ) = [BT (ρ)− T (ρ)] · ε(DF ), (13)
and the optimal relative size
ρ* = arg max_ρ F(ρ) = arg max_ρ [B_T(ρ) − T(ρ)] · ε(D_F).   (14)
According to Theorem 2, the number of kept samples should maximize the objective function F(ρ) so as to obtain the least GE bound.
In Theorem 2, only the component T (ρ) is related to the noise in DF , and a smaller T (ρ) is desired by the objective function. From (10), retaining low-noise samples means a small T (ρ). Therefore, high-noise samples should be removed so as to obtain a smaller T (ρ) and a lower GE bound for a given relative size.
Remark 1 The goodness of fit is assumed to be fixed in both Theorems 1 and 2. However, it is expected to be slightly enlarged after filtering, as some hard-to-learn (noisy or specific) samples are removed in reality. In other words, the model error is likely to be reduced by the filtering. Strictly speaking, this reduction will bring a small positive shift to B_T(ρ), i.e., B_T(ρ) in (11) is an underestimated bound for T(ρ) if the assumption is removed in the proof of Theorem 1 (see Proof A.1).
The true noise of retained samples is not always less than that of the removed ones due to the deviation of noise estimation. It implies that the estimated T (ρ) is usually optimistic (underestimated) in reality. Then the deviation of BT (ρ) will be counteracted by that of T (ρ) to some extent. In addition, the robustness of some regression models might reduce the variation of the goodness of fit in filtering, especially for large-scale data sets. Overall, the violation of the assumption about the goodness of fit is considered to have no essential influence on the theoretical results.
Theorem 1 means that not all filters are beneficial to the error bound even though they can reduce the noise level: reducing the noise level only means T(ρ) < 1, whereas effectiveness requires T(ρ) < B_T(ρ) < 1. If there does not exist an effective filtering for a data set, the optimal relative size ρ* will be equal to 1, i.e., no sample needs to be removed. The optimal relative size is determined by the three components, which are related to the sample size, model errors, and the noise level. Hence the optimal sample size is the result of a comprehensive balance of multiple factors (not just minimizing the noise level). This implies that not all noisy samples need to be removed even though the noises have been estimated almost accurately.
Theorems 1 and 2 provide theoretical foundations for the determination of effective filtering and the optimal sample selection. They are directly applicable to noise filtering when the noise estimation and model errors are ready. From a broad perspective, the two theorems can be integrated with any noise estimator. In other words, they provide a unified framework, named the optimal sample selection (OSS) framework, for output noise filtering in regression. In addition, the effectiveness of the OSS framework partially depends on the accuracy of noise estimation in reality.
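A minimal sketch of the OSS selection rule under the above description: the penalty factor ε(h, n, η) of Lemma 1 is taken as a user-supplied callable eps (its explicit form is not reproduced here), and the estimated noises and model residuals are given as arrays; the grid search below follows Equations (10), (11), (13) and (14).

    import numpy as np

    def oss_optimal_rho(est_noise, residuals, eps, h, eta=0.05):
        """Grid search for the optimal relative size rho* of Theorem 2.

        est_noise : estimated noises e_i on the full data set D
        residuals : model errors r_i = m(x_i) - y_i on D
        eps       : callable eps(h, n, eta) returning the penalty factor of Lemma 1
        Returns (rho_star, indices of the kept samples).
        """
        e2 = np.asarray(est_noise, dtype=float) ** 2
        r2 = np.asarray(residuals, dtype=float) ** 2
        n = len(e2)
        C = r2.mean() / e2.mean()                 # Eq. (8), with estimated noises
        order = np.argsort(e2)                    # high-noise samples are removed first
        eps_D = eps(h, n, eta)

        best_rho, best_F = 1.0, -np.inf
        for n_keep in range(1, n + 1):
            rho = n_keep / n
            T = e2[order[:n_keep]].mean() / e2.mean()     # Eq. (10)
            eps_DF = eps(h, n_keep, eta)
            B_T = eps_D / eps_DF * (1 + C) - C            # Eq. (11)
            F = (B_T - T) * eps_DF                        # Eq. (13)
            if F > best_F:
                best_F, best_rho = F, rho
        # Filtering is effective only if F(rho*) > 0, i.e. T(rho*) < B_T(rho*);
        # otherwise rho* = 1 and no sample is removed.
        return best_rho, np.sort(order[:int(round(best_rho * n))])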
2.2 Properties of the Three Components
This subsection studies the properties of the three components in objective function F(ρ). They are the basis of the instructions of the proposed framework in Subsection 2.3.
2.2.1 Properties of T (ρ)
By (10), the T (ρ) curve depends on the noise and the sequence of sample removing. Assume that the true noises are known in the ideal situation, then all samples can be sorted by their absolute noise values in ascending order (the last is removed first). However, a few samples may be in the wrong order in reality due to the deviation in noise estimation. So T (ρ) can be calculated in different ways.
• The true T (ρ) from the God’s-eye view. All true noises are known and can be sorted in ascending order completely. It means that the true noise value and completely correct sequence are employed in this scheme.
• The real T (ρ) in a trial. The removing sequence based on the estimated noises should be partially consistent with the truth. The true noise value is adopted to validate the effectiveness of the proposed theorems. In other words, the partially correct sequence based on noise estimation and the true noise value are employed in this scheme.
• The real average T (ρ) in multiple trials. It is obtained by averaging the real T (ρ) values over many trials in order to analyze the property in the sense of expectation.
• The estimated T (ρ) in experiment. The estimated noise values and the correspond- ing sequence (partially correct) have to be used in real filtering algorithms.
It is easy to find that both the true and estimated T (ρ) monotonously increase with ρ. The real T (ρ) in a trial should have a larger variation than the average T (ρ). The first three kinds of T (ρ) are analyzed in this section, and the last is utilized in the real filtering algorithm in Section 3.
For simplicity, the noise for exploring the properties of T(ρ) is randomly generated from a predefined distribution; it is independent of any data set and model in this subsection. Simulated results of T(ρ) are shown in Figure 1. It can be observed that the true T(ρ) is equal to 0 for ρ < 1 − NR and monotonously increases with ρ when ρ > 1 − NR, where NR denotes the noise ratio. The average T(ρ) is similar to the true version in shape, but the former is larger. In addition, the average T(ρ) is smoother than the real T(ρ). It is obvious that the true T(ρ) is less than the other two versions for any ρ. Further, the average T(ρ) is an increasing and convex function of ρ for the predefined noise distributions. The true T(ρ) curves in the two sub-figures are similar in terms of monotonicity and convexity, and the difference in the noise distribution is reflected by the slope of the T(ρ) curve.
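The true and average T(ρ) curves of Figure 1 can be reproduced by a short simulation. The sample size, noise ratio, noise distribution, number of trials, and the 75% correct-order proportion follow the figure caption; the random seed and the implementation details are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, NR, sigma, trials = 1000, 0.25, 0.5, 100

    def T_curve(order, e2):
        """T(rho) = E_{D_F}(e^2) / E_D(e^2) when samples are kept in the given order."""
        cum_mean = np.cumsum(e2[order]) / np.arange(1, len(e2) + 1)
        return cum_mean / e2.mean()        # entry k-1 corresponds to rho = k / n

    # NR * n samples carry Gaussian noise; the remaining samples are noise-free.
    e = np.zeros(n)
    e[: int(NR * n)] = rng.normal(0.0, sigma, int(NR * n))
    e2 = e ** 2

    true_order = np.argsort(np.abs(e))     # ground-truth ascending order of |e_i|
    T_true = T_curve(true_order, e2)

    # "Real" situation: 75% of positions correct, 25% randomly permuted, averaged over trials.
    T_sum = np.zeros(n)
    for _ in range(trials):
        order = true_order.copy()
        idx = rng.choice(n, size=int(0.25 * n), replace=False)
        order[idx] = rng.permutation(order[idx])
        T_sum += T_curve(order, e2)
    T_avg = T_sum / trials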
The true T (ρ) may be affected by the noise levels, including the noise ratio NR and the noise variance σ2, for a given type of noise distribution (the noise expectation is usually assumed to be zero). Note that σ2 is a general variance notation but not specialized for the Gaussian distribution. The true T (ρ) is theoretically analyzed under the assumption that the sample size n → +∞ so as to describe the properties in the form of partial derivative. The proportion relationship implied by the partial derivative holds for any sample size.
Figure 1: The ratio of noise levels T(ρ) is simulated in two situations. In the ideal (ground-truth) situation, all (100%) noises are in the correct order, and the sorted noises are displayed in the top left of each sub-figure. In the real situation, most (75%) of the noises are in the correct order and the others (25%) are randomly permuted; the real order of the sorted noises is displayed in the bottom left of each sub-figure. The noises in sub-figures (a) and (b) are from the Gaussian distribution N(0, 0.5^2) and the uniform distribution U(−1, 1), respectively. The noise ratio (number of noisy samples / n) is set to NR = 25% and the sample size is n = 1000. The real T(ρ) is the result of a single trial and the average T(ρ) is from 100 independent trials.
Property 1 The true T(ρ) has the following characteristics:

(1) T(ρ < 1 − NR) = 0, T(ρ > 1 − NR) > 0.   (15)

(2) ∂T(ρ)/∂ρ ≥ 0.   (16)

(3) ∂²T(ρ)/∂ρ² ≥ 0.   (17)

(4) ∂T(ρ)/∂NR ≥ 0.   (18)

(5) If the noise is from a symmetric Gaussian (or uniform, or Laplace) distribution, ∂T(ρ)/∂σ = 0.   (19)
The proof of Property 1 can be found in Appendix A.3. The first two properties can be observed from Figure 1. Equation (16) means that the true T (ρ) (non-strictly) monotonously increases with ρ. Equation (17) shows the convexity of T (ρ). Actually, the true T (ρ) might be non-convex when there are many equal noises. Equation (18) indicates that T (ρ) increases with the noise ratio NR. Although the noise level is related to the noise ratio and the noise variance, Equation (19) shows that T (ρ) is independent of the variance for usual symmetric noise distributions. The reason is that both the numerator and denominator in the definition of T (ρ) (Equation 10) are proportional to the variance and the variance can be eliminated at the same time. Note that (19) might not hold for some asymmetric noise distributions.
As the average T (ρ) is similar to the true T (ρ) in shape, it should have the same proportional relationship about the factors including ρ, NR, and σ in most cases. In addition, the estimated T (ρ) associates with noise estimation apart from the noise level.
2.2.2 Properties of BT (ρ)
From (11), the upper bound B_T(ρ) is related to the relative size ρ, the sample size n, the VC-dimension h, the probability η, and the coefficient C. These relationships are shown in Figure 2. It can be observed that B_T(ρ) increases with n, η and decreases with h, C. There is no significant difference among the considered η values. Besides, B_T(ρ) monotonously increases with ρ for other fixed factors (proved in Equation 21). The simulated results indicate that B_T(ρ) is a concave function of ρ. According to Theorem 1, B_T(ρ = 0.6 | n = 1000, h = 10, η = 0.05, C = 1) = 0.85 indicates that the noise level should be reduced by more than 15% (1 − 0.85) when removing 40% (1 − ρ) of the samples so as to obtain a lower GE bound with probability 0.95 (1 − η).
Figure 2: Influences on the upper bound B_T(ρ) are simulated with the following settings. Sub-figure (a), influence of the sample size n: n = 10^2, 10^3, 10^4, 10^5, h = 10, η = 0.05, C = 1; sub-figure (b), influence of the VC-dimension h: n = 1000, h = 10, 20, 50, 100, η = 0.05, C = 1; sub-figure (c), influence of the probability η: n = 1000, h = 10, η = 0.1, 0.05, 0.01, 0.001, C = 1; sub-figure (d), influence of the coefficient C: n = 1000, h = 10, η = 0.05, C = 0.25, 0.5, 1, 2, 4. The independent variable ρ is in [0.1, 1] for each sub-figure.
Property 2 B_T(ρ) has the following characteristics:

(1) ∂B_T(ρ)/∂C < 0.   (20)

(2) ∂B_T(ρ)/∂ρ > 0.   (21)

(3) ∂B_T(ρ)/∂n > 0.   (22)

(4) ∂²B_T(ρ)/(∂ρ ∂C) > 0.   (23)
The proof can be found in Appendix A.4. Note that C is inversely proportional to the initial noise level E_D(e_i^2) in (8), so Equation (20) implies that B_T(ρ) increases with E_D(e_i^2) for any ρ < 1. For example, B_T(ρ = 0.8 | E_D(e_i^2) = 2) > B_T(ρ = 0.8 | E_D(e_i^2) = 1). Equation (23) means that the slope of B_T(ρ) with regard to ρ increases with C, or decreases with E_D(e_i^2). Equations (21) and (23) indicate that the bound B_T(ρ) has a smaller increase for the larger ρ when the initial noise level is enlarged. For example, assume that E_D(r_i^2) = 1 and E_D(e_i^2) is enlarged from 0.5 to 1; then C is reduced from 2 to 1 for a fixed goodness of fit. It can be deduced that 0 < [B_T(ρ | E_D(e_i^2) = 1) − B_T(ρ | E_D(e_i^2) = 0.5)]|_{ρ=0.8} < [B_T(ρ | E_D(e_i^2) = 1) − B_T(ρ | E_D(e_i^2) = 0.5)]|_{ρ=0.6}, i.e., 0 < [B_T(0.8) − B_T(0.6)]|_{C=1} < [B_T(0.8) − B_T(0.6)]|_{C=2}. These properties can be clearly observed from Figure 2.
2.2.3 Properties of ε(DF )
Figure 3: Influences on ε(D_F) are simulated with the following settings. Sub-figure (a), influence of the sample size n: n = 500, 1000, 5000, 10000, h = 10, η = 0.05; sub-figure (b), influence of the VC-dimension h: n = 1000, h = 10, 20, 30, 40, η = 0.05; sub-figure (c), influence of the probability η: n = 1000, h = 10, η = 0.1, 0.05, 0.01, 0.001.
It can be observed that ε(DF ) increases with h and decreases with n, η for fixed ρ. There is no significant difference among the considered η values. Besides, ε(DF ) monotonously decreases with ρ when other factors are fixed. More importantly, the ε(DF ) curve is almost flat when ρ ∈ [0.6, 1]. It implies that ε(DF ) may have a smaller impact on the objective function F(ρ) than the margin between T (ρ) and BT (ρ) when the relative size ρ is large enough.
Property 3 ε(D_F) has the following characteristics:

(1) When n ≫ h,

∂ε(D_F)/∂n < 0,   (24)

∂ε(D_F)/∂ρ < 0,   (25)

∂ε(D_F)/∂h > 0.   (26)

(2) ∂ε(D_F)/∂η < 0.   (27)
These relationships can be easily obtained by (9), and the proof is omitted. In addition, the relations are verified by Figure 3.
The proposed properties are summarized in Table 1, where T(ρ) refers to the true version. The relationships between each component and its factors are denoted by different symbols: ↑ denotes a proportional relationship, ↓ an inversely proportional relationship, and × independence; insignificant proportional and insignificant inversely proportional relationships are denoted by corresponding weaker symbols. A symbol with a star means the relationship is only supported by simulation results; the others have been proved.
Table 1: Summary of the properties
2.3 Practical Instructions for OSS Framework
This part describes practical instructions for the OSS framework and explains its adaptability based on the previous simulations and properties.
2.3.1 Applications of OSS Framework
Figure 4 shows the simulation results of relevant functions for determining the effective noise filtering and optimizing the relative size ρ. From Figure 4(a), the filtering is effective (gets a lower GE bound) when the relative size is on the right of the red dot (ρ > 0.145). From Figure 4(b), the maximum of the objective function (red dot) is on the upper left of the maximum of the margin curve (blue dot) since ε(DF ) decreases with ρ and ε(DF ) > 1. Besides, the two dots have very close horizontal coordinates, and it means the optimal relative size mainly depends on the margin.
Figure 4: The sample selection is simulated based on the proposed OSS framework. The left sub-figure shows the three components of the objective function; the average T(ρ) is implemented in the same way as in Figure 1. The maximum margin between T(ρ) and B_T(ρ) is marked by a black dashed line, and a red dot is plotted at the crossing position. The right sub-figure, on the optimization of the relative size, shows the margin curve (blue) and the objective function (red), and their maximums are marked by solid dots. A vertical dashed line is added at the maximum margin. The parameter setting for both sub-figures is: sample size n = 1000, noise ratio NR = 25%, noise distribution N(0, 0.5^2), VC-dimension h = 10, probability η = 0.05, coefficient C = 1.
2.3.2 Adaptability of OSS Framework from a Qualitative Perspective
Intuitively speaking, the optimal relative size ρ∗ in (14) should have the adaptability to the data quality and noise level, so we explored the influences of the sample size n and the noise levels, including the noise ratio NR and noise variance σ2 (for a given type of noise distribution), on ρ∗.
The simulation results are shown in Figure 5. The paired top and bottom sub-figures are analyzed together in order to make a clear explanation.
(1) In the left pair (Figure 5(a) and (d)), the first new setting has a smaller sample size than the baseline setting (n = 500 < 1000). From the baseline setting to the new setting, BT (ρ) has a reduction due to (22). More importantly, sub-figure (a) indicates that the smaller the ρ value is, the more BT (ρ) is reduced. As a result, the relative size with the maximum margin becomes larger when the sample size is reduced. So does the relative size at the crossing position as shown in Figure 5(a). Although the new setting for comparison has a larger ε(DF ) due to (24), it has an insignificant influence on the maximum of the objective function. Thus the optimal relative size in Figure 5(d) also has a growing tendency. Overall, the optimal relative size ρ∗ decreases with the sample size n in the sense of expectation when the other conditions are fixed.
Figure 5: Influences of the sample size n, the noise ratio NR, and the noise variance σ^2 on the three components of the objective function F(ρ) and on the optimal relative size ρ* are displayed. All components of F(ρ) are shown in the top sub-figures; the bottom sub-figures show the corresponding objective functions, with maximums marked by dots. The component T(ρ) is implemented in the average version similar to Figure 1, and it can be considered a differentiable function of ρ in the sense of expectation. Three new settings are compared with a baseline setting. The results under the baseline setting are denoted by the red dashed lines, and those under the new settings correspond to the blue solid lines. The baseline setting is: n = 1000, NR = 25%, noise variance σ^2 = 0.5^2 (Gaussian distribution), h = 10, η = 0.05, C = 1. The new settings for comparison are as follows. The left sub-figures: n = 500; the middle sub-figures: NR = 35%, C = 25%/35% = 5/7; the right sub-figures: σ^2 = 0.2^2, C = 0.5^2/0.2^2 = 25/4. Since C is inversely proportional to the noise level E_D(e_i^2), it is adjusted according to the noise levels of the new settings. Omitted variables are the same as those of the baseline setting.
(2) In the middle pair (Figure 5(b) and (e)), the second new setting has a larger noise ratio than the baseline setting (NR = 35% > 25%). By (8), the C value becomes smaller under the new setting. It can be observed from Figure 5(b) that both B_T(ρ) and T(ρ) are affected by the change of the noise ratio. B_T(ρ) is larger under the new setting because of (20). Moreover, Equation (23) implies that the smaller the ρ value is, the larger the increment of B_T(ρ) is. The average T(ρ) function under the new setting is larger than or equal to that under the baseline setting because of (18). The final result is that both the relative size at the maximum margin and the optimal relative size become smaller from the baseline setting to the new setting. Overall, the optimal relative size ρ* decreases with the noise ratio in the sense of expectation when the other conditions are fixed.
(3) In the right pair (Figure 5(c) and (f)), the third new setting has a smaller noise variance than the baseline setting (σ^2 = 0.2^2 < 0.5^2). For a given type of noise distribution, neither T(ρ) nor ε(D_F) is affected by the variance, due to Equations (19) and (9), whereas B_T(ρ) becomes smaller under the new setting because of Equations (20) and (8). Moreover, both Equation (23) and Figure 5(c) support that the reduction of B_T(ρ) is more evident for a smaller ρ value. The final result is that both the relative size at the maximum margin and the optimal relative size become larger when the noise variance is reduced. Overall, the optimal relative size ρ* decreases with the noise variance in the sense of expectation when the other conditions are fixed. In addition, the crossing point for determining effective noise filtering has the same tendency as ρ*.
The following property provides a strict description of the relationship between the optimal relative size and noise variance.
Property 4 (Adaptability of ρ* to the noise variance) Assume the noise is from a symmetric Gaussian (or Laplace, or uniform) distribution. Then

∂ρ* / ∂(σ^2) < 0   (28)

holds for a fixed goodness of fit and the true T(ρ), where σ^2 is the noise variance.
The proof can be found in Appendix A.5. It indicates that the optimal relative size should be reduced, or more samples could be removed, when the noise variance is enlarged. This is consistent with the result in Figure 5(c) and (f). Actually, they provide the same suggestion in tuning the relative size from the theoretical and simulated perspectives, respectively. They share the same assumptions, including the fixed goodness of fit and the symmetry of the noise distribution. The difference between Figure 5 and Property 4 lies in the version of T(ρ): the former utilizes the average T(ρ), while Property 4 adopts the true T(ρ). The main reason for the same suggestion is that the average T(ρ) and the true version have similar properties.
In sum, the optimal relative size ρ* decreases with the sample size, the noise ratio, and the noise variance in the sense of expectation when the other conditions are fixed. So do the relative size at the crossing position and the one with the maximum margin. In other words, more samples are allowed to be removed when the sample size and the noise level
(NR and σ^2) are large enough. On the contrary, more samples should be retained to avoid overcleansing. This accords with our intuition. Therefore, the proposed OSS framework has the adaptability to the data quality and noise level.
3. Covering Distance Filtering under OSS Framework
Two proposals, i.e. the covering interval and covering distance, are presented to distinguish noisy samples in regression. A novel filtering algorithm is designed by integrating them with the proposed OSS framework.
3.1 Covering Interval: a Selector for Low-noise Samples
Definition 5 (Covering interval) [u, v] is a covering interval of the true output y_i^0 if y_i^0 ∈ [u, v]. The center of the interval is c = (u + v)/2, and the radius is r = (v − u)/2.
Low-noise samples can be identified by the covering interval according to the rule: falling in the covering interval corresponds to a smaller noise, and getting out of the interval indicates a larger noise. This is supported by the following propositions.
Let F (e|µ, σ), f(e) be the cumulative distribution function (CumDF) and probability density function (PDF) of the noise e, respectively. µ denotes the mean and σ2 is the variance. Note that F (e|µ, σ) is a general notation for any distribution but not specialized for the Gaussian distribution.
Proposition 1 (A necessary condition for low-noise samples) Assume that the noise e_i on y_i^0 is from a symmetric distribution with CumDF F(e | µ = 0, σ). e^(1), e^(2) are two sets of noises with variances σ_1^2, σ_2^2 on y_i^0. [u, v] is any covering interval of y_i^0. If ∂F(e | µ = 0, σ)/∂σ < 0 for e > 0 and σ_1^2 < σ_2^2, then

P{y_i^0 + e_i ∈ [u, v] | e_i ∈ e^(1)} > P{y_i^0 + e_i ∈ [u, v] | e_i ∈ e^(2)},   (29)

where P(·) denotes the probability function.
Corollary 1 Assume that the noise e_i on y_i^0 is from a Gaussian (or uniform, or Laplace) distribution with the CumDF F(e | µ = 0, σ). e^(1), e^(2) are two sets of noises with variances σ_1^2, σ_2^2 on y_i^0. [u, v] is any covering interval of y_i^0. Then (29) holds for σ_1^2 < σ_2^2.
Proposition 1 means that samples with low noises are more likely to fall in the covering interval than those with large noises. Corollary 1 shows that the conclusion still holds for usual distributions under fewer preconditions. Their proofs can be found in Appendices A.6 and A.7.
Proposition 2 (A sufficient condition for low-noise samples) Assume that the noise e on the true output is from a symmetric distribution with the PDF f(e), i.e., f(−e) = f(e). [u, v] is any covering interval of y_i^0, and e ∈ {e_i, i = 1, 2, ..., n}. Then

E(|e_i|^p | y_i^0 + e_i ∈ [u, v]) < E(|e_i|^p | y_i^0 + e_i ∉ [u, v])   (30)

holds for any p ∈ N+.
The proof can be found in Appendix A.8. It means that the samples within any covering interval have a lower average noise than those out of the interval. Proposition 2 is applicable to some special symmetric noise distributions, including the zero-mean Gaussian, uniform, and Laplace distributions. The conclusion is still true if the noise distribution is a mixture of multiple symmetric distributions.
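Proposition 2 can be checked quickly by a Monte Carlo simulation; the Gaussian noise, the interval endpoints, and the sample count below are illustrative choices rather than settings used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    y0, u, v = 0.0, -0.8, 1.2            # a covering interval: y0 lies in [u, v]
    e = rng.normal(0.0, 1.0, 200000)     # symmetric (Gaussian) noise
    inside = (y0 + e >= u) & (y0 + e <= v)

    for p in (1, 2, 3):
        low = np.mean(np.abs(e[inside]) ** p)
        high = np.mean(np.abs(e[~inside]) ** p)
        print(p, low < high)             # expected: True for every p, as in (30)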
The covering interval is required for distinguishing between the low noise and high noise. Unfortunately, it is difficult to decide whether a given interval is the covering interval since the true output is usually unknown in reality. It is helpful if we can deduce that a specific interval covers the true output with a relatively large probability. Actually, this kind of covering interval can be prepared by means of model predictions.
Definition 6 The prediction interval of J base models {y = m_j(x), j = 1, 2, ..., J} for y_i is defined as

[u_i, v_i] = [min_j m_j(x_i), max_j m_j(x_i)].   (31)
Note that whether the above interval covers the true value is unknown yet.
If the base models {m_j(x), j = 1, 2, ..., J} are trained on different data subsets, we can assume the independence of the m_j(x). Then the events y_i^0 < m_j(x_i) for different j values are independent, and similarly the independence of the events y_i^0 > m_j(x_i) over j holds, too. According to the principle of indifference (Peters, 2014), we can assume P{m_j(x_i) < y_i^0} = P{m_j(x_i) > y_i^0} = 1/2. By the above independence and indifference, the covering probability is

P{y_i^0 ∈ [u_i, v_i]} = P{min_j m_j(x_i) ≤ y_i^0 ≤ max_j m_j(x_i)}
                      = 1 − P{y_i^0 < min_j m_j(x_i)} − P{y_i^0 > max_j m_j(x_i)}
                      = 1 − P{y_i^0 < m_j(x_i), ∀j} − P{y_i^0 > m_j(x_i), ∀j}
                      = 1 − 2 · (1/2)^J.   (32)

It indicates that a larger J produces a higher covering probability. However, increasing J will enlarge the interval radius and then raise the deviation of the noise estimation (see Property 6). In order to obtain a good balance, the parameter J is selected as a trade-off so that the uncovering probability is around the popular significance levels, such as 0.1, 0.05 and 0.01. As P{y_i^0 ∈ [u_i, v_i]} = 0.9 ⇒ J ≈ 4.3 and P{y_i^0 ∈ [u_i, v_i]} = 0.99 ⇒ J ≈ 7.6, J = 5, 6, 7 is considered in the construction of the covering interval.
Since independence is required, the model predictions are generated with a subset scheme in practice: the original data set is randomly partitioned into J subsets, the regression model is trained on each subset, and each trained model is tested on the whole data set.
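A sketch of this subset scheme is given below. The k-nearest-neighbor regressor is only an illustrative base model, and min_J derives the smallest J reaching a target covering probability from Equation (32).

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def covering_intervals(X, y, J=5, seed=0):
        """Prediction interval [u_i, v_i] of Definition 6 from J subset models.

        The data set is split into J disjoint random subsets; a base model is
        trained on each subset and evaluated on the whole data set.
        """
        rng = np.random.default_rng(seed)
        parts = np.array_split(rng.permutation(len(y)), J)
        preds = np.empty((J, len(y)))
        for j, idx in enumerate(parts):
            model = KNeighborsRegressor(n_neighbors=5).fit(X[idx], y[idx])
            preds[j] = model.predict(X)
        return preds.min(axis=0), preds.max(axis=0)      # Eq. (31)

    def min_J(p):
        """Smallest J whose covering probability 1 - 2 * (1/2)**J reaches p (Eq. 32)."""
        return int(np.ceil(1 + np.log2(1.0 / (1.0 - p))))   # e.g. p = 0.9 -> J = 5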
3.2 Covering Distance: an Approach of Noise Estimation
The covering distance (CD) is proposed to estimate the absolute noise. On the basis of the noise estimation, a noisy data set can be filtered under the proposed OSS framework.
3.2.1 Definition of the Covering Distance
Definition 7 (Covering distance, CD) The covering distance from y_i to the covering interval [u, v] is defined as

R_i = (1/2) ( inf_{y_i^0 ∈ [u,v]} |e_i| + sup_{y_i^0 ∈ [u,v]} |e_i| ).   (33)

By the noise definition in (1), the absolute noise has the following bounds:

inf |e_i| = min_{y_i^0 ∈ [u,v]} |y_i − y_i^0| = { 0, if y_i ∈ [u, v]; min{|y_i − u|, |y_i − v|}, otherwise } = { 0, if y_i ∈ [u, v]; |y_i − c| − r, otherwise },   (34)

sup |e_i| = max_{y_i^0 ∈ [u,v]} |y_i − y_i^0| = max{|y_i − u|, |y_i − v|} = |y_i − c| + r,   (35)

where c, r denote the center and radius of the covering interval, respectively. Substituting (34) and (35) into (33), we get a practical definition of the covering distance

R_i = { (1/2)(|y_i − c| + r), if y_i ∈ [u, v]; |y_i − c|, otherwise },   (36)

where the interval center c = (u + v)/2 and the interval radius r = (v − u)/2. Besides, the covering distance can be simplified to the absolute covering distance (ACD)

R_i^A = |y_i − c|.   (37)
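Equations (36) and (37) translate directly into code; u and v stand for the endpoints of the constructed covering intervals (for example, produced by the subset scheme of Section 3.1).

    import numpy as np

    def covering_distance(y, u, v):
        """Covering distance R_i of Eq. (36) and absolute covering distance of Eq. (37)."""
        c = (u + v) / 2.0                  # interval center
        r = (v - u) / 2.0                  # interval radius
        dist = np.abs(y - c)
        inside = (y >= u) & (y <= v)
        R = np.where(inside, 0.5 * (dist + r), dist)   # Eq. (36)
        return R, dist                                 # dist is the ACD of Eq. (37)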
Figure 6 shows the mapping from y_i to the absolute noise. One possible position of y_i^0 is plotted inside the covering interval. The true absolute noise is zero when y_i = y_i^0, and it linearly increases with the distance |y_i − y_i^0| (see the blue dotted line). The area between the lower and upper bounds from (34) and (35) is gray-colored, and the covering distance lies in the vertical center of the gray area.
3.2.2 Theoretical Property
As an estimator of the true absolute noise, the covering distance has the following properties.
Figure 6: Covering distance and the true absolute noise. The horizontal axis is y_i and the vertical axis is |e_i|; the curves show the true noise, the lower bound of the noise, the upper bound of the noise, and the covering distance.
Property 5 (Unbiasedness) Assume the probability density function (PDF) of the interval center f_c(·) is symmetric about the true output y_i^0, i.e., f_c(y_i^0 − c) = f_c(y_i^0 + c), and y_i^0 ∈ [u, v]. Then E_c(R_i) = |e_i|, ∀ y_i ∉ [u, v].

The proof can be found in Appendix A.9. It indicates that the covering distance is an unbiased estimate of |e_i| for any y_i ∉ [u, v] if the distribution of the interval center c is symmetric about y_i^0. In other words, the average estimated noise is near the true noise as long as the centers of the covering intervals are around the true output. Note that the unbiasedness of the CD does not hold for y_i ∈ [u, v].
The expected absolute deviation (EAD) of the CD is defined as the expectation of its distance from |e_i| with respect to c:

EAD_CD = E_c |R_i − |e_i|| = ∫ |R_i − |e_i|| · f_c(c) dc.   (38)

The expected relative deviation (ERD) of the CD is defined as

ERD_CD = E_c (|R_i − |e_i|| / |e_i|) = ∫ (|R_i − |e_i|| / |e_i|) · f_c(c) dc.   (39)
The EADs of the two bounds in (34) and (35) have similar forms:

EAD_L = E_c |inf |e_i| − |e_i||,   EAD_U = E_c |sup |e_i| − |e_i||.

The ERDs of the two bounds are

ERD_L = E_c (|inf |e_i| − |e_i|| / |e_i|),   ERD_U = E_c (|sup |e_i| − |e_i|| / |e_i|).
Property 6 Assume the PDF of the interval center f_c(·) is symmetric about y_i^0 and y_i^0 ∈ [u, v]. For any y_i ∉ [u, v],

(1) EAD_CD < EAD_L = EAD_U.

(2) ERD_CD < ERD_L = ERD_U.

(3) ∂EAD_CD / ∂r > 0.
The proof can be found in Appendix A.10. It indicates that the covering distance outperforms the two bounds in estimating the noise in terms of the EAD and ERD when the noise is relatively large (y_i ∉ [u, v]). Besides, the deviation of the CD increases with the interval radius r. In other words, the shorter the covering interval is, the more accurate the estimator is. Particularly, it is easy to prove that EAD_CD = r/2 when the interval center c is from the uniform distribution U(y_i^0 − r, y_i^0 + r). In addition, the ACD in (37) has the same characteristics as the CD in Properties 5 and 6 since R_i^A ≡ R_i for any y_i ∉ [u, v].
3.2.3 Empirical Property
It is assumed that y_i^0 ∈ [u, v] in Properties 5 and 6. However, Equation (32) implies that y_i^0 may be out of the constructed covering interval, so the previous properties need to be reexamined in a more realistic situation. Assume the interval center c is from three distributions: the uniform distribution U(y_i^0 − 1.25r, y_i^0 + 1.25r), the Gaussian distribution N(µ = y_i^0, σ = r/2) and the Laplace distribution Lp(µ = y_i^0, σ = r), where µ denotes the mean and σ is the standard deviation. Their covering probabilities (P{y_i^0 ∈ [u_i, v_i]} = P{y_i^0 ∈ [c − r, c + r]}) are 0.8, 0.95 and 0.76, respectively.

Figure 7 shows the density plots of the true noise and four noise approximations, including inf |e_i| in (34), sup |e_i| in (35), R_i in (36), and R_i^A in (37). Generally, the lower bound inf |e_i| underestimates the noise, and the upper bound sup |e_i| overestimates it for all predefined PDFs. From Figure 7(d), (e) and (f), the covering distance R_i gives an unbiased estimate for |e_i| > 2r. Indeed, |e_i| > 2r implies y_i ∉ [u, v] with a large probability (at least the covering probability),
P(y_i ∉ [u, v]) = P(|y_i − c| > r)
              ≥ P(|y_i − y_i^0| − |y_i^0 − c| > r)
              ≥ P(2r − |y_i^0 − c| > r)
              = P(|y_i^0 − c| < r)
              = P(y_i^0 ∈ [u, v]).
It means that the unbiasedness of Property 5 is verified in the simulation, whereas R_i usually makes an overestimation when |e_i| < r. This bias mainly comes from the interference of the upper bound (R_i = (inf |e_i| + sup |e_i|)/2), as the smallest possible value of sup |e_i| is the radius r rather than zero in (35). In a word, the covering distance is unbiased for high noises and produces an overestimation for low noises. Compared with the covering distance, the ACD estimator performs better for low noises as it gets rid of the minimum limit of sup |e_i|.
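The unbiasedness for high noises can also be checked numerically. The sketch below uses the uniform center distribution U(y_i^0 − r, y_i^0 + r) mentioned after Property 6 and an illustrative noise value with |e_i| > 2r; the empirical mean of R_i should be close to |e_i|.

    import numpy as np

    rng = np.random.default_rng(2)
    y0, r = 0.0, 1.0
    e = 2.5 * r                                 # a high noise, |e| > 2r
    y = y0 + e
    c = rng.uniform(y0 - r, y0 + r, 100000)     # centers symmetric about y0, so y0 in [u, v]
    u, v = c - r, c + r

    inside = (y >= u) & (y <= v)
    R = np.where(inside, 0.5 * (np.abs(y - c) + r), np.abs(y - c))
    print(R.mean(), abs(e))                     # the two values should nearly coincide (Property 5)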
Figure 8 shows the EAD and ERD curves under three predefined PDFs of c.
• It can be observed from Figure 8 that EADCD ≤ EADACD < EADL ≤ EADU
and ERDCD ≤ ERDACD < ERDL ≤ ERDU when |ei| > r. It means that the
22
ei
in f je
ei
in f je
ei
in f je
ei
R i
ei
R i
ei
R i
ei
R A i
ei
R A i
ei
R A i
ei
su p je
ei
su p je
ei
su p je
(l) sup |ei| and ei (Laplace)
Figure 7: The density maps of the true noise ei and four approximations (inf |ei|, Ri, RAi , sup |ei|) are displayed. In order to obtain a stable result, the interval center c is set to be the equally spaced percentiles (0.005 : 0.01 : 0.995) of the predefined PDFs. One hundred noise values are uniformly taken from the interval [−3r, 3r]. Thus there are 10000 dots in each density plot. A red solid line representing the accurate estimation is added to each sub-figure.
23
Figure 8: The top sub-figures (a)-(c) show the predefined PDFs of the interval center c (uniform, Gaussian, Laplace); the middle and bottom sub-figures display the EAD and ERD curves, respectively. In the top sub-figures, the area of y_i^0 ∈ [u, v] (u < y_i^0 < v ⇔ c − r < y_i^0 < c + r ⇔ y_i^0 − r < c < y_i^0 + r) is blue-colored. The EAD and ERD curves are obtained in the same way as the density plots in Figure 7, i.e., the variable of integration c is taken from 100 equally spaced percentiles (0.005 : 0.01 : 0.995) of the predefined PDFs. Since R_i depends on y_i, c, r and y_i = y_i^0 + e_i, EAD_CD and ERD_CD can be considered as functions of e_i for fixed y_i^0 and r (c is the variable of integration). The independent variable can also be replaced with y_i (e_i ∈ [−3r, 3r] ⇔ y_i ∈ [y_i^0 − 3r, y_i^0 + 3r]).
These inequalities mean that the CD outperforms the other estimators for large noises in terms of both the EAD and the ERD, which is generally consistent with Property 6. The difference lies in the independent variable: Property 6 states the result with regard to y_i (y_i ∉ [u, v]), whereas Figure 8 displays the results with respect to e_i (|e_i| > r) for convenience of calculation. Both y_i ∉ [u, v] and |e_i| > r refer to large noise.
• When |e_i| < r, EAD_L and EAD_ACD may be less than EAD_CD. This is because the lower bound in (34) usually underestimates the noise and therefore suits low noises, while the upper bound in (35) overestimates and interferes with the low-noise estimate of the covering distance (R_i = (inf|e_i| + sup|e_i|)/2).
• From Figure 8(d), (e) and (f), all EADs become stable when |e_i| > 2r. The reason is that |e_i| > 2r ensures y_i ∉ [u, v] with a relatively large probability (at least the covering probability). According to Property 6, the EAD is a constant for given f_c(c) and r when y_i ∉ [u, v], so all EAD curves become stable when |e_i| > 2r. Besides, both EAD_L and EAD_U are very close to r for all predefined probability density functions when |e_i| > 2r.
• All EAD_CD and EAD_ACD curves are wavy when |e_i| < r. This comes from the instability of R_i − |e_i|. For example, in Figure 6, R_i − |e_i| increases with y_i in [u, y_i^0], decreases with y_i in [y_i^0, c + δ], and then increases again in [c + δ, v], where δ is the horizontal distance from the interval center c to the crossing point of |e_i| and R_i. In contrast, the two bounds have monotone EAD curves with regard to |e_i|, as their deviations are simpler than that of the CD; for example, inf|e_i| − |e_i| decreases with y_i in [u, y_i^0] and increases in [y_i^0, v] in Figure 6. Indeed, the EAD_CD curves are based on the density maps in Figure 7(d), (e) and (f), and the instability of R_i − |e_i| can also be observed there.
• From Figure 8(g), (h) and (i), all ERDs generally decrease with |e_i| since they are inversely proportional to the absolute noise. When |e_i| > r, ERD_CD < 50% for the uniform distribution, ERD_CD < 30% for the Gaussian distribution, and ERD_CD < 20% for the Laplace distribution. This means the covering distance has small ERDs when estimating large noises.
An excessively large ERD (e.g., ERD > 50%) may confuse noise comparison or ranking. For example, consider two comparisons based on a noise estimator: case 1 {A = 50 ± 2 vs. B = 30 ± 2} and case 2 {A = 5 ± 2 vs. B = 3 ± 2}. Although both cases have the same absolute deviation (±2), they differ greatly in relative deviation. In case 1 it is clear that noise A is larger than noise B, while in case 2 the comparison is obviously uncertain, because the relative deviation in case 2 is much larger than that in case 1. A noise estimator with a large ERD may lead to unreliable or wrong noise comparisons and should be avoided wherever possible.
In short, the covering distance (CD) is an accurate and reliable estimator, especially for high noises, while the absolute covering distance (ACD) is comparable with the CD and has a lower deviation for low noises.
3.3 Covering Distance Filtering for Regression
This part shows the filtering difference between the CD and ACD, and presents the filtering algorithm based on CD.
3.3.1 Filtering based on CD and ACD
The optimal sample selection in (14) provides a unified framework for noise filtering, and the (absolute) covering distance in (36) and (37) can be embedded in the framework. Specifically, the joint points are the estimations of T(ρ) and the coefficient C. T̂(ρ) in (40) is obtained by replacing e_i^2 with R_i^2 in the definition of T(ρ). Ĉ in (41) is estimated from the model errors r_i^(j) = m_j(x_i) − y_i and the term Σ_{i=1}^n R_i^2 / n. Compared with the definition of C in (8), Ĉ takes the maximum over the J sets of prediction errors. The reason is that the covering distance overestimates low noises, so the denominator term Σ_{i=1}^n R_i^2 / n in (41) is an overestimate of E_D(e_i^2); taking the maximum in (41) weakens the negative impact of the noise estimation. In addition, T(ρ) and C can also be estimated in the same way with the absolute covering distance R_i^A in (37).
By (11) and (13), the estimated objective function in (42) can be written as

F̂(ρ) = (Ĉ + 1) · ε(D) − (Ĉ + T̂(ρ)) · ε(D_F),   (43)

where ε(D) and ε(D_F) depend on n, ρ, h, η through (6) and (9).
Considering that the CD and ACD may be unreliable in estimating and ranking low noises (large ERDs), low-noise samples are directly retained in noise filtering. According to Proposition 2, they can be identified through the covering interval. The sample selection is then optimized over the high-noise samples (y_i ∉ [u, v]) under the proposed OSS framework,

ρ* = arg max_{ρ > n_c/n} F̂(ρ),   (44)

where ρ* denotes the estimated optimal relative size and n_c is the number of low-noise samples (y_i ∈ [u, v]).
It is obvious that ρ* satisfies ∂F̂(ρ)/∂ρ = 0 when n → ∞, where, by (43),

∂F̂(ρ)/∂ρ = −(∂T̂(ρ)/∂ρ) · ε(D_F) + (Ĉ + T̂(ρ)) · (−∂ε(D_F)/∂ρ).   (45)
Note that the ACD in (37) provides a lower estimate than the CD in (36) for the low noises in D_F, i.e., R_i^A < R_i for y_i ∈ [u, v], so the sums of squared noise estimates over D_F satisfy Σ_{D_F}(R_i^A)^2 < Σ_{D_F} R_i^2, while −∂ε(D_F)/∂ρ > 0. Consequently, ∂F̂(ρ)/∂ρ based on the ACD is less than that based on the CD. Considering that the slope of F̂(ρ) generally decreases with ρ, we deduce that the estimated relative size ρ* from the ACD is smaller than that from the CD. In other words, filtering based on the ACD removes more samples than filtering based on the CD, so the ACD takes a higher risk of overcleansing than the CD in noise filtering.
The CD and ACD are compared in the following simulation.
Let D_in, D_out be the subsets consisting of samples inside and outside the covering interval, respectively, with sizes n_c = #{D_in} and n_p = #{D_out}. Considering that the sorted sequences of D_in from both the CD and the ACD may be unreliable (large ERDs), the samples in D_in and D_out are sorted separately and those in D_out are removed first.
Figure 9 shows the filtering results based on the CD and ACD on the benchmark data set Space ga (detailed data information is given in Table 5). In Figure 9(a), the optimal relative sizes satisfy ρ*_ACD < ρ*_CD < ρ*; in Figure 9(b) the relation becomes ρ*_ACD < ρ*_CD ≈ ρ*. That is, filtering with the ACD removes more samples than the CD and than the true ρ*, which verifies the previous deduction that the ACD takes a higher risk of overcleansing than the CD. Clearly, the optimal relative size from the CD is closer to the true one, and when the noise ratio is large enough, the filtering based on the CD is comparable with the true ρ*.
Figure 9: The sample selections based on the CD and ACD are compared under the OSS framework; sub-figure (a) shows the estimated objective function for NR = 10% and (b) for NR = 30%. Gaussian noises from N(0, 0.5^2) are artificially added to the data set Space ga (sample size n = 3107, noise ratio NR = 10%, 30%). The model predictions are produced by the kNN model (k = 3) in the subsets scheme (J = 5). Since the sequence of sample removing based on the CD is the same as that from the ACD, only one true objective function is displayed as the baseline; in other words, the three objective functions in each sub-figure share the same removal sequence. The true F(ρ) is computed from the true noise, while the others are based on the noise estimates in (36) and (37). The reference line at ρ = n_c/n separates the low-noise (D_in) and high-noise (D_out) samples.
It is worth noting in Figure 9 that a valley occurs at ρ = n_c/n in each curve, especially for those from the CD and ACD. The reason is that the samples in D_in and D_out are sorted separately, and the estimated noises of a few samples in D_in are larger than the smallest estimate in D_out; these samples are called overflowing samples. As mentioned before, the noise in D_in is usually overestimated by the CD and ACD (see Figure 8), while the estimation for D_out is unbiased (Property 5). In truth, there are fewer overflowing samples under the true noise than under the CD or ACD, which is verified by the fact that the valley in the true F(ρ) curve is insignificant. In addition, F̂(ρ) is in reality a smooth function of ρ, while the true F(ρ) is slightly rough because the removal sequence is based on the noise estimates rather than the true values.
The objective function estimated by the ACD is closer to the true curve because it has a lower deviation for some low noises. In estimating the objective function, the ACD wins in the distance from the true function, while the CD wins in the shape and the trend. Compared with the ACD, the CD estimator allows more samples to be retained. As a result, it has a lower risk of overcleansing and is closer to the true optimal sample selection. Thus the CD estimator is adopted in the filtering algorithm.
3.3.2 Filtering Algorithm based on CD
Let D_in, D_out be the subsets consisting of samples inside and outside the covering interval, respectively, with sizes n_c = #{D_in} and n_p = #{D_out}. The noise filtering is executed as follows. Firstly, the samples in D_in are regarded as low-noise and retained directly, since the covering distance (CD) has large relative deviations when estimating low noises. Secondly, the noise of each sample is estimated by the CD. Then the samples in D_out are removed one by one according to the estimated absolute noise (largest noise first). Whenever a sample is removed, the objective function F̂(ρ) for the smaller ρ is estimated by (43). Finally, the removal stops at the maximum of F̂(ρ).
Algorithm 1 Covering distance filtering (CDF) algorithm for regression.
Input: Regression data set D = {(x_i, y_i), i = 1, 2, ..., n}; base models y = m_j(x), j = 1, 2, ..., J.
Output: Filtered data set D_F.
1: Train and test the base models in the subsets scheme, so that each sample has J model predictions and errors.
2: Compute the covering interval for each sample by (31).
3: Compute the CD value for each sample by (36). Sort the samples in D_out by the CD in ascending order and obtain a new set D'_out = {(x_i', y_i')}_{i'=1}^{n_p}.
4: Estimate the coefficient Ĉ by (41); compute ε(D) by (6).
5: for s = 1 to n_p do
6:     n_F = n_c + s, ρ_s = n_F/n, D_F = D_in ∪ {(x_i', y_i') ∈ D'_out}_{i'=1}^{s};
7:     Compute T̂(ρ_s) by (40); compute ε(D_F) by (9).
8:     Estimate the objective function F̂(ρ_s) by (43).
9: end for
10: s* = arg max_s F̂(ρ_s), D_F = D_in ∪ {(x_i', y_i') ∈ D'_out}_{i'=1}^{s*}.
Algorithm 1 summarizes the proposed covering distance filtering (CDF). Model predictions and errors are generated in the subsets scheme in step 1. In steps 2-3, the covering interval and the CD are obtained from the model predictions and the real outputs, and the samples in D_out are sorted to decide the removal sequence. Step 4 computes the fixed terms in (43), namely the coefficient Ĉ and ε(D). In steps 5-9, the objective function F̂(ρ) is estimated for each possible filtering along the removal sequence. Step 10 selects the filtering at which F̂(ρ) achieves its maximum (ρ* = (n_c + s*)/n). In our experiments, the traversal calculation of F̂(ρ_s) (steps 5-9) is executed efficiently through vector operations (vectors T̂(ρ_s) and ε(D_F)).
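For concreteness, the selection loop in steps 5-10 could be sketched as below. This is an illustrative reimplementation, not the authors' code: the out-of-interval samples are assumed to be already sorted by CD (step 3), and the bound terms of Equations (6) and (9) and the estimate T̂(ρ) of Equation (40) are not reproduced here but are wrapped in hypothetical helpers (eps, t_hat) supplied by the caller, with eps simplified to a function of the subset size only.

import numpy as np
from typing import Callable

def cdf_select(n_p: int, n_c: int, n: int, c_hat: float,
               eps: Callable[[int], float],
               t_hat: Callable[[float], float]) -> int:
    # Sketch of steps 5-10 of Algorithm 1: returns s*, the number of sorted
    # out-of-interval samples to retain (largest-CD samples are dropped).
    eps_d = eps(n)                              # fixed term, step 4
    f_hat = np.empty(n_p)
    for s in range(1, n_p + 1):                 # steps 5-9
        rho_s = (n_c + s) / n
        eps_df = eps(n_c + s)
        # Equation (43): F_hat = (C+1) eps(D) - (C + T_hat(rho)) eps(D_F)
        f_hat[s - 1] = (c_hat + 1) * eps_d - (c_hat + t_hat(rho_s)) * eps_df
    return int(np.argmax(f_hat)) + 1            # step 10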
The relative size is limited to ρ ∈ {(n_c + 1)/n, ..., 1}, so the maximum objective value must exist. If every candidate filtering is determined to be ineffective by (7), the optimal relative size equals 1. In most cases T̂(ρ) is convex in ρ and B_T(ρ) is concave, so the estimated F̂(ρ) is usually a concave function of ρ ∈ [(n_c + 1)/n, 1]. The optimization can therefore also be implemented with other search methods such as binary search or gradient-based search.
Assume that C_j(n) is the time complexity of the j-th base model. Only the first step of Algorithm 1, computing the model predictions, has complexity C_j(n); the remaining steps are linear in n. Hence the total complexity is T(CDF) = Σ_{j=1}^J C_j(n) + n_p · n, where n_p denotes the number of samples outside the covering interval. If the kNN model is employed in the subsets scheme, this becomes T(CDF) = (log(n) + n_p) · n.
4. A Real Example: Noise Filtering on Apparent Age Data Set
4.1 Competition Data Set Information
Apparent age estimation has attracted more and more researchers because of its potential real-world applications such as forensics and social media (Rothe et al., 2018). In apparent age estimation, each facial image is labeled by multiple individuals; the mean age (rather than the real age) is taken as the ground-truth age, and the uncertainty is described by the standard deviation (Std). Briefly speaking, face age estimation consists of two crucial stages: age feature representation and age estimator learning (Liu et al., 2015). Huo et al. (2016) provide four sets of feature representations, based on fine-tuned deep models, for the images in the age estimation competition data set. The original images are from ICCV 2015 (training set: 2463×2; validation set: 1136×2) and CVPR 2016 (training set: 4113×2; validation set: 1500×2) (Escalera et al., 2015); images are mirrored as data augmentation (×2). The data set contains 90 features and 18424 samples in each feature representation. The following age noise identification is based on the given features and labels.
As the apparent age is labeled by multiple individuals, the age value may be inconsistent with the facial image. The noise filtering on apparent age data set aims to find the most inconsistent label(s) and improve the prediction ability of the model trained on it. The proposed CDF (J = 5) is independently executed on the data sets with four sets of different feature representations. Predictions are obtained by kNN (k = 5) regressor in the subsets scheme. Considering the randomness of data partition, each noise filtering round is repeated five times and the covering distance (CD) is averaged.
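A ranking in the spirit of Table 2 could be reproduced by averaging the CD values over the four feature representations and the five repetitions. The array layout below (one CD value per image, feature set and repetition) is an assumption made for illustration, not the authors' data format.

import numpy as np

def rank_by_average_cd(cd: np.ndarray, image_names, top_k: int = 20):
    # cd is assumed to have shape (n_samples, 4, 5): one CD value per image,
    # feature representation (4 sets) and repetition (5 runs).
    avg_cd = cd.mean(axis=(1, 2))          # average over sets and runs
    order = np.argsort(avg_cd)[::-1]       # most suspicious labels first
    return [(image_names[i], float(avg_cd[i])) for i in order[:top_k]]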
Figure 10(a) displays the age distribution by means of a density curve and a histogram. Figure 10(b) shows some confident examples (those with the smallest average CD) in different age intervals; the apparent age and the image name are given above and below each image, respectively.
4.2 Noise Filtering Results
It is known that high-noise samples usually have large CDs. Table 2 lists the images with the 20 largest average CDs (over the four feature sets and five repetitions) and those with a maximum CD over 4. Considering that a facial image is similar to its mirror image (such as 005052.jpg and 005052t.jpg), the two are listed adjacently and only the one with the larger average is shown. The CD values beyond 4 have a light blue background, and the average CDs over 4 are bolded. The label bias denotes the relationship between the age label and the covering interval: the symbol ↑ means the label is larger than the interval center, i.e., the label seems to be overestimated, while the symbol ↓ has the opposite meaning.
No. | Name | Age label | Std. | Label bias | CD (Fea. 1) | CD (Fea. 2) | CD (Fea. 3) | CD (Fea. 4) | Aver. CD | Source
1 | 005152.jpg | 11.3 | 14.1 | ↑ | 8.82 | 8.62 | 6.80 | 7.39 | 7.91 | CVPR2016 validation
2 | 005152t.jpg | 11.3 | 14.1 | ↑ | 7.88 | 10.17 | 4.84 | 7.88 | 7.69 | CVPR2016 validation
3 | 000115t.jpg | 89.2 | 6.5 | ↑ | 6.36 | 6.29 | 5.18 | 5.50 | 5.83 | CVPR2016 training
4 | 000115.jpg | 89.2 | 6.5 | ↑ | 5.94 | 6.27 | 5.15 | 5.25 | 5.65 | CVPR2016 training
5 | 005165.jpg | 87.9 | 6.6 | ↑ | 4.94 | 4.72 | 3.78 | 4.13 | 4.39 | CVPR2016 validation
6 | 005165t.jpg | 87.9 | 6.6 | ↑ | 5.01 | 5.13 | 3.79 | 3.97 | 4.47 | CVPR2016 validation
7 | 005565.jpg | 25.2 | 10.7 | ↑ | 3.30 | 4.24 | 4.20 | 3.44 | 3.79 | CVPR2016 validation
8 | 005565t.jpg | 25.2 | 10.7 | ↑ | 3.29 | 4.55 | 3.58 | 2.29 | 3.43 | CVPR2016 validation
9 | 000962t.jpg | 74.1 | 11.2 | ↑ | 4.06 | 4.04 | 3.23 | 3.26 | 3.65 | CVPR2016 training
10 | 000962.jpg | 74.1 | 11.2 | ↑ | 4.06 | 3.85 | 2.14 | 3.43 | 3.37 | CVPR2016 training
11 | augmentation image 2084.jpg | 7 | 5.2 | ↑ | 10.05 | 0.82 | 1.96 | 1.60 | 3.61 | ICCV2015 validation
12 | image 2084.jpg | 7 | 5.2 | ↑ | 8.73 | 0.97 | 1.49 | 1.63 | 3.21 | ICCV2015 validation
13 | 002914.jpg | 7.1 | 5.2 | ↑ | 8.64 | 0.30 | 1.58 | 1.54 | 3.02 | CVPR2016 training
14 | 002914t.jpg | 7.1 | 5.2 | ↑ | 9.97 | 0.24 | 1.87 | 1.52 | 3.40 | CVPR2016 training
15 | 003260.jpg | 86.9 | 5.5 | ↑ | 3.96 | 4.85 | 1.16 | 2.32 | 3.07 | CVPR2016 training
16 | 003260t.jpg | 86.9 | 5.5 | ↑ | 3.96 | 4.60 | 1.16 | 2.27 | 3.00 | CVPR2016 training
17 | image 432.jpg | 51 | 4.7 | ↓ | 2.06 | 1.88 | 2.01 | 2.79 | 2.18 | ICCV2015 training
18 | augmentation image 432.jpg | 51 | 4.7 | ↓ | 1.94 | 1.98 | 1.95 | 2.23 | 2.02 | ICCV2015 training
19 | 002376t.jpg | 55.1 | 7.7 | ↓ | 2.27 | 2.18 | 2.16 | 1.93 | 2.14 | CVPR2016 training
20 | 002376.jpg | 55.1 | 7.7 | ↓ | 2.09 | 2.29 | 2.11 | 1.57 | 2.01 | CVPR2016 training
21 | | | | | | | | | | CVPR2016 training
22 | image 4725.jpg | 31 | 2.2 | ↑ | 0.94 | 5.30 | 0.03 | 0.02 | 1.57 | ICCV2015 training
23 | augmentation image 4725.jpg | 31 | 2.2 | ↑ | 0.41 | 5.35 | 0.01 | 0.04 | 1.45 | ICCV2015 training
24 | 001265t.jpg | 31.1 | 2.2 | ↑ | 0.37 | 5.26 | 0.03 | 0.02 | 1.42 | CVPR2016 training
25 | 001265.jpg | 31.1 | 2.2 | ↑ | 0.63 | 5.19 | 0.17 | 0.02 | 1.50 | CVPR2016 training

Table 2: Images with the 20 largest average CDs and those with a maximum CD over 4
Figure 10: Information of the age estimation data set from ICCV2015 and CVPR2016. Sub-figure (a) shows the age distribution (age versus number of cases); sub-figure (b) shows confident age labels.
From Table 2, it may be difficult to decide whether the age labels of the elderly people deviate (Nos. 3-6, 9-10, 15-16), whereas it is easier for the children. For example, the age of the first image (Nos. 1-2, age = 11.3) should intuitively be between 2 and 5, so the label is inaccurate. Although the 9-th and 10-th images (Nos. 17-20) are identical, their features differ slightly among the four sets of representations; moreover, the subset partition is random, so these images (Nos. 17-20) do not have exactly the same CD. The same holds for images Nos. 11-14 and Nos. 22-25. The apparent age of the 6-th and 7-th images (Nos. 11-12, age = 7; Nos. 13-14, age = 7.1) should be no more than 5, and it is inappropriate to assign age = 31.1 to the last two images (Nos. 21-25). These images have notably overestimated labels. In contrast, the ages in images Nos. 17-20 appear to be over 60, so the labels (51 and 55.1) are likely underestimated. The results indicate that the proposed CDF filter can effectively identify inaccurate labels in the apparent age data set, given proper feature representations.
The relative size of the filtered data set is about 82% (15108 samples). The change of the age distribution is shown in Figure 11. The number of removed samples for each age label (1-100) is plotted in the figure, and the horizontal dotted line marks the threshold of 30. The age labels losing more than 30 samples range from 16 to 61. The smoothed density curves before and after filtering differ mainly within this same range, which means the CDF tends to drop samples in the high-density area. Thus the CDF does not destroy the initial age distribution and is a safe filter.
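The per-label removal counts behind Figure 11 could be recomputed roughly as follows; the helper is only an illustrative sketch and assumes integer-rounded age labels in 1-100.

import numpy as np

def removed_per_label(ages_before, ages_after, max_age=100, threshold=30):
    # Count removed samples per age label and flag labels losing many samples.
    before = np.bincount(np.asarray(ages_before), minlength=max_age + 1)
    after = np.bincount(np.asarray(ages_after), minlength=max_age + 1)
    removed = before - after
    heavy = np.nonzero(removed > threshold)[0]   # labels losing > 30 samples
    return removed, heavy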
For an imbalanced data set, the filtering can also be executed on subsets separately. For example, the age data set can be partitioned into three subsets according to the initial age density: [0-15], [16-60], [60-100]. As described in Subsection 2.3.2, the optimal relative size decreases with the sample size, so more samples will be removed by the CDF on the second subset ([16-60]), which has many more samples than the other two.
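A stratified variant of this idea could look like the sketch below, where cdf_filter is a hypothetical callable implementing Algorithm 1 and returning a boolean mask of retained samples; the bin edges approximate the partition mentioned above.

import numpy as np

def filter_by_age_strata(X, y, cdf_filter, edges=(16, 61)):
    # Run the noise filter separately on age strata (sketch).
    X, y = np.asarray(X), np.asarray(y)
    keep = np.zeros(len(y), dtype=bool)
    strata = np.digitize(y, edges)               # 0: [0,16), 1: [16,61), 2: [61,100]
    for s in np.unique(strata):
        idx = np.where(strata == s)[0]
        keep[idx] = cdf_filter(X[idx], y[idx])   # filter each stratum on its own
    return X[keep], y[keep]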
Figure 11: Change of the age distribution in filtering. Over the age axis, the plot shows the density before filtering, the density after CDF5 filtering, and the number of removed samples per age label.
4.3 Age Prediction on Real Data Set wiki
To evaluate the effectiveness of filtering, the models trained on the competition data set (ICCV2015 and CVPR2016) are tested on the real data set wiki, built from Wikipedia (Rothe et al., 2018). The raw wiki data set is preprocessed by face detection and label validity checks: a cascaded convolutional network is employed to detect faces and landmarks (Huang et al., 2018), the feature representation for the wiki images is obtained with the same fine-tuned deep models as for the training data (Huo et al., 2016), and ages in [0, 100] are considered valid labels. The final test set has 29930 samples.
Figure 12: Prediction error comparison. Each sub-figure plots, for one testing model (kNN, RF, NNet), the MAE of NoF against the MAE of CDF5 on each testing sample.
Three widely used models, kNN (k = 5), random forest (RF, 100 trees) and a neural network (NNet, 30 neurons), are trained on the data sets before (no filtering, NoF) and after CDF5 (J = 5) filtering, and the test error is measured by the mean absolute error (MAE). The test errors of NoF and CDF5 are compared in Figure 12, where the red diagonal marks equal errors and the blue lines bound an error difference of 3. Most points lie around the diagonal and most MAE values are less than 20.
The number of samples with an error difference over 3 is marked in the corresponding area. For example, in Figure 12(a) there are 493 samples whose NoF error exceeds the CDF error + 3, while only 116 samples have a NoF error below the CDF error - 3. Hence CDF5 outperforms NoF on samples with an error difference over 3. The results for the RF and NNet models are similar to those of kNN (163 > 22, 385 > 259).
A detailed error comparison of NoF and CDF5 is carried out on three test sets. The first consists of all available testing samples, the second contains the samples whose NoF MAE is over 5 (above the lower dotted line in each sub-figure of Figure 12), and the last contains those with NoF MAE over 10 (above the upper dotted line in Figure 12). Note that the last two sets may vary with the testing model. Table 3 lists the results, including the win-tie-lose ratio, the average error (± Std) on the testing samples, the relative reduction, and the P-value of the rank-sum test. The relative reduction is calculated as 1 − MAE_CDF5 / MAE_NoF. Reductions over 1% and P-values under 0.05 are bolded.
Test set | Model | #Samples | CDF5 vs. NoF (win / tie / lose) | MAE NoF | MAE CDF5 | Relative reduction | P-value
All available | kNN | 29930 | 38.28% / 23.72% / 38.00% | 5.38±4.43 | 5.29±4.19 | 1.63% | 0.561
All available | RF | 29930 | 50.07% / 0.00% / 49.93% | 5.39±4.30 | 5.35±4.20 | 0.66% | 0.934
All available | NNet | 29930 | 50.93% / 0.00% / 49.07% | 5.41±4.43 | 5.40±4.36 | 0.03% | 0.641
MAE(NoF)>5 | kNN | 13342 | 41.84% / 22.96% / 35.20% | 9.07±4.12 | 8.79±3.80 | 2.97% | 0.000
MAE(NoF)>5 | RF | 13516 | 52.74% / 0.00% / 47.26% | 9.00±3.83 | 8.87±3.73 | 1.37% | 0.046
MAE(NoF)>5 | NNet | 13369 | 50.91% / 0.00% / 49.09% | 9.10±4.13 | 9.02±4.03 | 0.81% | 0.800
MAE(NoF)>10 | kNN | 3745 | 46.33% / 23.60% / 30.07% | 14.12±4.50 | 13.28±4.14 | 5.99% | 0.000
MAE(NoF)>10 | RF | 3807 | 61.15% / 0.00% / 38.85% | 13.77±3.96 | 13.43±3.87 | 2.50% | 0.000
MAE(NoF)>10 | NNet | 3819 | 57.29% / 0.00% / 42.71% | 14.11±4.43 | 13.76±4.29 | 2.44% | 0.004

Table 3: MAE comparison of NoF and CDF5 on the wiki data set
It is clear from Table 3 that the average error of CDF5 is less than that of NoF in all cases. For all samples, the two sets of errors have no significant difference. For the second subset, the CDF5 error is significantly less than the NoF error for kNN and RF. For the last subset, the error difference between CDF5 and NoF is significant for all models. Generally speaking, the CDF filter could significantly reduce the prediction error for hard-to-learn samples.
Furthermore, the CDF filter is superior to NoF in efficiency as it has a smaller data size and a shorter training time. For example, the total time of CDF5 in filtering and testing (9.1+219.3 seconds) is less than that of NoF (0+351.5 seconds) for the NNet model.
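For reference, the relative reduction and rank-sum P-values reported in Table 3 can be computed from two arrays of per-sample absolute errors as sketched below; the function and its arguments are illustrative, not the authors' evaluation code.

import numpy as np
from scipy.stats import ranksums

def compare_filters(err_nof, err_cdf, hard_threshold=None):
    # MAE, relative reduction and rank-sum P-value, as reported in Table 3.
    err_nof, err_cdf = np.asarray(err_nof), np.asarray(err_cdf)
    if hard_threshold is not None:               # e.g. 5 or 10: hard samples only
        mask = err_nof > hard_threshold
        err_nof, err_cdf = err_nof[mask], err_cdf[mask]
    mae_nof, mae_cdf = err_nof.mean(), err_cdf.mean()
    reduction = 1.0 - mae_cdf / mae_nof          # relative reduction
    p_value = ranksums(err_nof, err_cdf).pvalue  # Wilcoxon rank-sum test
    return mae_nof, mae_cdf, reduction, p_value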
5. Experiments and Analysis
In this section, we empirically study the performance of the proposed CDF filter on benchmark data sets. We present our experimental framework, empirical results, and analysis.
5.1 Experimental Framework
For each data set and filter, the overall process is shown in Figure 13. Firstly, the original data set D is randomly partitioned into two parts D_A and D_B with a size ratio of 8:2. The polluted set D_Ap is obtained by artificially adding noise to the first part D_A. Then D_Ap is filtered, yielding a cleaner set D_Af. Finally, the model is trained on D_Af and tested on the second part D_B. The above steps are repeated ten times owing to the randomness in data partition and noise addition.
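One repetition of this protocol could be sketched as follows; add_noise, filter_fn, fit and mse are hypothetical callables standing in for the noise injection, the filter under test, model training, and the MSE evaluation, so only the 8:2 split and the overall flow are spelled out.

import numpy as np

def run_trial(X, y, add_noise, filter_fn, fit, mse, rng):
    # One repetition of the protocol in Figure 13 (sketch).
    n = len(y)
    idx = rng.permutation(n)
    cut = int(0.8 * n)
    tr, te = idx[:cut], idx[cut:]            # D_A : D_B = 8 : 2
    y_tr_noisy = add_noise(y[tr], rng)       # polluted set D_Ap
    keep = filter_fn(X[tr], y_tr_noisy)      # boolean mask -> filtered D_Af
    model = fit(X[tr][keep], y_tr_noisy[keep])
    return mse(y[te], model(X[te]))          # test on the clean part D_B

# The paper repeats this ten times per data set, filter and noise setting:
# scores = [run_trial(X, y, add_noise, filter_fn, fit, mse,
#                     np.random.default_rng(seed)) for seed in range(10)]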
Task | Regression | Ordinal classification/regression
Data | 10 data sets (Table 5) | 17 data sets (Table 6)
Noise distribution / noise rule | Uniform 1: U(−0.5, 0.5); Uniform 2: U(−0.8, 0.8); Gaussian 1: N(µ = 0, σ = 0.5); Gaussian 2: N(µ = 0, σ = 0.8); Laplace 1: Lp(µ = 0, σ = 0.5); Laplace 2: Lp(µ = 0, σ = 0.8); Mixture 1: N(µ = 1, σ = 0.3) + N(µ = −1, σ = 0.3); Mixture 2: N(µ = 1, σ = 0.2) + N(µ = −0.8, σ = 0.4) | Label flip y_i → y_j with y_i ≠ y_j and P(y_i → y_j) ∝ #{y_j}/n
Noise ratio | 0%, 10%, 20%, 30%, 40% | 0%, 10%, 20%, 30%
CDF (par.) | CDF5 (J = 5), CDF6 (J = 6), CDF7 (J = 7) | CDF5 (J = 5), CDF6 (J = 6), CDF7 (J = 7)
Competitors (par.) | NoF (no filtering), RegENN/Reg (threshold θ = 5, number of neighbors: 9), DiscENN/Disc (number of neighbors: 9), MI (threshold α = 0.05, number of neighbors: 6) | NoF (no filtering), ENN (number of neighbors: 3), ANN (number of neighbors: 3), RegENN/Reg (threshold α = 5, number of neighbors: 9), DiscENN/Disc (number of neighbors: 9), CF (number of folds: 10), MVF (number of folds: 4), IPF (number of subsets: 5), HARF (agreement level: 70%), INFFC (threshold: 0)
Testing model | kNN, NNet, RF | SVC1V1, NNOP
Hyper-parameters | No. of neighbors (kNN): k ∈ {1, 3, 5, 7, 9}; No. of hidden neurons (NNet): H ∈ {10, 20, 30, 40}; No. of trees (RF): T ∈ {50, 100, 150, 200} | Kernel width (SVC1V1): σ ∈ {10^-3, 10^-2, ..., 10^3}; No. of hidden neurons (NNOP): H ∈ {10, 20, 30, 40}
Evaluation | MSE | MAE, MZE

Table 4: Experimental settings
The detailed experimental settings are listed in Table 4.
• Data sets. 27 benchmark data sets are employed for supervised learning. All data sets in Table 5 are regression problems (Dua and Graff, 2017; Chang and Lin, 2016), and those in Table 6 are prepared for ordinal classification/regression (Gutierrez et al., 2016; Sanchez-Monedero et al., 2019). Note that the first ten data sets in Table 6 are derived from regression data sets by discretizing the output. All numerical features have been scaled to [−1, 1].
No. | Data set | #Sample | #Feature
1 | Yacht Hydrodynamics | 308 | 7
2 | Housing | 506 | 14
3 | Energy efficiency | 768 | 8
4 | Concrete | 1030 | 9
5 | Geographical Original of Music | 1059 | 68
6 | MG | 1385 | 6
7 | Airfoil self-noise | 1503 | 6
8 | Space ga | 3107 | 6
9 | Skill Craft Master Table | 3395 | 20
10 | Parkinsons Telemonitoring | 5875 | 26

Table 5: Regression data sets
No. | Data set | Type | #Sample | #Feature | #Class | Class distribution
1 | Housing5 | Discretised | 506 | 14 | 5 | ≈101 per class
2 | Stock5 | Discretised | 700 | 9 | 5 | 140 per class
3 | Abalone5 | Discretised | 4177 | 11 | 5 | ≈836 per class
4 | Bank5 | Discretised | 8192 | 8 | 5 | ≈1639 per class
5 | Bank5′ | Discretised | 8192 | 32 | 5 | ≈1639 per class
6 | Computer5 | Discretised | 8192 | 12 | 5 | ≈1639 per class
7 | Computer5′ | Discretised | 8192 | 21 | 5 | ≈1639 per class
8 | Cal.housing5 | Discretised | 20640 | 8 | 5 | 4128 per class
9 | Census5 | Discretised | 22784 | 8 | 5 | ≈4557 per class
10 | Census5′ | Discretised | 22784 | 16 | 5 | ≈4557 per class
11 | Balance-Scale | Real | 625 | 4 | 3 | 288-49-288
12 | SWD | Real | 1000 | 10 | 4 | 32-352-399-217
13 | Car | Real | 1728 | 21 | 4 | 211-384-69-65
14 | Eucalyptus | Real | 736 | 91 | 5 | 180-107-130-214-105
15 | LEV | Real | 1000 | 4 | 5 | 93-280-403-197-27
16 | Wine quality-Red | Real | 1599 | 11 | 6 | 10-53-681-638-199-18
17 | ERA | Real | 1000 | 4 | 9 | 92-142-181-172-158-118-88-31-18

Table 6: Ordinal classification data sets
• Noises. All the original data sets are assumed to be unpolluted, and noise is artificially added to the output. There are 8 kinds of noise distributions for regression; the last two are mixtures of Gaussian distributions, with 50% of the noises drawn from each component, and the last mixture is asymmetric. In ordinal classification, the label is randomly changed to another label according to the noise ratio. And the tran