On the need of preserving order of data when validating within-project defect classifiers
Davide Falessi 1 · Jacky Huang 2 · Likhita Narayana 2 · Jennifer Fong Thai 2 · Burak Turhan 3,4
© The Author(s) 2020
Abstract
We put ourselves in the shoes of a practitioner who uses data from previous project releases to predict which classes of the current release are defect-prone. In this scenario, the practitioner would like to use the most accurate of the many available classifiers. A validation technique, hereinafter "technique", defines how to measure the prediction accuracy of a classifier. Several previous research efforts have analyzed multiple techniques. However, no previous study has compared validation techniques in the within-project across-release class-level context or considered techniques that preserve the order of data. In this paper, we investigate which technique recommends the most accurate classifier. We use the last release of a project as the ground truth to evaluate the classifier's accuracy and hence the ability of a technique to recommend an accurate classifier. We consider nine classifiers, two industry and 13 open-source projects, and three validation techniques: 10-fold cross-validation (i.e., the most used technique), bootstrap (i.e., the recommended technique), and walk-forward (i.e., a technique preserving the order of data). Our results show that: 1) classifiers differ in accuracy in all datasets regardless of their entity per value, 2) walk-forward statistically outperforms both 10-fold cross-validation and bootstrap in all three accuracy metrics: AUC of the selected classifier, bias, and absolute bias, 3) surprisingly, all techniques turned out to be more prone to overestimating than to underestimating the performance of classifiers, and 4) the defect rate changed between the first and second halves of the data in both industry projects and in 83% of open-source datasets. Given these empirical results, and given that walk-forward is by nature simpler, less expensive, and more stable than the other two techniques, this study recommends the use of techniques that preserve the order of data, such as walk-forward, over 10-fold cross-validation and bootstrap in the within-project across-release class-level context.

Keywords: Defect classifiers · Classifiers · Model validation techniques
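To make the walk-forward technique concrete, below is a minimal sketch of how it could be implemented, assuming releases are available as a chronologically ordered list of (features, labels) pairs and that the classifier follows the scikit-learn API. The names releases, clf, and walk_forward_auc are illustrative, not taken from the paper.

    # Minimal walk-forward validation sketch (assumed setup, not the
    # paper's exact implementation): train on all releases up to i and
    # test on release i, so training data always precedes test data.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def walk_forward_auc(releases, clf):
        # releases: chronologically ordered list of (X, y) pairs, oldest first
        aucs = []
        for i in range(1, len(releases)):
            X_train = np.vstack([X for X, _ in releases[:i]])
            y_train = np.concatenate([y for _, y in releases[:i]])
            X_test, y_test = releases[i]
            clf.fit(X_train, y_train)
            scores = clf.predict_proba(X_test)[:, 1]
            aucs.append(roc_auc_score(y_test, scores))
        return float(np.mean(aucs))

In contrast to 10-fold cross-validation and bootstrap, no class from a later release ever ends up in the training set used to predict an earlier one, which is the ordering property at the heart of this study.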
Guest Editor: Yasutaka Kamei
* Davide Falessi
[email protected]

Extended author information is available on the last page of the article.
1 Introduction

As testing remains one of the most important activities in software engineering, predicting which components are likely to be defective is vital to prioritizing test cases and allocating effort. The software engineering community has made significant advances in classifiers, and more advances are probably on their way. Prediction models can support test resource allocation by predicting the existence of defects in a software module (e.g., a class). Specifically, classifiers aim to estimate a categorical variable, i.e., the presence or absence of at least one defect in a module.
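As a concrete illustration of such a classifier, the sketch below trains on class-level metrics of a past release and ranks the classes of the current release by predicted defect-proneness. The metric values, class names, and the choice of a random forest are hypothetical, for illustration only; the paper itself compares nine classifiers.

    # Hypothetical class-level metrics from a past release: [LOC, complexity, churn]
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    X_past = np.array([[120, 4, 10], [450, 12, 55], [80, 2, 3], [300, 9, 40]])
    y_past = np.array([0, 1, 0, 1])  # 1 = at least one defect was found in the class

    clf = RandomForestClassifier(random_state=0).fit(X_past, y_past)

    # Rank classes of the current release by predicted defect-proneness
    # so that testing effort can be focused on the riskiest classes first.
    X_current = np.array([[200, 7, 20], [95, 3, 5]])
    risk = clf.predict_proba(X_current)[:, 1]
    print(sorted(zip(risk, ["ClassA", "ClassB"]), reverse=True))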