In this issue

Rachel Harrison

© Springer Science+Business Media, LLC, part of Springer Nature 2020

I hope that all our readers remain well and positive as we approach the winter season. In this issue, we have eight regular research papers and a special section. The first three regular research papers are linked by the common theme of software defects; these are followed by two papers on prediction and two papers on software requirements. The final paper reports the results of a survey on asset selection.

Software defects are a major concern for industry. In “The effect of Bellwether analysis on software vulnerability severity prediction models”, Patrick Kwaku Kudjo, Jinfu Chen, Solomon Mensah, Richard Amankwah and Christopher Kudjo describe an algorithm to identify and select an exemplary subset of data for use as a training set, yielding improved prediction accuracy. Their experimental results show that their approach improves on the usual benchmarks.

Continuing with the theme of defects, the paper “A public unified bug dataset for Java and its assessment regarding metrics and bug prediction”, by Rudolf Ferenc, Zoltán Tóth, Gergely Ladányi, István Siket and Tibor Gyimóthy, brings together public source code bug datasets and unifies their contents. The authors used a decision tree algorithm to demonstrate the dataset’s capabilities for bug prediction. The result is a unified dataset that is publicly available for everyone to use.

In “A classification and systematic review of feature model defects”, Megha Bhushan, Arun Negi, Piyush Samant, Shivani Goel and Ajay Kumar present a review and report their findings on key research issues related to feature model defects in product lines. This should help developers to identify the types of defects and their causes.

Turning to prediction, the paper “Predicting technical debt from commit contents: reproduction and extension with automated feature selection”, by Leevi Rantala and Mika Mäntylä, investigates sub-optimal development solutions that are expressed in written code comments or commits.
As a result, the authors have produced a list of predictor words that correlate positively with self-admitted technical debt.

Prediction can be difficult if the dataset is imbalanced. In “An empirical study on predictability of software maintainability using imbalanced data”, Ruchika Malhotra and Kusum Lata present empirical work to improve software maintainability prediction models that have been developed with machine learning techniques using imbalanced data. The authors recommend the safe-level synthetic minority oversampling technique as a useful method for dealing with imbalanced datasets.

Software requirements are the basis of all development, but it remains challenging to elicit and assess requirements. The paper “What lies behind requirements? A quality assessment of stateme

* Rachel Harrison [email protected]

1 School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford OX33 1HX, UK

Software Quality Journal