Comparative Analysis of Predictive Techniques for Release Readiness Classification

Proceedings of the 5th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering, pages 15--21. ACM, New York, NY, USA, 2016.
DOI: 10.1145/2896995.2896997

Abstract

Context: A software release is the deployment of a version of an evolving software product. Product managers are typically responsible for deciding the content, time frame, price, and quality of the release. Due to dynamic changes in project and process parameters, this decision is highly complex and of high impact.

Objective: This paper has two objectives: i) a comparative analysis of predictive techniques for classifying an ongoing release in terms of its expected release readiness, and ii) a comparative analysis between regular and ensemble classifiers for the same classification task.

Methodology: We use machine learning classifiers to predict release readiness. We analyzed three OSS projects under the Apache Software Foundation using their JIRA issue repositories. As a retrospective study, we covered a period of 70 months, 85 releases, and 1696 issues. We monitored eight established variables to train classifiers to predict whether releases will be ready versus not ready. The predictive performance of the classifiers was compared by measuring precision, recall, F-measure, balanced accuracy, and area under the ROC curve (AUC).

Results: A comparative analysis among nine classifiers revealed that ensemble classifiers significantly outperform regular classifiers. Balancing precision and recall, Random Forest and BaggedADABoost were the two best performers overall, while Naïve Bayes performed best among the regular classifiers.
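The evaluation metrics named in the abstract (precision, recall, F-measure, and balanced accuracy) can be illustrated with a minimal sketch. The labels and predictions below are hypothetical, not data from the study; "ready" releases are encoded as 1 and "not ready" as 0.

```python
# Hypothetical sketch of the binary evaluation metrics from the abstract.
# 1 = release predicted/labeled "ready", 0 = "not ready".

def confusion(y_true, y_pred):
    """Count true positives, false positives, false negatives, true negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0           # a.k.a. sensitivity
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    # Balanced accuracy averages the per-class recalls, which matters when
    # "ready" and "not ready" releases are imbalanced.
    balanced_accuracy = (recall + specificity) / 2
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "balanced_accuracy": balanced_accuracy}

# Hypothetical release labels and classifier predictions.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
print(metrics(y_true, y_pred))
```

AUC, the fifth metric in the abstract, additionally requires ranked classifier scores rather than hard 0/1 predictions, so it is omitted from this sketch.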
