Learning to Predict Software Testability Axiomatically

Authors
School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran.
Abstract
Software testability is the propensity of code to reveal its existing faults, particularly during automated testing. Testing success depends heavily on the testability of the program under test. At the same time, it relies on the test coverage achieved by a given test data generation algorithm. However, little empirical evidence exists to clarify whether and how software testability affects test coverage. This article proposes an approach to measuring testability based on an axiomatic test quality metric, namely branch coverage. The main difficulty is determining branch coverage before the actual testing is performed, so as to avoid the cost and time imposed by program execution. To address this problem, we leverage machine learning classification techniques to predict the extent to which a class under test can be covered, based on static source code metrics. Automatic test data generation is applied to compute branch coverage, a concrete proxy for quantifying source code testability, for a large corpus of 110 Java projects comprising 23,000 classes. Each class is represented by an extensive set of software metrics used in the classification process. Our experiments show an acceptable accuracy of 81.94% in predicting software testability.
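The pipeline summarized above (static metrics in, predicted coverability out) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the random-forest classifier, the synthetic metric matrix, and the binary "coverable" label are all assumptions chosen to make the example self-contained.

```python
# Hypothetical sketch of the described approach: predict whether a class under
# test will reach high branch coverage, using only static source code metrics.
# The data here is synthetic; in the paper, metrics come from real Java classes
# and labels from branch coverage measured via automatic test data generation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-class static metrics (e.g. lines of code,
# cyclomatic complexity, coupling); each row represents one Java class.
n_samples, n_metrics = 1000, 10
X = rng.normal(size=(n_samples, n_metrics))

# Synthetic binary label: 1 = "coverable" (high branch coverage achieved),
# 0 = "hard to cover"; here it depends on the first two metrics plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

# Train a classifier on part of the corpus and evaluate on held-out classes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

The key design point this mirrors is that no program execution is needed at prediction time: once the model is trained, testability is estimated from static metrics alone, deferring the expensive test data generation step to label collection only.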

Keywords