Notes on Model Evaluation and Selection
A model evaluation measure is a quantitative statement of how well a particular classifier meets the goals of the supervised learning endeavour[^1]. In this research project the performance of each hypothesised classifier is reported by means of several evaluation measures: the confusion matrix, classification accuracy, precision and recall. Each reported evaluation measure is estimated using 10-fold cross-validation[^2]. That is, an evaluation measure is calculated as the average of 10 individual evaluation measures estimated on independently trained and tested classifiers. The evaluation measures used in this dissertation are outlined in the next three subsections.
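As a concrete illustration of this averaging, the sketch below estimates one such measure (classification accuracy) by 10-fold cross-validation; the scikit-learn classifier and the synthetic data are illustrative stand-ins and are not the classifiers or data studied in this project.

```python
# Minimal sketch: estimating an evaluation measure (here accuracy) as the
# average over 10 independently trained and tested classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

# Synthetic three-class data as a stand-in for the project's data set.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

fold_scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000)  # a fresh classifier is trained on each fold
    clf.fit(X[train_idx], y[train_idx])
    fold_scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(np.mean(fold_scores))  # the reported evaluation measure is the 10-fold average
```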
Confusion matrix
A confusion matrix is a matrix of size $K \times K$, where $K$ is the number of classes considered in the classification problem, that shows the relation between the predicted class labels and the actual class labels[^3]. The confusion matrix is particularly useful for determining the extent of misclassification per class. A classifier may perform extremely well in correctly classifying instances belonging to one class, and extremely poorly in correctly classifying instances belonging to another class. The value of a classifier that identifies some classes well and other classes poorly depends on the application domain.
A confusion matrix for a three-class classification problem is shown in the table below. The number of instances actually belonging to class 1 that are also predicted as belonging to class 1 is denoted $a$; the total number of instances in the testing set actually belonging to class 1 is $a + b + c$; while the total number of instances in the testing set predicted to belong to class 1 is $a + d + g$.
| Actual \ Predicted | Class 1 | Class 2 | Class 3 | Total | Recall |
|---|---|---|---|---|---|
| Class 1 | $a$ | $b$ | $c$ | $a+b+c$ | $a/(a+b+c)$ |
| Class 2 | $d$ | $e$ | $f$ | $d+e+f$ | $e/(d+e+f)$ |
| Class 3 | $g$ | $h$ | $i$ | $g+h+i$ | $i/(g+h+i)$ |
| Total | $a+d+g$ | $b+e+h$ | $c+f+i$ | | |
| Precision | $a/(a+d+g)$ | $e/(b+e+h)$ | $i/(c+f+i)$ | | |
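The sketch below shows one way such a matrix can be assembled by counting (actual, predicted) label pairs; the helper name `confusion_matrix` and the toy label vectors are illustrative assumptions, with rows indexing the actual class and columns the predicted class, as in the table above.

```python
# Sketch: assembling a K x K confusion matrix by counting (actual, predicted) pairs.
import numpy as np

def confusion_matrix(actual, predicted, n_classes):
    """Rows index the actual class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for a, p in zip(actual, predicted):
        cm[a, p] += 1
    return cm

actual = [0, 0, 1, 1, 2, 2, 2]      # illustrative integer class labels 0..K-1
predicted = [0, 1, 1, 1, 2, 0, 2]
cm = confusion_matrix(actual, predicted, n_classes=3)
print(cm)              # cm[k, k] counts correctly classified class-k instances
print(cm.sum(axis=1))  # row totals: number of instances actually in each class
```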
Precision and recall
There are four possible classification outcomes for every instance in the data set[^4]. A classifier can assign an instance to class $k$ when it does in actual fact belong to class $k$; this is counted as a true positive ($TP_k$). If the classifier assigns the instance to class $k$ when it does not actually belong to class $k$, it is counted as a false positive ($FP_k$). An instance not assigned to class $k$ that does not belong to class $k$ is counted as a true negative ($TN_k$), whereas an instance that is not assigned to class $k$ but does belong to class $k$ is counted as a false negative ($FN_k$).
For every class $k$ in the $K$-class classification problem a precision rate can be calculated as[^5]

$$P_k = \frac{TP_k}{TP_k + FP_k},$$

and a recall rate can be calculated as

$$R_k = \frac{TP_k}{TP_k + FN_k}.$$
The precision rate measures the probability that an instance classified as belonging to class $k$ is in fact a class $k$ instance. The precision rate for class 1 in the table above can be calculated as $a/(a+d+g)$. The recall rate summarizes a classifier's ability to identify an instance as belonging to class $k$ if it does belong to class $k$. The recall rate for class 1 in the table can be calculated as $a/(a+b+c)$. Similarly, precision and recall can be calculated for each class. The average precision and recall rates for a classifier are calculated as the averages over all the individual class precision and recall rates.
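A minimal sketch of these calculations, assuming a confusion matrix laid out as in the table above (rows actual, columns predicted); the counts are made up for illustration.

```python
# Sketch: per-class precision and recall from a confusion matrix with
# rows = actual classes and columns = predicted classes (made-up counts).
import numpy as np

cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 44]])

tp = np.diag(cm)                 # true positives per class
precision = tp / cm.sum(axis=0)  # column sums: all instances predicted as class k
recall = tp / cm.sum(axis=1)     # row sums: all instances actually in class k

# Classifier-level rates: the average over the individual class rates.
print(precision, recall)
print(precision.mean(), recall.mean())
```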
Model accuracy
Accuracy, sometimes called the correct classification rate, can be calculated as the number of correct classifications made by the model over the test set, divided by the total number of instances in the test set[^6]. From the confusion matrix in the table above, the classification accuracy of the classifier can be calculated as $(a+e+i)/(a+b+c+d+e+f+g+h+i)$.
The classification accuracy is related to the empirical loss (with a 0-1 loss function) estimated in cross-validation[^2]. The empirical loss measures the rate of misclassification whereas the classification accuracy measures the rate of correct classification, formulated as

$$\text{accuracy} = 1 - \frac{1}{n}\sum_{j=1}^{n} I\left(\hat{y}_j \neq y_j\right),$$

where $n$ is the number of instances in the test set, $y_j$ and $\hat{y}_j$ denote the actual and predicted class labels of instance $j$, and $I(\cdot)$ is the indicator function.
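The relationship can be checked directly on a confusion matrix; the sketch below reuses the made-up matrix from the previous section and treats accuracy as the complement of the empirical 0-1 loss.

```python
# Sketch: classification accuracy as the complement of the empirical 0-1 loss,
# computed on the same made-up confusion matrix as above.
import numpy as np

cm = np.array([[50,  3,  2],
               [ 4, 40,  6],
               [ 1,  5, 44]])

accuracy = np.trace(cm) / cm.sum()  # (a + e + i) divided by the total instance count
empirical_loss = 1.0 - accuracy     # rate of misclassification under 0-1 loss
print(accuracy, empirical_loss)
```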
Model comparison
After evaluating each hypothesised classifier built as part of the knowledge discovery endeavour, the performance measures can be compared to select the best model. In comparing the performance of two classifiers, the best classifier can be selected based on the perceived difference in the observed evaluation measures. Alternatively, a more scientific approach can be taken to statistically test the significance of the observed performance difference between two classifiers[^7]. A statistical hypothesis test tries to determine whether one can expect the same results from a repeated study or whether the observed results can be attributed to chance.
To compare the performance of two classifiers $A$ and $B$, a $k$-fold cross-validation $t$-test is used to test the null hypothesis that the two classifiers perform equally well[^8]. That is, the null hypothesis states that any observed difference in performance can be attributed to chance, whereas the alternative hypothesis states that there is a genuine difference in performance. If the two algorithms are statistically different, the null hypothesis will be rejected in favour of the alternative hypothesis.
As stated already, the classification accuracy of an induced classifier is estimated using $k$-fold cross-validation. For each subset $i$, accuracy measures $acc_A^{(i)}$ and $acc_B^{(i)}$ are estimated for classifiers $A$ and $B$ respectively. The difference in the estimated accuracies of classifiers $A$ and $B$ on subset $i$ is $d_i = acc_A^{(i)} - acc_B^{(i)}$. The differences in the estimated classifier accuracies are assumed to be normally distributed with mean

$$\bar{d} = \frac{1}{k}\sum_{i=1}^{k} d_i$$

and variance

$$s_d^2 = \frac{1}{k-1}\sum_{i=1}^{k}\left(d_i - \bar{d}\right)^2.$$

Under the null hypothesis the test statistic

$$t = \frac{\bar{d}}{s_d/\sqrt{k}}$$
has a Student's $t$-distribution with $k-1$ degrees of freedom. From the Student's $t$-distribution the probability of obtaining a test statistic greater than or equal to the observed statistic, when assuming the null hypothesis is true, can be obtained. If the two-tailed $p$-value is less than the chosen level of significance $\alpha$, then the null hypothesis can be rejected in favour of the two-tailed alternative hypothesis $H_1: \mu_d \neq 0$, where $\mu_d$ denotes the true mean accuracy difference. That is, reject the null hypothesis if $p < \alpha$. The alternative hypothesis states that the accuracy of classifier $A$ and the accuracy of classifier $B$ are statistically different from one another. Instead of explicitly specifying a level of significance, usually chosen as $\alpha = 0.05$, the $p$-value can also be interpreted as the smallest significance level at which the null hypothesis will be rejected[^9].
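A sketch of the test on two vectors of per-fold accuracies follows; the accuracy values are fabricated purely for illustration, and `scipy.stats.ttest_rel` offers an equivalent one-call paired test.

```python
# Sketch: k-fold cross-validation paired t-test on per-fold accuracies.
# The accuracy vectors are fabricated numbers used purely for illustration.
import numpy as np
from scipy import stats

acc_a = np.array([0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.81, 0.79])
acc_b = np.array([0.77, 0.78, 0.80, 0.76, 0.79, 0.75, 0.81, 0.77, 0.78, 0.76])

d = acc_a - acc_b                                 # per-fold accuracy differences
k = len(d)
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(k))  # test statistic with k-1 degrees of freedom
p_value = 2 * stats.t.sf(abs(t_stat), df=k - 1)   # two-tailed p-value

# scipy.stats.ttest_rel(acc_a, acc_b) gives the same paired test in one call.
print(t_stat, p_value)
```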
As alternatives to the $k$-fold cross-validation $t$-test, McNemar's test and variations of the cross-validation test were also considered. McNemar's test is not suitable as it does not account for the variability due to the choice of training set or due to the internal randomness of the induction algorithm [^book_kuncheva]. The variations of the cross-validation test have not been widely adopted in the data mining field, with the $k$-fold cross-validation $t$-test remaining the most popular procedure[^10].
Notes
- Published as part of Wilgenbus, E.F., 2013. The file fragment classification problem: a combined neural network and linear programming discriminant model approach. Master's thesis, North-West University.
Footnotes
[^1]: Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., 1996. From data mining to knowledge discovery in databases. AI Magazine 17 (3), 37.
[^2]: See the post on cross-validation.
[^3]: Kohavi, R., Provost, F., 1998. Glossary of terms. Machine Learning 30, 271-274.
[^4]: Fawcett, T., 2006. An introduction to ROC analysis. Pattern Recognition Letters 27 (8), 861-874.
[^5]: Ozgur, A., Ozgur, L., Gungor, T., 2005. Text categorization with class-based and corpus-based keyword selection. In: International Symposium on Computer and Information Sciences. Istanbul, Turkey, pp. 606-615.
[^6]: Olson, D. L., Shi, Y., 2006. Introduction to Business Data Mining. Irwin-McGraw-Hill Series: Operations and Decision Sciences. McGraw-Hill.
[^7]: Kumar, R., 2005. Research Methodology: A Step-by-Step Guide for Beginners. SAGE Publications.
[^8]: See Salzberg, S. L., 1997. On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery 1 (3), 317-328. Also see Bouckaert, R. R., 2003. Choosing between two learning algorithms based on calibrated tests. In: Proceedings of the Twentieth International Conference on Machine Learning. Washington, DC, United States of America, pp. 51-58.
[^9]: Rice, J. A., 2007. Mathematical Statistics and Data Analysis. Duxbury Advanced Series. Brooks/Cole.
[^10]: Refaeilzadeh, P., Tang, L., Liu, H., 2009. Cross-validation. In: Encyclopedia of Database Systems. Springer, United States.