FPR: is high. Since our model predicts everything as 1, there are a large number of FP, which signals that this is not a good classifier/model.
AUC score: is very low and gives the true picture of the evaluation here.
Accuracy: is very high, even when TN = 0. Since the data is imbalanced (a high number of positive-class samples), the numerator, i.e. TN + TP, is high.
Precision: is very high. Since the data has a disproportionately high number of positive cases, the ratio TP/(TP+FP) becomes high.
Recall: is very high. Since the data has a disproportionately high number of positive cases, the ratio TP/(TP+FN) becomes high.
F1-score: is very high. The high values of Precision and Recall make the F1-score misleading.
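The effect above can be sketched with a small hypothetical dataset (the 90/10 split and the all-ones classifier are assumptions for illustration, not from the original notes):

```python
# Hypothetical imbalanced data: 90 positives, 10 negatives,
# and a trivial classifier that predicts 1 for every sample.
y_true = [1] * 90 + [0] * 10
y_pred = [1] * 100

# Confusion-matrix counts
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 90
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 10
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 0
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # 0

accuracy  = (tp + tn) / len(y_true)        # 0.90 -- high even though TN = 0
precision = tp / (tp + fp)                 # 0.90 -- inflated by the positive majority
recall    = tp / (tp + fn)                 # 1.00 -- trivially perfect
f1 = 2 * precision * recall / (precision + recall)  # ~0.947 -- misleading
fpr = fp / (fp + tn)                       # 1.00 -- exposes the bad classifier
print(accuracy, precision, recall, f1, fpr)
```

Note how every "optimistic" metric looks excellent while the FPR of 1.0 reveals that the model never correctly rejects a negative.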
2. negative class(0) > positive class(1)
Precision: is very low because of the high number of FP. The ratio TP/(TP+FP) becomes low.
Recall: is very low. Since the data has a disproportionately high number of negative cases, the classifier may label a large number of positives as negative. The ratio TP/(TP+FN) becomes low.
F1-score: is low. The low values of Precision and Recall make the F1-score a good indicator of performance here.
Accuracy: is very high. The proportion of TN is very large, since the data is imbalanced (a high number of negative-class samples), so the numerator, i.e. TN + TP, becomes high.
AUC score: is high, even when more than 50% of actual positives are predicted as FN (i.e. the TPR is low).
FPR: is low. It gets skewed by the large number of TN (imbalance), even when the classifier makes a lot of FP.
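A minimal sketch of this second scenario, assuming a hypothetical negative-majority dataset (1000 samples) and made-up confusion-matrix counts for a weak classifier:

```python
# Hypothetical counts: 100 actual positives, 900 actual negatives;
# the classifier misses most positives and raises many false alarms.
tp, fn, fp, tn = 40, 60, 80, 820

precision = tp / (tp + fp)                 # ~0.333 -- dragged down by 80 FP
recall    = tp / (tp + fn)                 # 0.40   -- most positives missed
f1 = 2 * precision * recall / (precision + recall)  # ~0.364 -- honest signal
accuracy  = (tp + tn) / (tp + fn + fp + tn)  # 0.86 -- inflated by the TN majority
fpr = fp / (fp + tn)                       # ~0.089 -- looks low despite 80 FP
print(precision, recall, f1, accuracy, fpr)
```

Here accuracy and FPR both look acceptable purely because TN dominates, while the low Precision, Recall, and F1 reflect the classifier's real weakness.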
Summary
When negative class(0) > positive class(1) and the focus is on the negative class, use the AUC score.
When negative class(0) < positive class(1) and the focus is on the positive class, use Precision, Recall, and F1-score.