
[python][sklearn] Viewing machine-learning classification model evaluation metrics at a glance with scikit-learn classification_report

by Chandler.j 2021. 1. 15.


With sklearn's classification_report module, you can view all the main classification evaluation metrics at once.


#1. classification_report

from sklearn.metrics import classification_report

# `model` is an already-trained classifier; `X_test_features` / `y_test` are the prepared test set
y_pred = model.predict(X_test_features)

# target_names maps the label order (0, 1) to human-readable class names
print(classification_report(y_test, y_pred, target_names=['normal', 'abnormal']))

fig1. output of #1
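The snippet above assumes a model has already been trained. A minimal end-to-end sketch, assuming scikit-learn is installed — the synthetic dataset and LogisticRegression model here are illustrative stand-ins for the post's `model`, and `output_dict=True` is used so the report can be inspected programmatically:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative stand-in data: a small binary classification problem
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the post's pre-trained `model`
model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

# output_dict=True returns the same report as a nested dict instead of a string
report = classification_report(y_test, y_pred,
                               target_names=['normal', 'abnormal'],
                               output_dict=True)
print(sorted(report))  # per-class entries plus accuracy / macro avg / weighted avg
```

Printing without `output_dict=True` gives the familiar text table shown in fig1.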

Reference: scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html

 


#2. Interpreting the metrics

Precision: the accuracy of the positive predictions.

Precision = TP/(TP + FP)

 

Recall: the fraction of actual positives that were correctly identified.

Recall = TP/(TP+FN)

 

F1 score: the harmonic mean of precision and recall.

F1 Score = 2*(Recall * Precision) / (Recall + Precision)

 

Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e. the fraction of all predictions that were correct.
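The four formulas above can be checked by hand; a small pure-Python sketch with made-up confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts for the positive class
TP, FP, FN, TN = 40, 10, 5, 45

precision = TP / (TP + FP)                    # 40/50 = 0.8
recall = TP / (TP + FN)                       # 40/45 ≈ 0.889
f1 = 2 * precision * recall / (precision + recall)
accuracy = (TP + TN) / (TP + FP + FN + TN)    # 85/100 = 0.85

print(precision, round(recall, 3), round(f1, 3), accuracy)
```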

 

macro avg = (metric_normal + metric_abnormal) / 2 — the unweighted mean of the per-class scores (precision, recall, or F1), treating both classes equally

weighted avg = (support_normal × metric_normal + support_abnormal × metric_abnormal) / (support_normal + support_abnormal) — the per-class scores weighted by each class's support (its number of true samples)
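A sketch of the two averages with hypothetical per-class recall values and supports (the 0.90 / 0.60 scores and 80 / 20 supports are made up for illustration):

```python
# Hypothetical per-class recall scores and supports (true-sample counts)
recall_normal, support_normal = 0.90, 80
recall_abnormal, support_abnormal = 0.60, 20

# macro avg: unweighted mean of the per-class scores
macro_avg = (recall_normal + recall_abnormal) / 2  # 0.75

# weighted avg: each class's score weighted by its support
weighted_avg = (recall_normal * support_normal +
                recall_abnormal * support_abnormal) / (support_normal + support_abnormal)
# (0.90*80 + 0.60*20) / 100 = 0.84

print(macro_avg, weighted_avg)
```

On an imbalanced set like this, the weighted avg (0.84) is pulled toward the majority class, while the macro avg (0.75) exposes the weak minority-class score.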

 

Reference: medium.com/@kohlishivam5522/understanding-a-classification-report-for-your-machine-learning-model-88815e2ce397

 


Reference: datascience.stackexchange.com/questions/65839/macro-average-and-weighted-average-meaning-in-classification-report

 


 

