Statistical Tests, Confusion Matrix, and Model Evaluation
Hypothesis testing
If a statistical test rejects the null hypothesis (its conclusion does not support the null hypothesis) while the null hypothesis is in fact true, the test is said to have made a Type I error.

Conversely, if the test supports the null hypothesis while the alternative hypothesis is in fact true, the test is said to have made a Type II error.

Set up the test so that the error with the more serious consequences is the Type I error.

- First define the significance level α and the null hypothesis H0, i.e., the situation expected under ordinary reasoning.
- Compute the P value; if P < α, reject the null hypothesis and accept the alternative hypothesis H1 (a code sketch of this decision rule follows below).
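A minimal sketch of this decision rule, using SciPy's one-sample t test; the sample data and α = 0.05 are made-up values chosen only for illustration:

```python
from scipy import stats

alpha = 0.05  # significance level, fixed before looking at the data
sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.4, 4.7]  # hypothetical measurements

# H0: the population mean equals 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0, accept H1")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0")
```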
Independence test
Rank-sum test

Tests whether two samples follow the same distribution.

Merge the two samples and sort the combined data to obtain the rank of each observation. When several observations have equal values but would receive different ranks, each of them is assigned the arithmetic mean of those ranks.

- Choose a significance level α.
- Compute the rank sum T of the sample with the smaller size.
- Look up the critical values in a rank-sum test table and compare T against them (see the code sketch below).
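As a sketch, the same comparison can be run with SciPy's Wilcoxon rank-sum test, which uses a large-sample normal approximation instead of a table lookup; the two sample arrays below are made up for illustration:

```python
from scipy import stats

# two hypothetical samples; H0: they come from the same distribution
sample_a = [1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30]
sample_b = [0.88, 0.65, 0.60, 2.05, 1.06, 1.29, 1.07, 3.14, 1.29]

statistic, p_value = stats.ranksums(sample_a, sample_b)
print(statistic, p_value)  # reject H0 at level alpha if p_value < alpha
```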
Chi-squared test (χ² test)

Independence test: test whether two categorical variables are independent, e.g., age group versus rating in the contingency table below.
| class \ item | good | normal | bad | total |
|---|---|---|---|---|
| child | | | | |
| teens | | | | |
| adult | | | | |
| total | | | | |
Degrees of freedom: df = (number of rows - 1) × (number of columns - 1), counting only the category rows and columns, not the totals.
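A minimal sketch of this independence test with SciPy's `chi2_contingency`; the observed counts below are hypothetical values filled into the table above:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical observed counts: rows = child/teens/adult, columns = good/normal/bad
observed = np.array([[30, 15, 5],
                     [25, 20, 10],
                     [20, 25, 15]])

chi2, p, dof, expected = chi2_contingency(observed)
print(chi2, p, dof)  # dof = (3 - 1) * (3 - 1) = 4
```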
T test
F test
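As a sketch of how these two tests are typically run in Python, SciPy provides a two-sample t test (`ttest_ind`) and a one-way ANOVA F test (`f_oneway`); the group samples below are made up for illustration:

```python
from scipy import stats

# hypothetical samples from three groups
group1 = [20.1, 21.3, 19.8, 22.0, 20.7]
group2 = [22.4, 23.1, 21.9, 24.0, 22.8]
group3 = [19.5, 20.2, 21.0, 19.9, 20.4]

# t test: do group1 and group2 have the same mean?
t_stat, t_p = stats.ttest_ind(group1, group2)

# F test (one-way ANOVA): do all three groups share the same mean?
f_stat, f_p = stats.f_oneway(group1, group2, group3)

print(t_stat, t_p)
print(f_stat, f_p)
```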
confusion matrix
Example confusion matrix of a three-class classifier:
| actual \ predicted | cat | dog | rabbit |
|---|---|---|---|
| cat | 5 | 3 | 0 |
| dog | 2 | 3 | 1 |
| rabbit | 0 | 2 | 11 |
table of confusion
| actual \ predicted | Positive | Negative |
|---|---|---|
| Positive | TP | FN |
| Negative | FP | TN |

Type I error: FP, a negative sample mistakenly judged as positive.
Type II error: FN, a positive sample mistakenly judged as negative.
Error = FP + FN, the number of misclassified samples.

- relevant: the items that actually belong to the positive class; Relevant = TP + FN
- retrieved: the items predicted as positive (i.e., the retrieved / selected items); Retrieved = TP + FP
- Accuracy: the fraction of items that are correctly classified; Accuracy = (TP + TN) / (TP + TN + FP + FN)

For the cat class:
| actual \ predicted | Positive (cat) | Negative (not cat) |
|---|---|---|
| Positive (cat) | 5 | 3 |
| Negative (not cat) | 2 | 17 |

Accuracy = (5 + 17) / (5 + 3 + 2 + 17) = 22 / 27 ≈ 0.81
code
```python
from sklearn.metrics import confusion_matrix

# rows are the true labels, columns are the predicted labels
cm = confusion_matrix(y_true, y_pred)
```
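For instance, label sequences that reproduce the cat/dog/rabbit table above could look like this (a hypothetical reconstruction; only the per-class counts are taken from the table):

```python
from sklearn.metrics import confusion_matrix

# 8 actual cats, 6 actual dogs, 13 actual rabbits, ordered to match the table above
y_true = ['cat'] * 8 + ['dog'] * 6 + ['rabbit'] * 13
y_pred = (['cat'] * 5 + ['dog'] * 3 +                    # cats: 5 correct, 3 predicted as dog
          ['cat'] * 2 + ['dog'] * 3 + ['rabbit'] * 1 +   # dogs
          ['dog'] * 2 + ['rabbit'] * 11)                  # rabbits

cm = confusion_matrix(y_true, y_pred, labels=['cat', 'dog', 'rabbit'])
print(cm)  # reproduces the 3x3 table above
```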
confusion matrix plot
```python
import itertools
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix


def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.

    Usage
    ---
    cnf_matrix = confusion_matrix(y_test, y_pred)
    np.set_printoptions(precision=2)
    plt.figure()
    plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
                          title='Normalized confusion matrix')
    """
    # normalize first, so the heatmap and the cell labels use the same values
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    # write each count (or rate) into its cell, in a colour that contrasts with the background
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
```
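As a usage sketch, the helper can be applied to the three-class matrix from the table above (class names and counts are taken from that example):

```python
import numpy as np
import matplotlib.pyplot as plt

cnf_matrix = np.array([[5, 3, 0],
                       [2, 3, 1],
                       [0, 2, 11]])

plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['cat', 'dog', 'rabbit'],
                      normalize=True, title='Normalized confusion matrix')
plt.show()
```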
P-R & F1
precision

Of the samples judged positive by the classifier f, how many are actually positive? (How many selected items are relevant?)

Precision = TP / (TP + FP) = |{relevant} ∩ {retrieved}| / |{retrieved}|
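A brief sketch of computing these metrics with scikit-learn; the label vectors are made-up binary examples:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# hypothetical binary ground truth and predictions (1 = positive)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 3 / 4 = 0.75
print(recall_score(y_true, y_pred))     # TP / (TP + FN) = 3 / 4 = 0.75
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall = 0.75
```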