
Learning Curves with the sklearn Library


The idea:

# 1. First split all samples into K train/validation pairs, using cross-validation or random subsampling.
# 2. From each of the K training sets, take subsets of steadily increasing size (m sizes in total), and train the model on each of these K*m subsets.
# 3. Score each trained model on its training subset and on the corresponding validation set.
# 4. For each subset size, average the K training scores and the K validation scores, yielding m pairs of values.
# 5. Plot the learning curve: training-set size on the x axis, model score (e.g. prediction accuracy) on the y axis. A hand-rolled sketch of these steps is shown below.
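A minimal hand-rolled sketch of steps 1–5, for concreteness (the GaussianNB estimator, K=5 splits, and five subset fractions are illustrative choices here, not part of the procedure itself); learning_curve, introduced below, packages exactly this loop:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import ShuffleSplit
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
fractions = np.linspace(0.1, 1.0, 5)                          # m = 5 subset sizes
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)  # step 1: K = 5 pairs

train_means, val_means = [], []
for frac in fractions:
    train_scores, val_scores = [], []
    for train_idx, val_idx in cv.split(X):
        n = int(frac * len(train_idx))            # step 2: growing training subset
        sub = train_idx[:n]                       # indices are already shuffled
        model = GaussianNB().fit(X[sub], y[sub])
        train_scores.append(model.score(X[sub], y[sub]))         # step 3: scores
        val_scores.append(model.score(X[val_idx], y[val_idx]))
    train_means.append(np.mean(train_scores))     # step 4: average over K splits
    val_means.append(np.mean(val_scores))
# step 5: plot train_means and val_means against the subset sizes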

Methods used:

learning_curve  # for one model, directly returns, over a range of training-set sizes: 1. the training-set sizes 2. the training scores 3. the test (validation) scores

ShuffleSplit  # builds the train/validation splits via cross-validation or random subsampling

plt.fill_between  # shades the region between two curves (used here for the ±1 std bands); see the snippet below
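For orientation, here is a quick look at what these three pieces return and do, again on the digits data with GaussianNB as an illustrative estimator (the shapes in the comments assume 5 size ticks and 10 splits):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve, ShuffleSplit
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
train_sizes, train_scores, test_scores = learning_curve(
    GaussianNB(), X, y, cv=cv, train_sizes=np.linspace(0.1, 1.0, 5))
print(train_sizes.shape)    # (5,)    the actual sample counts used
print(train_scores.shape)   # (5, 10) one column per CV split
print(test_scores.shape)    # (5, 10)

# fill_between shades the band between two curves, here mean ± 1 std
mean, std = test_scores.mean(axis=1), test_scores.std(axis=1)
plt.fill_between(train_sizes, mean - std, mean + std, alpha=0.1, color="g")
plt.plot(train_sizes, mean, "o-", color="g")
plt.show()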

Python code:

import numpy as np
from sklearn.model_selection import learning_curve, ShuffleSplit
from sklearn.datasets import load_digits
from sklearn.naive_bayes import GaussianNB
from sklearn import svm
import matplotlib.pyplot as plt


def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    # Shaded regions: ±1 standard deviation around each mean curve
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1, color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, "o-", color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, "o-", color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt


if __name__ == "__main__":
    digits = load_digits()
    X = digits.data
    y = digits.target

    # 100 random splits, 20% of the data held out each time
    cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
    estimator = GaussianNB()
    title = "Learning Curves (naive_bayes)"
    plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)

    # Pass a custom CV splitter instead of the default k-fold cross-validation
    cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
    estimator = svm.SVC(gamma=0.001)
    title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
    plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=4)
    plt.show()
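Note that train_scores and test_scores come back with one row per training-set size and one column per CV split, which is why the means and standard deviations are taken along axis=1; the fill_between bands therefore show ±1 standard deviation around each mean curve.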

Results:

[Figure: learning-curve plots for the naive Bayes and SVM models]

Notes:

1. For the naive Bayes model, a training set of about 1100 samples is already sufficient; with far fewer samples the model overfits. Overall, the model's accuracy converges toward roughly 0.85.

2. For the SVM model, about 800 training samples are enough; with too few, the training set is unrepresentative and the model has little predictive power on the validation set. Overall, this model performs better on this data than the first.

Reference

http://www.360doc.com/content/18/0424/22/50223086_748481549.shtml
