Plotting an ROC curve with scikit-learn: a worked example
A complete data-mining project ends with model evaluation, and for binary classification the ROC curve and its AUC are the two most commonly used metrics: the ROC curve plots the true-positive rate against the false-positive rate as the decision threshold varies, and AUC is the area under that curve. sklearn provides ready-made functions for both, so we can build the evaluation module directly on top of them.
The concrete implementation can follow the blogger's code below, which computes these metrics for an SVM classifier. A few details need attention; in particular, when calling roc_curve you must name the target (positive) label, or the call will raise an error.
Specifically, this is the pos_label parameter, something I picked up while interning at unionbigdata.
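For a concrete sense of what pos_label does, here is a minimal sketch (the toy labels and scores are made up for illustration): with labels drawn from {1, 2} rather than {0, 1}, roc_curve cannot guess which class is positive, so you must say so.

import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy data: the labels are 1/2 rather than 0/1, so the positive class is ambiguous
y_true = np.array([1, 1, 2, 2])
scores = np.array([0.1, 0.4, 0.35, 0.8])

# Without pos_label=2, roc_curve raises an error for labels outside {0,1} or {-1,1}
fpr, tpr, thresholds = roc_curve(y_true, scores, pos_label=2)
print(auc(fpr, tpr))  # 0.75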
The key part, which you will need to adapt to your own data, is the following:
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
y_target = np.r_[train_y, test_y]
cv = StratifiedKFold(n_splits=6)  # split y_target via cv.split(...)

# Compute the ROC curve and the AUC
fpr, tpr, thresholds = roc_curve(test_y, predict, pos_label=2)
# pos_label names the positive class (the unionbigdata lesson: without it,
# labels outside {0,1}/{-1,1} raise an error)
mean_tpr += np.interp(mean_fpr, fpr, tpr)  # interpolate tpr at the mean_fpr grid points
mean_tpr[0] = 0.0  # the curve starts at 0
roc_auc = auc(fpr, tpr)  # auc() computes the area; plotting needs only fpr and tpr
plt.plot(fpr, tpr, lw=1, label='ROC %s (area = %0.3f)' % (classifier, roc_auc))
Next, the blogger's full reference code:
# -*- coding: utf-8 -*-
"""
Created on Sun Apr 19 08:57:13 2015
@author: shifeng
"""
print(__doc__)

import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import StratifiedKFold  # sklearn.cross_validation was removed in 0.20

###############################################################################
# Data IO and generation: import the iris data and prepare it
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]  # drop class 2: ROC analysis needs a binary problem
n_samples, n_features = X.shape

# Add noisy features
random_state = np.random.RandomState(0)
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]

###############################################################################
# Classification and ROC analysis

# Run the classifier with 6-fold cross-validation and plot the ROC curves
cv = StratifiedKFold(n_splits=6)
classifier = svm.SVC(kernel='linear', probability=True, random_state=random_state)
# Note: probability=True is required here, otherwise predict_proba() fails;
# an rbf kernel also tends to perform a bit better.

mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)

for i, (train, test) in enumerate(cv.split(X, y)):
    # Fit the linear SVM on the training fold, then score the test fold
    probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
    # print(set(y[train]))                 # {0, 1}: the label has exactly two classes
    # print(len(X[train]), len(X[test]))   # roughly 84 training and 16 test samples
    # predict_proba() returns the per-class probabilities for each test sample;
    # the class with the higher probability is the predicted label.
    # Compute the ROC curve (fpr, tpr, thresholds) and the area under it
    fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
    mean_tpr += np.interp(mean_fpr, fpr, tpr)  # interpolate this fold's tpr onto the fixed mean_fpr grid
    mean_tpr[0] = 0.0  # the curve starts at (0, 0)
    roc_auc = auc(fpr, tpr)  # plotting only needs fpr and tpr; roc_auc just records the area
    plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))

# Plot the chance diagonal
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Luck')

mean_tpr /= cv.get_n_splits(X, y)  # average the interpolated tpr over the folds at each of the 100 grid points
mean_tpr[-1] = 1.0  # force the last point to (1, 1)
mean_auc = auc(mean_fpr, mean_tpr)  # AUC of the mean curve

# Plot the mean ROC curve
plt.plot(mean_fpr, mean_tpr, 'k--',
         label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)

plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
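As an aside, recent scikit-learn versions (1.0 and later, to my knowledge) ship a convenience API that shortens the per-fold plotting above considerably. The sketch below uses RocCurveDisplay as an alternative to the manual roc_curve/plt.plot loop; it is not the blogger's original approach:

import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.metrics import RocCurveDisplay
from sklearn.model_selection import StratifiedKFold

iris = datasets.load_iris()
X, y = iris.data[iris.target != 2], iris.target[iris.target != 2]

clf = svm.SVC(kernel='linear', probability=True, random_state=0)
cv = StratifiedKFold(n_splits=6)
fig, ax = plt.subplots()
for i, (train, test) in enumerate(cv.split(X, y)):
    clf.fit(X[train], y[train])
    # Each call draws one fold's ROC curve (with its AUC) onto the shared axes
    RocCurveDisplay.from_estimator(clf, X[test], y[test], name='fold %d' % i, ax=ax)
ax.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))  # chance diagonal
plt.show()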
Supplementary knowledge: batch one-hot encoding of string columns, feature-vector assembly, and a complete model-training demo
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{StringIndexer, OneHotEncoder}
import org.apache.spark.ml.feature.VectorAssembler
import ml.dmlc.xgboost4j.scala.spark.{XGBoostEstimator, XGBoostClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{ParamGridBuilder, CrossValidator}
import org.apache.spark.ml.PipelineModel

val data = (spark.read.format("csv")
  .option("sep", ",")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("/Affairs.csv"))

data.createOrReplaceTempView("res1")

// Binarize the label: any positive affairs count becomes 1
val affairs = "case when affairs>0 then 1 else 0 end as affairs,"
val df = (spark.sql("select " + affairs +
  "gender,age,yearsmarried,children,religiousness,education,occupation,rating" +
  " from res1 "))

// Batch-index and one-hot-encode every string-typed column
val categoricals = df.dtypes.filter(_._2 == "StringType").map(_._1)
val indexers = categoricals.map(
  c => new StringIndexer().setInputCol(c).setOutputCol(s"${c}_idx")
)
val encoders = categoricals.map(
  c => new OneHotEncoder().setInputCol(s"${c}_idx").setOutputCol(s"${c}_enc").setDropLast(false)
)

// Assemble the encoded and numeric columns (minus the label) into one feature vector
val colArray_enc = categoricals.map(x => x + "_enc")
val colArray_numeric = df.dtypes.filter(_._2 != "StringType").map(_._1)
val final_colArray = (colArray_numeric ++ colArray_enc).filter(!_.contains("affairs"))
val vectorAssembler = new VectorAssembler().setInputCols(final_colArray).setOutputCol("features")

/*
val pipeline = new Pipeline().setStages(indexers ++ encoders ++ Array(vectorAssembler))
pipeline.fit(df).transform(df)
*/

// Create an XGBoost classifier
val xgb = new XGBoostEstimator(Map("num_class" -> 2, "num_rounds" -> 5,
  "objective" -> "binary:logistic", "booster" -> "gbtree"))
  .setLabelCol("affairs").setFeaturesCol("features")

// XGBoost parameter grid
val xgbParamGrid = (new ParamGridBuilder()
  .addGrid(xgb.round, Array(10))
  .addGrid(xgb.maxDepth, Array(10, 20))
  .addGrid(xgb.minChildWeight, Array(0.1))
  .addGrid(xgb.gamma, Array(0.1))
  .addGrid(xgb.subSample, Array(0.8))
  .addGrid(xgb.colSampleByTree, Array(0.90))
  .addGrid(xgb.alpha, Array(0.0))
  .addGrid(xgb.lambda, Array(0.6))
  .addGrid(xgb.scalePosWeight, Array(0.1))
  .addGrid(xgb.eta, Array(0.4))
  .addGrid(xgb.boosterType, Array("gbtree"))
  .addGrid(xgb.objective, Array("binary:logistic"))
  .build())

// Create the XGBoost pipeline
val pipeline = new Pipeline().setStages(indexers ++ encoders ++ Array(vectorAssembler, xgb))

// Set up the binary classification evaluator
val evaluator = (new BinaryClassificationEvaluator()
  .setLabelCol("affairs")
  .setRawPredictionCol("prediction")
  .setMetricName("areaUnderROC"))

// Create the cross-validation pipeline, using XGBoost as the estimator, the
// binary classification evaluator, and xgbParamGrid for hyperparameters
val cv = (new CrossValidator()
  .setEstimator(pipeline)
  .setEvaluator(evaluator)
  .setEstimatorParamMaps(xgbParamGrid)
  .setNumFolds(3)
  .setSeed(0))

// Create the model by fitting the training data
val xgbModel = cv.fit(df)

// Score the data with the fitted model
val results = xgbModel.transform(df)

// Print the parameters used by XGBoost; note the best model is a PipelineModel,
// and the XGBoost stage sits after the indexers, encoders and assembler
(xgbModel.bestModel.asInstanceOf[PipelineModel]
  .stages(5).asInstanceOf[XGBoostClassificationModel]
  .extractParamMap().toSeq.foreach(println))

results.select("affairs", "prediction").show

println("---Confusion Matrix------")
results.stat.crosstab("affairs", "prediction").show()

// Overall quality of the model, measured by AUC
val auc = evaluator.evaluate(results)
println("----AUC--------")
println("auc=" + auc)
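For readers who want the same pattern on the Python side, the sketch below mirrors the "one-hot every string column, pass numerics through, assemble, then train" flow with scikit-learn's ColumnTransformer. The DataFrame df, the label handling, and GradientBoostingClassifier as a stand-in for XGBoost are assumptions of mine, not part of the original demo:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier

# df is assumed to be a pandas DataFrame shaped like the Affairs data above
categoricals = df.select_dtypes(include='object').columns.tolist()
numerics = [c for c in df.columns if c not in categoricals + ['affairs']]

# One-hot encode every categorical column in one shot; numeric columns pass
# through unchanged (this plays the role of VectorAssembler)
pre = ColumnTransformer(
    [('onehot', OneHotEncoder(handle_unknown='ignore'), categoricals)],
    remainder='passthrough')

model = Pipeline([('pre', pre),
                  ('clf', GradientBoostingClassifier())])
# Binarize the label exactly as the SQL case-when above does
model.fit(df[categoricals + numerics], (df['affairs'] > 0).astype(int))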
That is everything in this example of plotting ROC curves with scikit-learn. I hope it gives you a useful reference, and thank you for your continued support.