A short summary of parameter tuning for xgboost, random forest, and similar models
阿新 · Published 2019-02-15
1. On parameter tuning
Parameter tuning is the optimization process through which a model is adapted to different datasets; simply building a model and leaving its parameters untouched, without any tuning, is rarely reasonable.
2. Tuning xgboost
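xgboost can be tuned with the same grid-search pattern shown in section 3. The sketch below is a minimal, illustrative example, assuming the scikit-learn wrapper XGBRegressor and a small placeholder grid over max_depth, n_estimators, and learning_rate; the grid values are not the author's original settings, and train_df / y_train_df are the same training features and labels used later.
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV, KFold

cv = KFold(n_splits=5, shuffle=True, random_state=45)
# illustrative grid, placeholder values only
parameters = {
    'max_depth': [3, 5, 7],
    'n_estimators': [100, 300],
    'learning_rate': [0.05, 0.1],
}
xgb = XGBRegressor(objective='reg:squarederror')
grid_obj = GridSearchCV(xgb, parameters, cv=cv, scoring='r2')
grid_fit = grid_obj.fit(train_df.values, y_train_df)
best_xgb = grid_fit.best_estimator_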
3. Grid-search tuning
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import make_scorer, r2_score
from sklearn.model_selection import GridSearchCV, KFold

# 5-fold cross-validation with a fixed seed for reproducibility
cv = KFold(n_splits=5, shuffle=True, random_state=45)

# candidate regularization strengths for KernelRidge
parameters = {'alpha': [0.5, 0.6, 0.7]}

clf = KernelRidge()
r2 = make_scorer(r2_score)  # score each candidate by r2
grid_obj = GridSearchCV(clf, parameters, cv=cv, scoring=r2)

# train_df / y_train_df are the training features and labels
grid_fit = grid_obj.fit(train_df.values, y_train_df)
best_clf = grid_fit.best_estimator_
best_clf.fit(train_df.values, y_train_df)
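After the search finishes, the fitted GridSearchCV object exposes the winning alpha and its mean cross-validated score, which helps decide whether the grid should be widened; a short usage sketch follows (test_df is a hypothetical held-out feature set, not defined in the original).
print(grid_fit.best_params_)   # winning value of alpha from the grid
print(grid_fit.best_score_)    # mean cross-validated r2 for that alpha
# predictions = best_clf.predict(test_df.values)   # test_df is hypothetical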