
Boosting principles and the sklearn source: a brief analysis of GBDT from the sklearn source code

Tags: boosting principles and the sklearn source code


GBDT stands for Gradient Boosting Decision Tree. In my opinion it can claim the top spot among the classic machine learning algorithms, and its descendants XGBoost, LightGBM and CatBoost are, I would say, the strongest options for modeling tabular data. There are already many excellent explanations of the theory behind GBDT online, so I will not repeat them here; Liu Jianping's blog post https://www.cnblogs.com/pinard/p/6140514.html is a good reference. This article instead walks through how GBDT builds its model, based on the sklearn source code.

GBDT can handle both regression and classification. Whether we use sklearn's GradientBoostingClassifier or GradientBoostingRegressor, both inherit from the same parent class BaseGradientBoosting, which is the class analyzed in detail below. (For compatibility, sklearn's code contains a lot of input-validation work; I have trimmed that part to drop the baggage and get to the point.) The parameters of BaseGradientBoosting's __init__ are mainly there for tuning; the defaults alone are usually enough to build a fairly decent baseline. They include the number of boosting iterations, the learning rate, the loss, and parameters that control how the trees grow and split.
def __init__(self, loss, learning_rate, n_estimators, criterion,
             min_samples_split, min_samples_leaf, min_weight_fraction_leaf,
             max_depth, min_impurity_decrease, min_impurity_split,
             init, subsample, max_features, ccp_alpha,
             random_state, alpha=0.9, verbose=0, max_leaf_nodes=None,
             warm_start=False, presort='deprecated',
             validation_fraction=0.1, n_iter_no_change=None,
             tol=1e-4):
    self.n_estimators = n_estimators
    self.learning_rate = learning_rate
    self.loss = loss
    self.criterion = criterion
    self.min_samples_split = min_samples_split
    self.min_samples_leaf = min_samples_leaf
    self.min_weight_fraction_leaf = min_weight_fraction_leaf
    self.subsample = subsample
    self.max_features = max_features
    self.max_depth = max_depth
    self.min_impurity_decrease = min_impurity_decrease
    self.min_impurity_split = min_impurity_split
    self.ccp_alpha = ccp_alpha
    self.init = init
    self.random_state = random_state
    self.alpha = alpha
    self.verbose = verbose
    self.max_leaf_nodes = max_leaf_nodes
    self.warm_start = warm_start
    self.presort = presort
    self.validation_fraction = validation_fraction
    self.n_iter_no_change = n_iter_no_change
    self.tol = tol
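To make the point about the defaults concrete, here is a minimal usage sketch of my own (not from the original post): fitting GradientBoostingRegressor with its default parameters already gives a usable baseline.

# My own minimal baseline sketch: the default parameters of
# GradientBoostingRegressor already work reasonably well.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Defaults include n_estimators=100, learning_rate=0.1, max_depth=3.
gbr = GradientBoostingRegressor(random_state=0)
gbr.fit(X_train, y_train)
print(gbr.score(X_test, y_test))   # R^2 on the held-out split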
Now let us look directly at the fit method. The first thing it checks is warm_start, which decides whether the previously trained learners should be cleared, i.e. whether training starts from scratch. Next comes sample_weight, the per-sample weights; if none are given they are initialized to 1. The model is then initialized: the raw predictions can simply start at zero (init='zero'), or we can supply our own base estimator (init) that predicts an initial y from X.
def fit(self, X, y, sample_weight=None, monitor=None):
    if not self.warm_start:
        self._clear_state()
    n_samples, self.n_features_ = X.shape
    sample_weight_is_none = sample_weight is None
    if sample_weight_is_none:
        sample_weight = np.ones(n_samples, dtype=np.float32)
    else:
        sample_weight_is_none = False
    X, X_val, y, y_val, sample_weight, sample_weight_val = train_test_split(
        X, y, sample_weight,
        random_state=self.random_state,
        test_size=self.validation_fraction)
    if self.init_ == 'zero':
        raw_predictions = np.zeros(shape=(X.shape[0], self.loss_.K),
                                   dtype=np.float64)
    else:
        if sample_weight_is_none:
            self.init_.fit(X, y)
        else:
            msg = ("The initial estimator {} does not support sample "
                   "weights.".format(self.init_.__class__.__name__))
            try:
                self.init_.fit(X, y, sample_weight=sample_weight)
            except TypeError:
                raise ValueError(msg)
            except ValueError as e:
                if "pass parameters to specific steps of " \
                   "your pipeline using the " \
                   "stepname__parameter" in str(e):
                    raise ValueError(msg) from e
                else:
                    raise
        raw_predictions = \
            self.loss_.get_init_raw_predictions(X, self.init_)
    begin_at_stage = 0
    self._rng = self.random_state
    X_idx_sorted = None
    n_stages = self._fit_stages(
        X, y, raw_predictions, sample_weight, self._rng, X_val, y_val,
        sample_weight_val, begin_at_stage, monitor, X_idx_sorted)
    self.n_estimators_ = n_stages
    return self
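To make the init behavior concrete, here is a small sketch of my own (assuming a regression setup): the init parameter accepts either the string 'zero' or any fitted-on-demand estimator with fit/predict.

# My own sketch of the two init options mentioned above.
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import GradientBoostingRegressor

# Option 1: start boosting from raw predictions of all zeros.
gbr_zero = GradientBoostingRegressor(init='zero', random_state=0)

# Option 2: start from a fitted base estimator, e.g. one that predicts the mean of y.
gbr_mean = GradientBoostingRegressor(init=DummyRegressor(strategy='mean'),
                                     random_state=0)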
Next comes the core routine _fit_stages, which covers the whole process of building the GBDT model. _fit_stages simply iterates over _fit_stage, so we can go straight to the source of _fit_stage. The parameter i is the index of the current boosting iteration. Note the multiclass case: for a K-class problem, the estimator fitted at each iteration consists of K regression trees. The heart of GBDT is the update based on the negative gradient, which corresponds to the loss part of the code.
def _fit_stage(self, i, X, y, raw_predictions, sample_weight, random_state,
               X_idx_sorted):
    loss = self.loss_
    original_y = y
    raw_predictions_copy = raw_predictions.copy()
    for k in range(loss.K):
        if loss.is_multi_class:
            y = np.array(original_y == k, dtype=np.float64)
        residual = loss.negative_gradient(y, raw_predictions_copy, k=k,
                                          sample_weight=sample_weight)
        tree = DecisionTreeRegressor(
            criterion=self.criterion,
            splitter='best',
            max_depth=self.max_depth,
            min_samples_split=self.min_samples_split,
            min_samples_leaf=self.min_samples_leaf,
            min_weight_fraction_leaf=self.min_weight_fraction_leaf,
            min_impurity_decrease=self.min_impurity_decrease,
            min_impurity_split=self.min_impurity_split,
            max_features=self.max_features,
            max_leaf_nodes=self.max_leaf_nodes,
            random_state=random_state,
            ccp_alpha=self.ccp_alpha)
        tree.fit(X, residual, sample_weight=sample_weight,
                 check_input=False, X_idx_sorted=X_idx_sorted)
        loss.update_terminal_regions(
            tree.tree_, X, y, residual, raw_predictions, sample_weight,
            learning_rate=self.learning_rate, k=k)
        self.estimators_[i, k] = tree
    return raw_predictions
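To strip the same idea down to its bare bones, here is my own simplified sketch of the boosting loop for a least-squares regression (an illustration only, not the actual sklearn internals): at every stage we fit a regression tree to the current residuals and add its scaled prediction to the running raw predictions.

# Simplified, illustrative version of the boosting loop (least-squares only).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def toy_gbdt_fit(X, y, n_estimators=100, learning_rate=0.1, max_depth=3):
    raw_predictions = np.zeros_like(y, dtype=np.float64)  # init='zero'
    trees = []
    for _ in range(n_estimators):
        # Negative gradient of the squared error = ordinary residual.
        residual = y - raw_predictions
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)
        # Shrink the new tree's contribution by the learning rate.
        raw_predictions += learning_rate * tree.predict(X)
        trees.append(tree)
    return trees, raw_predictions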
Each loss has its own way of computing the negative gradient (raw_predictions is the model's current prediction for X). For example (a small numeric sketch of these residual computations follows the list):

· Regression, MSE (least squares)
residual = y - raw_predictions.ravel()

· Regression, MAE (least absolute deviation)

residual = 2 * (y - raw_predictions.ravel() > 0) - 1

· Regression, Huber loss

raw_predictions = raw_predictions.ravel()
diff = y - raw_predictions
gamma = np.percentile(np.abs(diff), self.alpha * 100)
gamma_mask = np.abs(diff) <= gamma
residual = np.zeros((y.shape[0],), dtype=np.float64)
residual[gamma_mask] = diff[gamma_mask]
residual[~gamma_mask] = gamma * np.sign(diff[~gamma_mask])

· Regression, Quantile loss

raw_predictions = raw_predictions.ravel()
mask = y > raw_predictions
residual = (alpha * mask) - ((1 - alpha) * ~mask)

· Binary classification, logistic (sigmoid) loss

residual = y - scipy.special.expit(raw_predictions.ravel())

· Binary classification, exponential loss (as in AdaBoost)

y_ = -(2. * y - 1.)
residual = y_ * np.exp(y_ * raw_predictions.ravel())

· Multiclass classification, softmax

residual = y - np.nan_to_num(np.exp(raw_predictions[:, k] -
                                    logsumexp(raw_predictions, axis=1)))
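As a quick check of the shapes involved, here is a small sketch of my own that evaluates two of the residual formulas above on toy data; in every case the residual is one value per sample, i.e. an m-dimensional vector (per class k in the multiclass case).

# My own toy check of the residual formulas above (not sklearn code).
import numpy as np
from scipy.special import expit, logsumexp

m = 5                                   # number of samples
y_bin = np.array([0., 1., 1., 0., 1.])  # binary labels
f_bin = np.random.randn(m, 1)           # raw_predictions, shape (m, 1)

# Binary logistic residual: label minus predicted probability.
residual_bin = y_bin - expit(f_bin.ravel())
print(residual_bin.shape)               # (5,)  -> m-dimensional

# Multiclass softmax residual for class k, with K = 3 classes.
K, k = 3, 0
f_multi = np.random.randn(m, K)         # raw_predictions, shape (m, K)
y_k = np.array([1., 0., 0., 1., 0.])    # indicator of class k
residual_k = y_k - np.nan_to_num(np.exp(f_multi[:, k] -
                                        logsumexp(f_multi, axis=1)))
print(residual_k.shape)                 # (5,)  -> m-dimensional per class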
A small aside: Zhihu user 石塔西 once raised the following question. For an m*n dataset modeled with GBDT, what is the dimensionality of the model's gradient: m, n, or m*n? Does it depend on the depth of the trees, or on the number of leaf nodes, or on both? (Link: https://www.zhihu.com/question/62482926/answer/526988250.) From the source code above the answer is easy to read off: the gradient is m-dimensional.

Once the residuals have been computed for the given task type, we can build the CART regression tree: the input is still X, but the target label is now the residual computed at this iteration. Finally, let us look at the code that accumulates raw_predictions, taking the binary logistic (sigmoid) loss as the example:

1. For each sample, find the leaf node it falls into.
2. Use the leaf-node index to collect the residual values in that leaf.
3. Update the leaf value with sum(y - prob) / sum(prob * (1 - prob)), where y - prob is exactly the residual.
4. Multiply by the learning_rate and add the result to the previous prediction to obtain the updated raw_predictions.

Regression tasks do not use step 3 above; that leaf-value formula is specific to the logistic loss.

def update_terminal_regions(self, tree, X, y, residual, raw_predictions,
                            sample_weight, learning_rate=0.1, k=0):
    terminal_regions = tree.apply(X)
    for leaf in np.where(tree.children_left == TREE_LEAF)[0]:
        self._update_terminal_region(tree, terminal_regions,
                                     leaf, X, y, residual,
                                     raw_predictions[:, k], sample_weight)
    raw_predictions[:, k] += \
        learning_rate * tree.value[:, 0, 0].take(terminal_regions, axis=0)

def _update_terminal_region(self, tree, terminal_regions, leaf, X, y,
                            residual, raw_predictions, sample_weight):
    terminal_region = np.where(terminal_regions == leaf)[0]
    residual = residual.take(terminal_region, axis=0)
    y = y.take(terminal_region, axis=0)
    sample_weight = sample_weight.take(terminal_region, axis=0)
    numerator = np.sum(sample_weight * residual)
    denominator = np.sum(sample_weight *
                         (y - residual) * (1 - y + residual))
    if abs(denominator) < 1e-150:
        tree.value[leaf, 0, 0] = 0.0
    else:
        tree.value[leaf, 0, 0] = numerator / denominator
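The key trick in the last line of update_terminal_regions is tree.value[:, 0, 0].take(terminal_regions): apply() maps every sample to its leaf index, and take() turns the per-leaf values into per-sample updates. Here is a small sketch of my own showing the same lookup on a plain DecisionTreeRegressor.

# My own illustration of the leaf-index -> leaf-value lookup used above.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

leaf_of_sample = tree.apply(X)           # shape (n_samples,), leaf index per sample
leaf_values = tree.tree_.value[:, 0, 0]  # shape (node_count,), value stored at each node
per_sample_update = leaf_values.take(leaf_of_sample, axis=0)

# For a plain regression tree this is exactly what predict() returns.
assert np.allclose(per_sample_update, tree.predict(X))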
_fit_stages keeps track of the accumulated loss, and by setting n_iter_no_change we can decide whether training should stop before all iterations are used (i.e. early stopping). What we end up with are the N trained estimators, and predictions are later derived from raw_predictions in the same way. The feature-importance computation in GBDT is shown below: it simply averages the feature importances over the decision trees from all boosting rounds.
def feature_importances_(self):
    relevant_trees = [tree
                      for stage in self.estimators_ for tree in stage
                      if tree.tree_.node_count > 1]
    if not relevant_trees:
        return np.zeros(shape=self.n_features_, dtype=np.float64)
    relevant_feature_importances = [
        tree.tree_.compute_feature_importances(normalize=False)
        for tree in relevant_trees
    ]
    avg_feature_importances = np.mean(relevant_feature_importances,
                                      axis=0, dtype=np.float64)
    return avg_feature_importances / np.sum(avg_feature_importances)
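Putting the last two points together, here is a small usage sketch of my own: fit a multiclass GradientBoostingClassifier with early stopping enabled, check that each boosting stage holds K regression trees, and read off the averaged feature importances.

# My own usage sketch combining early stopping, the K-trees-per-stage layout,
# and the feature_importances_ attribute.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)

clf = GradientBoostingClassifier(n_estimators=500,
                                 n_iter_no_change=10,       # early stopping
                                 validation_fraction=0.1,
                                 random_state=0)
clf.fit(X, y)

print(clf.n_estimators_)        # stages actually fitted (may be < 500)
print(clf.estimators_.shape)    # (n_estimators_, K) -> K = 3 regression trees per stage
print(clf.feature_importances_) # normalized average over all trees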
This article has walked through the whole GBDT modeling process from the sklearn source code. Once you are familiar with the theory behind GBDT, reading the implementation deepens the overall understanding. If you find any mistakes in this article, please do not hesitate to point them out. Thank you.