
Classic Machine Learning Models (Part 2)

Decision Trees

The core problem in the ID3 algorithm is choosing which attribute to test at each node of the tree. We want to select the attribute that is most useful for classifying examples. What, then, is a good quantitative measure of the worth of an attribute? Here we define a statistical property called "information gain", which measures how well a given attribute separates the training examples. At each step of growing the tree, ID3 uses this information-gain criterion to choose among the candidate attributes.

Entropy

Entropy is the expected information content over all possible outcomes. The information content of a single outcome $x$ is $-\log_2 p(x)$, so

$$H(X) = E(-\log_2 p(x)) = -\sum_x p(x)\log_2 p(x)$$
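
For example, for a collection $S$ of 14 training examples with 9 positive and 5 negative labels (the play-tennis data referenced below):

$$Entropy(S) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} \approx 0.940$$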

Information Gain

$$Gain(S, A) \equiv Entropy(S) - \sum_{v \in Values(A)} \frac{|S_v|}{|S|}\,Entropy(S_v)$$
(Figure: the classic play-tennis example.)
The ID3 algorithm builds the decision tree by computing the information gain of each candidate attribute: the larger the gain, the better the attribute is for splitting. In essence, the tree partitions the feature space into regions, each of which should contain samples of a single class as far as possible.
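
As a worked example (the standard one from Mitchell's textbook, so treat the numbers as illustrative): for the 14 play-tennis examples above, the attribute Wind splits $S$ into 8 Weak examples with entropy 0.811 and 6 Strong examples with entropy 1.0, giving

$$Gain(S, Wind) = 0.940 - \frac{8}{14}(0.811) - \frac{6}{14}(1.0) \approx 0.048$$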

Gini Index

$$GINI(t) = 1 - \sum_j [p(j \mid t)]^2$$

Gini Split

$$GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n}\,GINI(i)$$

Misclassification Error

$$Error(t) = 1 - \max_i P(i \mid t)$$
(Figure: schematic comparison of the impurity measures.)
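
The three node-impurity measures above are easy to compare side by side; here is a minimal sketch (the function names are illustrative, not part of the project code), where a node is given as a list of per-class sample counts:

def gini(counts):
    # GINI(t) = 1 - sum_j p(j|t)^2 for the class counts at a node
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def misclassification_error(counts):
    # Error(t) = 1 - max_i P(i|t)
    n = sum(counts)
    return 1.0 - max(c / n for c in counts)

def gini_split(children):
    # GINI_split = sum_i (n_i / n) * GINI(i) over the k child nodes
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

print(gini([3, 3]))                     # 0.5, a maximally impure binary node
print(gini([6, 0]))                     # 0.0, a pure node
print(misclassification_error([3, 3]))  # 0.5
print(gini_split([[6, 0], [3, 3]]))     # (6/12)*0.0 + (6/12)*0.5 = 0.25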

Training and Test Error

(Figure: training and test error versus the number of tree nodes.)
The figure shows that as the number of nodes grows, the error on the training data keeps decreasing, while the error on the test data first decreases and then increases, the classic signature of overfitting.

Hands-On Project (Predicting Patients' Contact-Lens Type)

"The algorithm used here is called ID3. It is a good algorithm, but not a perfect one. ID3 cannot directly handle numeric data; although numeric data can be converted to nominal values by quantization, ID3 still runs into other problems when there are too many candidate feature splits."

Core Code

Compute the Shannon entropy of the dataset

from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet:  # count the occurrences of each class label
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)  # log base 2
    return shannonEnt
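
A quick sanity check, using the small toy dataset from Machine Learning in Action (the createDataSet helper below is a sketch reconstructing that book's example, not part of the listing above):

def createDataSet():
    # toy dataset: two binary features, class label in the last column
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing', 'flippers']
    return dataSet, labels

myDat, labels = createDataSet()
print(calcShannonEnt(myDat))  # ~0.9710 = -(2/5)log2(2/5) - (3/5)log2(3/5)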

Split the dataset on a given feature, and choose the best feature to split on

def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]     #chop out axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
    
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1      #the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0; bestFeature = -1
    for i in range(numFeatures):        #iterate over all the features
        featList = [example[i] for example in dataSet]#create a list of all the examples of this feature
        uniqueVals = set(featList)       #get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet)/float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)     
        infoGain = baseEntropy - newEntropy     #calculate the info gain; ie reduction in entropy
        if (infoGain > bestInfoGain):       #compare this to the best gain so far
            bestInfoGain = infoGain         #if better than current best, set to best
            bestFeature = i
    return bestFeature                      #returns an integer
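
Continuing with the toy dataset above, the two functions behave as follows (outputs worked out by hand, still just a sketch):

myDat, labels = createDataSet()
print(splitDataSet(myDat, 0, 1))        # [[1, 'yes'], [1, 'yes'], [0, 'no']]
print(splitDataSet(myDat, 0, 0))        # [[1, 'no'], [1, 'no']]
print(chooseBestFeatureToSplit(myDat))  # 0, i.e. 'no surfacing' has the largest gain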

Recursively build the decision tree

def createTree(dataSet,labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList): 
        return classList[0]#stop splitting when all of the classes are equal
    if len(dataSet[0]) == 1: #stop splitting when there are no more features in dataSet
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel:{}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]       #copy all of labels, so trees don't mess up existing labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value),subLabels)
    return myTree        
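
Note that createTree calls a majorityCnt helper that is not shown in the listing; a minimal sketch matching how it is used (returning the most common class label when no features remain) could look like this:

from collections import Counter

def majorityCnt(classList):
    # majority vote: return the class label that occurs most often
    return Counter(classList).most_common(1)[0][0]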

Classify with the decision tree

def classify(inputTree, featLabels, testVec):
    firstStr = list(inputTree.keys())[0]    # the feature tested at this node
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)  # map the feature name to its index in testVec
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):       # internal node: recurse into the subtree
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else:                                   # leaf node: its value is the class label
        classLabel = valueOfFeat
    return classLabel
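
Putting it all together for the contact-lens project. The file name lenses.txt, its tab-separated layout, and the four feature labels follow the book's dataset, so treat this driver as a sketch of the intended usage rather than verbatim project code:

# build and query the toy tree first
myDat, labels = createDataSet()
myTree = createTree(myDat, labels[:])    # pass a copy: createTree mutates its labels argument
print(classify(myTree, labels, [1, 0]))  # 'no'
print(classify(myTree, labels, [1, 1]))  # 'yes'

# the lenses project: four nominal features, class label in the last column
with open('lenses.txt') as fr:
    lenses = [line.strip().split('\t') for line in fr]
lensesLabels = ['age', 'prescript', 'astigmatic', 'tearRate']
lensesTree = createTree(lenses, lensesLabels)
print(lensesTree)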

Full code and data: https://github.com/Miraclemin/DecisionTree