Implementing a Decision Tree in Python to Classify Watermelons
阿新 • Published 2019-01-04
This uses the example from Professor Zhou Zhihua's book. Since, as the joke goes, any algorithm presented without its dataset is just hooliganism, here is the dataset first:
The columns are: id, 色澤 (color), 根蒂 (root), 敲聲 (knock sound), 紋理 (texture), 臍部 (navel), 觸感 (touch), 密度 (density), 含糖率 (sugar content), 好瓜 (good melon).

```
0,色澤,根蒂,敲聲,紋理,臍部,觸感,密度,含糖率,好瓜
1,青綠,蜷縮,濁響,清晰,凹陷,硬滑,0.697,0.46,是
2,烏黑,蜷縮,沉悶,清晰,凹陷,硬滑,0.774,0.376,是
3,烏黑,蜷縮,濁響,清晰,凹陷,硬滑,0.634,0.264,是
4,青綠,蜷縮,沉悶,清晰,凹陷,硬滑,0.608,0.318,是
5,淺白,蜷縮,濁響,清晰,凹陷,硬滑,0.556,0.215,是
6,青綠,稍蜷,濁響,清晰,稍凹,軟粘,0.403,0.237,是
7,烏黑,稍蜷,濁響,稍糊,稍凹,軟粘,0.481,0.149,是
8,烏黑,稍蜷,濁響,清晰,稍凹,硬滑,0.437,0.211,是
9,烏黑,稍蜷,沉悶,稍糊,稍凹,硬滑,0.666,0.091,否
10,青綠,硬挺,清脆,清晰,平坦,軟粘,0.243,0.267,否
11,淺白,硬挺,清脆,模糊,平坦,硬滑,0.245,0.057,否
12,淺白,蜷縮,濁響,模糊,平坦,軟粘,0.343,0.099,否
13,青綠,稍蜷,濁響,稍糊,凹陷,硬滑,0.639,0.161,否
14,淺白,稍蜷,沉悶,稍糊,凹陷,硬滑,0.657,0.198,否
15,烏黑,稍蜷,濁響,清晰,稍凹,軟粘,0.36,0.37,否
16,淺白,蜷縮,濁響,模糊,平坦,硬滑,0.593,0.042,否
17,青綠,蜷縮,沉悶,稍糊,稍凹,硬滑,0.719,0.103,否
```
The decision tree's job is to figure out, from these features, what makes a watermelon good or bad. First read the dataset and store the results in dataSet and labels. Note that `index = [1,2,3,4,5,6,9]` keeps only the six categorical features plus the label, dropping 密度 (density) and 含糖率 (sugar content):
```python
import pandas

def createDataset():
    # columns 1-6 are the categorical features, column 9 is the label (好瓜)
    index = [1, 2, 3, 4, 5, 6, 9]
    data = pandas.read_csv(r"d:\data.csv", sep=",")
    data = data.values
    dataSet = []
    for i in range(len(data)):
        dataSet.append([data[i][j] for j in index])
    labels = ['色澤', '根蒂', '敲聲', '紋理', '臍部', '觸感']
    return dataSet, labels
```
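If you don't have a `d:\data.csv` on hand, the same structure can be built straight from the rows in the table above. A minimal sketch using four of those rows (the function name `createDatasetInline` is mine, not the article's):

```python
def createDatasetInline():
    # a subset of the table above: categorical features + label only,
    # so the code can be tried without d:\data.csv
    rows = [
        ['青綠', '蜷縮', '濁響', '清晰', '凹陷', '硬滑', '是'],  # row 1
        ['烏黑', '蜷縮', '沉悶', '清晰', '凹陷', '硬滑', '是'],  # row 2
        ['淺白', '硬挺', '清脆', '模糊', '平坦', '硬滑', '否'],  # row 11
        ['青綠', '蜷縮', '沉悶', '稍糊', '稍凹', '硬滑', '否'],  # row 17
    ]
    labels = ['色澤', '根蒂', '敲聲', '紋理', '臍部', '觸感']
    return rows, labels

dataSet, labels = createDatasetInline()
print(len(dataSet), labels)
```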
Next, decide how the tree should split. Start with the information (Shannon) entropy:
```python
from math import log

def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    # count how many samples fall into each class (last column is the label)
    for featVec in dataSet:
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts:
            labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key]) / numEntries
        shannonEnt -= prob * log(prob, 2)
    return shannonEnt
```
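As a sanity check on the formula: the table has 8 good melons (是) and 9 bad ones (否), so the entropy of the whole dataset should come out near 0.998, the value worked out for this dataset in Zhou's book:

```python
from math import log

# 8 of the 17 melons are good, 9 are bad
p_good, p_bad = 8 / 17, 9 / 17
ent = -p_good * log(p_good, 2) - p_bad * log(p_bad, 2)
print(ent)  # ≈ 0.998
```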
Then use information gain to decide which feature to split the dataset on:
```python
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1   # last column is the class label
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        featList = [example[i] for example in dataSet]
        uniqueVals = set(featList)
        newEntropy = 0.0
        # weighted entropy of the subsets produced by splitting on feature i
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy
        if infoGain > bestInfoGain:
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature
```
This relies on a helper that extracts the subset of rows matching a given feature value, with that feature column removed:
```python
def splitDataSet(dataSet, axis, value):
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            # keep the row, minus the column we just split on
            reducedFeatVec = featVec[:axis]
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet
```
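To check the splitting logic end to end, here is a self-contained toy run. The functions are condensed copies of the ones above so the snippet runs on its own, and the four rows are made up for illustration: the first feature (texture-like) perfectly separates the classes, so feature 0 should win:

```python
from math import log

def calcShannonEnt(dataSet):
    counts = {}
    for featVec in dataSet:
        counts[featVec[-1]] = counts.get(featVec[-1], 0) + 1
    return -sum(c / len(dataSet) * log(c / len(dataSet), 2)
                for c in counts.values())

def splitDataSet(dataSet, axis, value):
    return [fv[:axis] + fv[axis+1:] for fv in dataSet if fv[axis] == value]

def chooseBestFeatureToSplit(dataSet):
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain, bestFeature = 0.0, -1
    for i in range(len(dataSet[0]) - 1):
        newEntropy = 0.0
        for value in set(example[i] for example in dataSet):
            sub = splitDataSet(dataSet, i, value)
            newEntropy += len(sub) / len(dataSet) * calcShannonEnt(sub)
        if baseEntropy - newEntropy > bestInfoGain:
            bestInfoGain, bestFeature = baseEntropy - newEntropy, i
    return bestFeature

# made-up rows: [texture, touch, label]; texture alone decides the label
toy = [['清晰', '硬滑', '是'], ['清晰', '軟粘', '是'],
       ['模糊', '硬滑', '否'], ['模糊', '軟粘', '否']]
print(splitDataSet(toy, 0, '清晰'))      # [['硬滑', '是'], ['軟粘', '是']]
print(chooseBestFeatureToSplit(toy))     # 0: texture gives the highest gain
```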
Now build the decision tree model recursively. First a helper that returns the majority class, used when the features run out:
```python
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        if vote not in classCount:
            classCount[vote] = 0
        classCount[vote] += 1
    # items(), not the Python 2 iteritems()
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
```
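A quick standalone check of the majority vote (the function is repeated here so the snippet runs by itself):

```python
import operator

def majorityCnt(classList):
    classCount = {}
    for vote in classList:
        classCount[vote] = classCount.get(vote, 0) + 1
    return sorted(classCount.items(),
                  key=operator.itemgetter(1), reverse=True)[0][0]

print(majorityCnt(['是', '是', '否']))  # 是
```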
```python
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    # all samples in this node share one class: return it as a leaf
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    # no features left to split on: fall back to the majority class
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del labels[bestFeat]       # note: this mutates the caller's labels list
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]  # copy so sibling branches see the same labels
        myTree[bestFeatLabel][value] = createTree(
            splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
```
Finally, run it to get the result:
```python
dataSet, labels = createDataset()
tree = createTree(dataSet, labels)
print(tree)
```
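For readers without a `d:\data.csv`, here is the whole pipeline in one self-contained piece, with the table inlined as Python lists and the functions condensed from the ones above. Since Gain(紋理) = 0.381 is the largest gain in Zhou's book for this dataset, texture should end up at the root:

```python
from math import log
import operator

# the 17 rows from the table above, categorical features + label only
raw = """青綠,蜷縮,濁響,清晰,凹陷,硬滑,是
烏黑,蜷縮,沉悶,清晰,凹陷,硬滑,是
烏黑,蜷縮,濁響,清晰,凹陷,硬滑,是
青綠,蜷縮,沉悶,清晰,凹陷,硬滑,是
淺白,蜷縮,濁響,清晰,凹陷,硬滑,是
青綠,稍蜷,濁響,清晰,稍凹,軟粘,是
烏黑,稍蜷,濁響,稍糊,稍凹,軟粘,是
烏黑,稍蜷,濁響,清晰,稍凹,硬滑,是
烏黑,稍蜷,沉悶,稍糊,稍凹,硬滑,否
青綠,硬挺,清脆,清晰,平坦,軟粘,否
淺白,硬挺,清脆,模糊,平坦,硬滑,否
淺白,蜷縮,濁響,模糊,平坦,軟粘,否
青綠,稍蜷,濁響,稍糊,凹陷,硬滑,否
淺白,稍蜷,沉悶,稍糊,凹陷,硬滑,否
烏黑,稍蜷,濁響,清晰,稍凹,軟粘,否
淺白,蜷縮,濁響,模糊,平坦,硬滑,否
青綠,蜷縮,沉悶,稍糊,稍凹,硬滑,否"""
dataSet = [line.split(',') for line in raw.split('\n')]
labels = ['色澤', '根蒂', '敲聲', '紋理', '臍部', '觸感']

def calcShannonEnt(dataSet):
    counts = {}
    for fv in dataSet:
        counts[fv[-1]] = counts.get(fv[-1], 0) + 1
    return -sum(c / len(dataSet) * log(c / len(dataSet), 2)
                for c in counts.values())

def splitDataSet(dataSet, axis, value):
    return [fv[:axis] + fv[axis+1:] for fv in dataSet if fv[axis] == value]

def chooseBestFeatureToSplit(dataSet):
    base = calcShannonEnt(dataSet)
    bestGain, bestFeat = 0.0, -1
    for i in range(len(dataSet[0]) - 1):
        newEnt = 0.0
        for value in set(ex[i] for ex in dataSet):
            sub = splitDataSet(dataSet, i, value)
            newEnt += len(sub) / len(dataSet) * calcShannonEnt(sub)
        if base - newEnt > bestGain:
            bestGain, bestFeat = base - newEnt, i
    return bestFeat

def majorityCnt(classList):
    counts = {}
    for vote in classList:
        counts[vote] = counts.get(vote, 0) + 1
    return sorted(counts.items(),
                  key=operator.itemgetter(1), reverse=True)[0][0]

def createTree(dataSet, labels):
    classList = [ex[-1] for ex in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]
    if len(dataSet[0]) == 1:
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestLabel = labels[bestFeat]
    tree = {bestLabel: {}}
    restLabels = labels[:bestFeat] + labels[bestFeat+1:]
    for value in set(ex[bestFeat] for ex in dataSet):
        tree[bestLabel][value] = createTree(
            splitDataSet(dataSet, bestFeat, value), restLabels[:])
    return tree

tree = createTree(dataSet, labels)
print(list(tree.keys()))  # expected root: 紋理 (texture)
```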
Drawn out, the decision tree looks roughly like the figure below:
Only after drawing it did I remember that my computer actually has Visio. So annoying.