Implementing the K-Nearest Neighbors Algorithm
##### Implementing the kd-Tree
> There are many ways to implement the k-nearest neighbors (KNN) algorithm, such as an exhaustive scan over all points, ball trees, kd-trees, and so on. Here we use a kd-tree to implement KNN.
- Building the kd-tree
```
T = [[2,3],[5,4],[9,6],[4,7],[8,1],[7,2]]

class myNode:
    def __init__(self, point):
        self.left = None
        self.right = None
        self.point = point  # the point stored at this node

# Return the middle element (after sorting) and its index
def media(data):
    m = int(len(data) / 2)
    return data[m], m

# Build the kd-tree, alternating the splitting dimension d (0 or 1) at each level
def build_kd_tree(data, d):
    data = sorted(data, key=lambda x: x[d])  # sort by the current feature
    p, m = media(data)
    tree = myNode(p)
    del data[m]
    print(data, p)
    if m > 0: tree.left = build_kd_tree(data[:m], not d)
    if len(data) > 1: tree.right = build_kd_tree(data[m:], not d)
    return tree

kd_tree = build_kd_tree(T, 0)
print(kd_tree)
```
First, a class is defined whose constructor stores the point at a node together with its left and right branches.
media returns the middle element each time. build_kd_tree builds the kd-tree: it sorts the incoming data by the current feature, takes the middle element as the node of a subtree, and then recursively builds the left and right branches from the remaining points on either side.
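To check the result, a small traversal can print the tree level by level. This is a minimal sketch, assuming the myNode and build_kd_tree definitions from the block above; traverse is a hypothetical helper added here for illustration only.
```
# Hypothetical helper: pre-order traversal of the tree built above.
def traverse(node, depth=0):
    if node is None:
        return
    # Indent each point by its depth; the splitting axis alternates 0, 1, 0, ...
    print("  " * depth, node.point, "axis:", depth % 2)
    traverse(node.left, depth + 1)
    traverse(node.right, depth + 1)

traverse(kd_tree)
# Expected for this classic example: root [7,2]; one level down [5,4] and [9,6];
# leaves [2,3], [4,7] and [8,1].
```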
- Searching the kd-tree
```
T = [[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]]

# kd-tree node, this time with a link back to its parent for backtracking
class node:
    def __init__(self, point):
        self.left = None
        self.right = None
        self.parent = None
        self.point = point
    def set_left(self, left):
        if left == None: return
        self.left = left
        left.parent = self
    def set_right(self, right):
        if right == None: return
        self.right = right
        right.parent = self

def media(lst):
    m = int(len(lst) / 2)
    return lst[m], m

# Build the kd-tree
def build_kdtree(data, d):
    data = sorted(data, key=lambda x: x[d])
    p, m = media(data)
    tree = node(p)
    del data[m]
    if m > 0:
        tree.set_left(build_kdtree(data[:m], not d))
    if len(data) > 1:
        tree.set_right(build_kdtree(data[m:], not d))
    return tree

# Euclidean distance between a point and the target
def get_distance(pointA, target):
    print(pointA, target)
    return ((pointA[0] - target[0]) ** 2 + (pointA[1] - target[1]) ** 2) ** 0.5

# Search the kd-tree for the point nearest to the target
def search_kdtree(tree, d, target):
    # If the target is smaller than the current node on this dimension, go left, otherwise right
    if target[d] < tree.point[d]:
        if tree.left != None:
            return search_kdtree(tree.left, not d, target)
    else:
        if tree.right != None:
            return search_kdtree(tree.right, not d, target)
    # Update the current nearest point
    def update_best(t):
        if t == None: return
        t = t.point
        distance = get_distance(t, target)
        print(distance)
        if distance < best[1]:
            best[0] = t
            best[1] = distance
    # Start from the leaf we reached, using its actual distance to the target
    best = [tree.point, get_distance(tree.point, target)]
    # Backtrack towards the root, checking both children of each parent on the way up
    while tree.parent != None:
        update_best(tree.parent.left)
        update_best(tree.parent.right)
        tree = tree.parent
    return best[0]

kd_tree = build_kdtree(T, 0)
print(search_kdtree(kd_tree, 0, [2.1, 3.5]))
```
Searching the kd-tree happens after the tree has been built, and it works by repeatedly computing distances from visited points to the target and updating the current best. search_kdtree() first recurses down the tree; once it reaches a leaf it backtracks towards the root, computing the distance from each visited point to the target and keeping the point with the smallest distance.
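Because this simplified backtracking only inspects the children of each ancestor (it does not do full kd-tree pruning), it is worth cross-checking the answer against a plain linear scan. A minimal sketch, assuming the build_kdtree, search_kdtree and get_distance definitions from the block above; brute_force_nearest is a hypothetical helper, and agreement is only verified here for this particular target.
```
# Hypothetical cross-check: nearest neighbor by exhaustive scan.
def brute_force_nearest(points, target):
    # Compute the distance from every point to the target and keep the smallest.
    return min(points, key=lambda p: get_distance(p, target))

T = [[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]]
target = [2.1, 3.5]
kd_tree = build_kdtree(T, 0)
print(search_kdtree(kd_tree, 0, target))  # answer from the kd-tree search
print(brute_force_nearest(T, target))     # answer from the exhaustive scan, here [2, 3]
```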
##### KNN Based on Exhaustive Search
> Here I first implement KNN by computing the Euclidean distance to every point, without a kd-tree, since the kd-tree is relatively more complex.
- Preparing the data
To implement KNN we first need to read the data and split the data set into a training set and a test set.
```
import csv
import random

# filename: path to the file  d: number of numeric features
# split: roughly a 67:33 split between training and test set
# trainSet: training set  testSet: test set
def loadDataSet(filename, d, split=0.66, trainSet=[], testSet=[]):
    with open(filename, "r") as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
        for x in range(len(dataset)):
            # Convert the first d columns from strings to floats
            for y in range(d):
                dataset[x][y] = float(dataset[x][y])
            # Randomly assign each row to the training or the test set
            if random.random() < split:
                trainSet.append(dataset[x])
            else:
                testSet.append(dataset[x])
```
I'm not entirely clear on the details of this data-loading code and will sort it out later; since it isn't the core part of KNN, I won't dwell on it for now.
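For reference, the same split can also be done with pandas and sklearn's train_test_split. This is a minimal sketch under the assumption that iris.csv has no header row, four numeric feature columns, and the class label in the last column; loadDataSetSklearn is a hypothetical name used only here.
```
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical alternative loader using pandas + sklearn.
def loadDataSetSklearn(filename, split=0.67):
    # Read the CSV without a header row; the last column is the class label.
    data = pd.read_csv(filename, header=None).values.tolist()
    # train_test_split shuffles and splits in one call; train_size plays the role of `split`.
    trainSet, testSet = train_test_split(data, train_size=split, random_state=0)
    return trainSet, testSet

# trainSet, testSet = loadDataSetSklearn(r"D:\Python Dataset\iris.csv")
```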
- Computing similarity (distance)
```
import math

# Euclidean distance; length: number of features to use, data1/data2: the two records
def get_distance(data1, data2, length):
    distance = 0
    # Sum the squared differences over each feature dimension
    for i in range(length):
        distance += pow((data1[i] - data2[i]), 2)
    return math.sqrt(distance)
```
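A tiny worked example: the length argument is the number of numeric features, so a class label in the last position is simply ignored (the records below are made up for illustration).
```
# 3-4-5 triangle: only the first two (numeric) columns enter the distance, the labels are ignored.
print(get_distance([0, 0, 'a'], [3, 4, 'b'], length=2))  # 5.0
```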
- Finding the nearest neighbors
Here, for a given k, we retrieve the k points in the training set that are closest to the target point.
```
import operator

# trainingSet: training set  testInstance: the record to classify
def getNeighbors(trainingSet, testInstance, k):
    distance = []
    # Subtract 1 because the last column holds the class label, which must not enter the distance
    length = len(testInstance) - 1
    for i in range(len(trainingSet)):
        dist = get_distance(trainingSet[i], testInstance, length)
        distance.append((trainingSet[i], dist))
    # Sort the (record, distance) pairs by the distance at index 1
    distance.sort(key=operator.itemgetter(1))
    neighbors = []
    for j in range(k):
        neighbors.append(distance[j][0])
    return neighbors
```
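A quick check on a hand-made training set (the points and labels below are made up for illustration), assuming get_distance from the previous block:
```
# Toy training set: two features plus a class label in the last column (made-up data).
trainSet = [[1.0, 1.0, 'a'], [1.2, 0.8, 'a'], [5.0, 5.0, 'b'], [5.5, 4.5, 'b']]
testInstance = [1.1, 0.9, 'a']
print(getNeighbors(trainSet, testInstance, k=3))
# Expected: the two 'a' points come first, followed by the nearer 'b' point [5.0, 5.0, 'b'].
```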
- Making a prediction
Next, based on those nearest neighbors, we predict which class the target point belongs to.
```
# neighbors: the k nearest points
def getResponse(neighbors):
    # Count the votes for each class
    classVotes = {}
    for x in range(len(neighbors)):
        response = neighbors[x][-1]  # assume the class label is in the last column
        # Majority vote
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    # Sort by vote count, highest first
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]  # return the most frequent class
```
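Continuing the toy example above, getResponse just tallies the labels in the last column. A minimal check with made-up neighbors:
```
# Majority vote over the k nearest neighbors (made-up data).
neighbors = [[1.0, 1.0, 'a'], [1.2, 0.8, 'a'], [5.0, 5.0, 'b']]
print(getResponse(neighbors))  # 'a' wins the vote 2:1
```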
- Computing the accuracy
Finally we compute the accuracy. This is just a simple measure: the number of correctly classified points divided by the total number of test points.
```
# testSet: test set  predictions: predicted labels for the test set
def getScore(testSet, predictions):
    correct = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            correct += 1
    return float(correct / len(testSet))
```
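A quick sanity check on made-up labels:
```
# Made-up example: the last element of each test row is its true label.
testSet = [[0, 0, 'a'], [1, 1, 'b'], [2, 2, 'a']]
predictions = ['a', 'b', 'b']
print(getScore(testSet, predictions))  # 2 hits out of 3 -> 0.666...
```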
- Main function
```
def main():
    trainingSet = []
    testSet = []
    split = 0.67
    # Split the data set
    loadDataSet(r"D:\Python Dataset\iris.csv", 4, split, trainingSet, testSet)
    print("Training Set: ", repr(len(trainingSet)))
    print("Test Set: ", repr(len(testSet)))
    k = 3
    predictions = []
    # Classify each test record using its k nearest neighbors
    for i in range(len(testSet)):
        neighbors = getNeighbors(trainingSet, testSet[i], k)
        response = getResponse(neighbors)
        predictions.append(response)
    score = getScore(testSet, predictions)
    print("The score is: ", repr(score))

main()
```
##### KNN Based on a kd-Tree
> This part combines the idea of the kd-tree with the KNN implementation from the previous part to implement KNN on top of a kd-tree. The program is probably not very well designed, so both the accuracy and the running time are worse than the exhaustive-search KNN; the accuracy is about 0.1-0.2 lower. I will revise it later when I have time.
- [ ] KNN algorithm implementation
- The base node class used to build the kd-tree
```
# Imports used by the blocks that follow
import csv
import math
import operator
import random
import time

# kd-tree node that also keeps a reference to its parent
class myNode:
    def __init__(self, point):
        self.left = None
        self.right = None
        self.parent = None
        self.point = point
    def setLeft(self, left):
        if left == None:
            return
        self.left = left
        left.parent = self
    def setRight(self, right):
        if right == None:
            return
        self.right = right
        right.parent = self
```
- Building the kd-tree
```
# Return the middle element (after sorting) and its index
def media(lst):
    m = int(len(lst) / 2)
    return lst[m], m

# Build the kd-tree, alternating the splitting dimension at each level
def build_kd_tree(data, d):
    data = sorted(data, key=lambda x: x[d])
    p, m = media(data)
    tree = myNode(p)
    del data[m]
    #print(data, p)
    if m > 0:
        tree.setLeft(build_kd_tree(data[:m], not d))
    if len(data) > 1:
        tree.setRight(build_kd_tree(data[m:], not d))
    #print("The data length is :", repr(len(data)))
    return tree
```
- Reading the data and splitting it into a training set and a test set
```
# Read the data file and randomly split it into training and test sets
def loadDataSet(filename, d, split=0.66, trainingSet=[], testSet=[]):
    with open(filename, "r") as csvfile:
        lines = csv.reader(csvfile)
        dataSet = list(lines)
        for x in range(len(dataSet)):
            # Convert the first d columns from strings to floats
            for y in range(d):
                dataSet[x][y] = float(dataSet[x][y])
            if random.random() < split:
                trainingSet.append(dataSet[x])
            else:
                testSet.append(dataSet[x])
```
- Computing the Euclidean distance
```
# Euclidean distance over the first `length` features
def getDistance(data1, data2, length):
    distance = 0
    for i in range(length):
        distance += pow((data1[i] - data2[i]), 2)
    return math.sqrt(distance)
```
- Searching the kd-tree and returning the k points closest to the target
```
# Search the kd-tree and return the k points closest to the target
def search_kdtree(tree, d, target, k=3):
    print("This is searchFun")
    length = len(target) - 1  # exclude the class label in the last column from the distance
    # Descend to a leaf: go left if the target is smaller on this dimension, otherwise right
    if target[d] < tree.point[d]:
        if tree.left != None:
            return search_kdtree(tree.left, not d, target, k)
    else:
        if tree.right != None:
            return search_kdtree(tree.right, not d, target, k)
    # Replace the current k-th best point if t is closer to the target
    # (best is taken from the enclosing scope, so the second argument is effectively unused)
    def updateBestDis(t, distance, k):
        print("This is updateFun")
        if t == None:
            return
        t = t.point
        distance = getDistance(t, target, length)
        print(t, distance)
        if distance < best[k - 1][1]:
            best[k - 1][0] = t
            best[k - 1][1] = distance
            best.sort(key=operator.itemgetter(1))
        return best
    # Append t to best while it still holds fewer than k candidates
    def initBestDis(t, i):
        print("This is initFun")
        print("i is : ", repr(i))
        if t == None:
            return
        t = t.point
        distance = getDistance(t, target, length)
        print(t, distance)
        best.append([t, distance])
        best.sort(key=operator.itemgetter(1))
    best = []
    i = 0
    # Initialize best with the first k candidates found while walking up from the leaf
    if k >= 2:
        while i <= k - 1:
            if tree.parent != None:
                initBestDis(tree.parent.left, i)
                i += 1
                if i <= k - 1:
                    if tree.parent.right != None:
                        initBestDis(tree.parent.right, i)
                        i += 1
                tree = tree.parent
    else:
        initBestDis(tree.parent.left, i)
    print(best)
    print("The Best Length is : ", repr(len(best)))
    minK = best[k - 1][1]
    # Backtrack towards the root; only inspect a parent's children if the parent itself
    # is closer than the current k-th best distance
    while tree.parent != None:
        if getDistance(tree.parent.point, target, length) < minK:
            updateBestDis(tree.parent.left, best, k)
            updateBestDis(tree.parent.right, best, k)
        tree = tree.parent
    print("search Done!!!")
    best.sort(key=operator.itemgetter(1))
    neighbors = []
    for i in range(k):
        neighbors.append(best[i][0])
    return neighbors
```
This is the core of the algorithm, so let me walk through it in detail. First, the kd-tree is searched recursively to find the leaf closest to the target point: if the target's value on the current dimension is smaller than the node's, we go into the left subtree, otherwise the right one. Inside the search there are two helper functions, initBestDis() and updateBestDis(). initBestDis() initializes the list of the k nearest points found so far according to the given k, while updateBestDis() updates that list (best): if a new point is closer than the k-th entry, it replaces it. The loop that follows initializes the best list. The final loop computes the distance from the remaining nodes to the target and performs replacements; if a parent node's distance is greater than the largest value in best, its children are not examined.
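The two helpers above keep best sorted by hand. As a hedged alternative (not part of the original code), the k best candidates could instead be kept in a max-heap, so the current worst of the k is always available in O(1), and the standard kd-tree pruning rule could decide when to cross the splitting plane. The sketch below assumes the myNode tree built by build_kd_tree and the getDistance helper from the blocks above; knn_collect and knn_neighbors are hypothetical names.
```
import heapq

# Hypothetical alternative: collect the k nearest points with a max-heap keyed on
# negative distance, so heap[0] is always the worst of the k kept so far.
def knn_collect(tree, target, k, length, heap, d=0):
    if tree is None:
        return
    dist = getDistance(tree.point, target, length)
    if len(heap) < k:
        heapq.heappush(heap, (-dist, tree.point))
    elif dist < -heap[0][0]:
        # The new point is closer than the current worst of the k kept so far: replace it.
        heapq.heapreplace(heap, (-dist, tree.point))
    # Visit the side of the splitting plane the target falls on first ...
    near, far = (tree.left, tree.right) if target[d] < tree.point[d] else (tree.right, tree.left)
    # `not d` mirrors the alternation used in build_kd_tree above (it only cycles dimensions 0 and 1)
    knn_collect(near, target, k, length, heap, not d)
    # ... and only cross the plane if it could still hold a closer point.
    if len(heap) < k or abs(target[d] - tree.point[d]) < -heap[0][0]:
        knn_collect(far, target, k, length, heap, not d)

def knn_neighbors(tree, target, k=3):
    heap = []
    knn_collect(tree, target, k, len(target) - 1, heap)
    # Sort so the nearest point comes first, then drop the distances.
    return [point for _, point in sorted(heap, reverse=True)]
```
In the main function below, neighbors = knn_neighbors(kd_tree, testSet[i], k) could be dropped in for search_kdtree(kd_tree, 0, testSet[i], k) to compare the two approaches.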
- Predicting the class of the target by majority vote over the nearest neighbors
```
# Make a prediction by majority vote
def getResponse(neighbors):
    classDic = {}
    for i in range(len(neighbors)):
        response = neighbors[i][-1]
        if response in classDic:
            classDic[response] += 1
        else:
            classDic[response] = 1
    # Sort the classes by their vote count, highest first
    sortDic = sorted(classDic.items(), key=operator.itemgetter(1), reverse=True)
    return sortDic[0][0]
```
- Computing the accuracy
```
# Accuracy: number of correct predictions divided by the number of test records
def getScore(testSet, predictions):
    print("testSet is : ", repr(len(testSet)))
    print("prediction is : ", repr(len(predictions)))
    temp = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            temp += 1
    return float(temp / len(testSet))
```
- Main function
```
def main():
    start = time.perf_counter()  # time.clock() was removed in Python 3.8
    split = 0.66
    trainingSet = []
    testSet = []
    loadDataSet(r"D:\Python Dataset\iris.csv", 4, split, trainingSet, testSet)
    print("The TrainingSet length : ", repr(len(trainingSet)))
    print("The TestSet length : ", repr(len(testSet)))
    # Build the kd-tree from the training set
    kd_tree = build_kd_tree(trainingSet, 0)
    print("-------", repr(kd_tree))
    predictions = []
    k = 3
    for i in range(len(testSet)):
        neighbors = search_kdtree(kd_tree, 0, testSet[i], k)
        result = getResponse(neighbors)
        predictions.append(result)
    print("----------->", repr(predictions))
    score = getScore(testSet, predictions)
    print("The Score is : ", repr(score))
    end = time.perf_counter()
    total_time = end - start
    print("-----The Time finished-----", str(total_time))

main()
```