Machine Learning: The K-Nearest Neighbors Algorithm
By 阿新 • Published 2018-11-05
Use case 1:
from sklearn.neighbors import NearestNeighbors
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
# With n_neighbors=2, find the two nearest neighbors of each point in X,
# returning their distances and indices
distances, indices = nbrs.kneighbors(X)
print(distances)
print(indices)
# Connectivity graph: a 1 marks each point's nearest neighbors
print(nbrs.kneighbors_graph(X).toarray())
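The same fitted model can also answer queries for points that were not in X. The following is a minimal sketch under that assumption; the query points here are made up purely for illustration:

from sklearn.neighbors import NearestNeighbors
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
# Hypothetical query points, not part of the original example
queries = np.array([[0, 0], [2.5, 1.5]])
# For each query point, return the distances to and indices of its 2 nearest neighbors in X
distances, indices = nbrs.kneighbors(queries)
print(distances)
print(indices)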
Use case 2:
from sklearn.neighbors import KNeighborsClassifier
X = [[0], [1], [2], [3]]
Y = [0, 0, 1, 1]
neigh = KNeighborsClassifier(n_neighbors=3)
# With n_neighbors=3, train the classifier on X and Y;
# X holds the input values, Y the target class labels
neigh.fit(X, Y)
# For input 1.1, predict whether the target class is 0 or 1
print(neigh.predict([[1.1]]))
# For input 0.9, predict the probability of each class
print(neigh.predict_proba([[0.9]]))
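To make the prediction step concrete, here is a minimal sketch of the majority-vote logic that a KNN classifier applies: find the k closest training points and return their most common label. The knn_predict helper below is a hypothetical illustration, not the library's internal implementation:

import numpy as np
from collections import Counter
X = np.array([[0], [1], [2], [3]])
Y = np.array([0, 0, 1, 1])
def knn_predict(x, k=3):
    # Distances from the query value to every training point
    dists = np.abs(X.ravel() - x)
    # Indices of the k closest training points
    nearest = np.argsort(dists)[:k]
    # Majority vote over the neighbors' labels
    return Counter(Y[nearest]).most_common(1)[0][0]
print(knn_predict(1.1))  # expected 0: the 3 neighbors are 1, 0, 2 with labels 0, 0, 1

With these data, the three nearest neighbors of 0.9 carry labels 0, 0, 1, so predict_proba in the example above reports roughly [2/3, 1/3] for classes 0 and 1.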
Further reading: 《Web安全之機器學習入門》 (an introduction to machine learning for web security)
https://github.com/duoergun0729/1book/