KD-Tree Optimization of the k-Nearest Neighbor Algorithm (Construction and Search of a KD Tree) in Python

A good function 2020-11-13 02:53:31
Tags: kd tree, optimization, k-nearest neighbor


Preface

For the implementation principles of kd trees, see my earlier blog post on the kd-tree-optimized k-nearest neighbor algorithm.
Reference article: wenffe, "Implementing KD Trees in Python".

1. Construction of the kd tree

```python
import numpy as np

class Node(object):
    """
    Node class:
    val:    the instance point stored in the node
    label:  the class of the instance in the node
    dim:    the split dimension of the current node
    left:   the left subtree of the node
    right:  the right subtree of the node
    parent: the parent of the node
    """
    def __init__(self, val=None, label=None, dim=None, left=None, right=None, parent=None):
        self.val = val
        self.label = label
        self.dim = dim
        self.left = left
        self.right = right
        self.parent = parent

class kdTree(object):
    """
    Tree class:
    dataNum: the number of samples in the training set
    root:    the root node of the constructed kd tree
    """
    def __init__(self, dataSet, labelList):
        self.dataNum = 0
        self.root = self.buildKdTree(dataSet, labelList)  # mind the parent pointers

    def buildKdTree(self, dataSet, labelList, parentNode=None):
        data = np.array(dataSet)
        if len(data) == 0:  # if the training set is empty, return None
            return None
        dataNum, dimNum = data.shape  # number of samples, dimension of each sample
        label = np.array(labelList).reshape(dataNum, 1)
        varList = self.getVar(data)  # variance of each dimension
        mid = dataNum // 2  # index of the median
        maxVarDimIndex = varList.index(max(varList))  # dimension with the largest variance
        sortedDataIndex = data[:, maxVarDimIndex].argsort()  # sort along that dimension
        midDataIndex = sortedDataIndex[mid]  # the median sample becomes the root of this subtree
        if dataNum == 1:  # a single sample: return it as a leaf
            self.dataNum = dataNum
            return Node(val=data[midDataIndex], label=label[midDataIndex], dim=maxVarDimIndex,
                        left=None, right=None, parent=parentNode)
        root = Node(data[midDataIndex], label[midDataIndex], maxVarDimIndex, parent=parentNode)
        # Split into left and right subtrees, then recurse.
        leftDataSet = data[sortedDataIndex[:mid]]  # note: mid, not midDataIndex
        leftLabel = label[sortedDataIndex[:mid]]
        rightDataSet = data[sortedDataIndex[mid + 1:]]
        rightLabel = label[sortedDataIndex[mid + 1:]]
        root.left = self.buildKdTree(leftDataSet, leftLabel, parentNode=root)
        root.right = self.buildKdTree(rightDataSet, rightLabel, parentNode=root)
        self.dataNum = dataNum  # record the number of training samples
        return root

    def getRoot(self):  # renamed from root(): the original name collided with the root attribute
        return self.root

    def getVar(self, data):  # per-dimension variance
        rowLen, colLen = data.shape
        varList = []
        for i in range(colLen):
            varList.append(np.var(data[:, i]))
        return varList
```
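As a quick sanity check of the split rule (split on the dimension with the largest variance), this standalone snippet, which is not part of the original class, recomputes the per-dimension variances for the sample data used later in section 4; the tree should split on dimension 0 first:

```python
import numpy as np

# Recompute the per-dimension variances for the section-4 sample data
# and pick the split dimension the same way buildKdTree does.
data = np.array([[7, 2], [5, 4], [2, 3], [4, 7], [9, 6], [8, 1]])
variances = [np.var(data[:, i]) for i in range(data.shape[1])]
split_dim = variances.index(max(variances))
print(split_dim)  # → 0, matching the root's dim in the example output below
```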

2. Converting the kd tree to a list and a dict

2.1 Converting to a list

```python
    """
    Each element of the list is a dictionary whose keys hold the node's value,
    split dimension, class, left and right children, and parent.
    Each dictionary represents one node.
    """
    def transferTreeToList(self, root, rootList=None):
        if rootList is None:  # avoid the mutable-default-argument pitfall
            rootList = []
        if root is None:
            return rootList
        tempDict = {}
        tempDict["data"] = root.val
        tempDict["left"] = root.left.val if root.left else None
        tempDict["right"] = root.right.val if root.right else None
        tempDict["parent"] = root.parent.val if root.parent else None
        tempDict["label"] = root.label[0]
        tempDict["dim"] = root.dim
        rootList.append(tempDict)
        self.transferTreeToList(root.left, rootList)
        self.transferTreeToList(root.right, rootList)
        return rootList
```

2.2 Converting to a dictionary

```python
    def transferTreeToDict(self, root):
        if root is None:
            return None
        # Note: dictionary keys must be immutable, so arrays and lists cannot
        # be used; the node value is converted to a tuple.
        treeDict = {}  # renamed from `dict` to avoid shadowing the builtin
        treeDict[tuple(root.val)] = {}
        treeDict[tuple(root.val)]["label"] = root.label[0]  # root.label is an np array; index to get the scalar
        treeDict[tuple(root.val)]["dim"] = root.dim
        treeDict[tuple(root.val)]["parent"] = root.parent.val if root.parent else None
        treeDict[tuple(root.val)]["left"] = self.transferTreeToDict(root.left)
        treeDict[tuple(root.val)]["right"] = self.transferTreeToDict(root.right)
        return treeDict
```
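The tuple conversion matters because Python rejects mutable dictionary keys. A minimal standalone illustration (not part of the original class):

```python
# A list is unhashable and cannot be a dict key; a tuple of the same
# coordinates can, which is why transferTreeToDict calls tuple(root.val).
point = [7, 2]
d = {}
try:
    d[point] = "node"          # raises TypeError: unhashable type: 'list'
except TypeError:
    d[tuple(point)] = "node"   # tuples are immutable and hashable
print(d)  # → {(7, 2): 'node'}
```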

3. Searching the kd tree

3.1 Finding the leaf node for x

```python
    def findtheNearestLeafNode(self, root, x):
        if root is None:  # equivalently, check whether self.dataNum == 0
            return None
        if root.left is None and root.right is None:
            return root
        node = root
        while True:  # descend until reaching a leaf or a node with no child on the chosen side
            curDim = node.dim
            if x[curDim] < node.val[curDim]:
                if not node.left:
                    return node
                node = node.left
            else:
                if not node.right:
                    return node
                node = node.right
```
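The descent step can be sketched on its own, with plain dicts standing in for Node objects; the mini-tree below is a hypothetical hand-built copy of the tree from the section-4 example:

```python
# Stand-alone sketch of the leaf-descent step: at each node, compare the
# query's coordinate in the node's split dimension and walk left or right
# until no child remains on the chosen side.
tree = {
    "val": (7, 2), "dim": 0,
    "left":  {"val": (5, 4), "dim": 1,
              "left":  {"val": (2, 3), "dim": 0, "left": None, "right": None},
              "right": {"val": (4, 7), "dim": 0, "left": None, "right": None}},
    "right": {"val": (9, 6), "dim": 1,
              "left":  {"val": (8, 1), "dim": 0, "left": None, "right": None},
              "right": None},
}

def descend(node, x):
    while True:
        child = node["left"] if x[node["dim"]] < node["val"][node["dim"]] else node["right"]
        if child is None:
            return node
        node = child

print(descend(tree, (6, 3))["val"])  # → (2, 3)
```

Note that the leaf found this way is only a starting point: for (6, 3) the true nearest neighbors are found during backtracking, which is exactly why the search in 3.2 walks back up the tree.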

3.2 Searching for the k nearest neighbors

```python
    """
    Searching for the k nearest neighbors. The only difference from the plain
    nearest-neighbor search is that an array holds the current k nearest
    points, and the pruning condition uses the k-th smallest distance (the
    "gatekeeper" of the result set) instead of the single nearest distance.
    A node enters the result array only if the result holds fewer than k
    nodes, or its distance to the input instance is smaller than the k-th
    smallest distance.
    """
    def knnSearch(self, x, k):
        """
        When the whole training set has no more than k samples, every sample
        is a neighbor: count the labels with a dictionary and apply the
        majority-vote rule.
        """
        if self.dataNum <= k:
            labelDict = {}
            for element in self.transferTreeToList(self.root):
                if element["label"] not in labelDict:
                    labelDict[element["label"]] = 0
                labelDict[element["label"]] += 1
            sortedLabelList = sorted(labelDict.items(), key=lambda item: item[1], reverse=True)  # sorting dict items yields a list of tuples
            return sortedLabelList[0][0]
        # First find the nearest leaf node, then backtrack toward the root.
        node = self.findtheNearestLeafNode(self.root, x)
        nodeList = []
        if node is None:  # empty tree: return None directly
            return None
        x = np.array(x)
        distance = np.sqrt(sum((x - node.val) ** 2))  # distance between the leaf and the input instance
        nodeList.append([distance, tuple(node.val), node.label[0]])
        # Each entry holds the distance, the instance point, and its class.
        while True:
            if node == self.root:  # stop when the backtracking reaches the root
                break
            parentNode = node.parent
            parentDis = np.sqrt(sum((x - parentNode.val) ** 2))  # distance between x and the parent
            if k > len(nodeList) or distance > parentDis:
                # Fewer than k results so far, or the parent is closer than the
                # current k-th smallest distance: push the parent into the list.
                nodeList.append([parentDis, tuple(parentNode.val), parentNode.label[0]])
                nodeList.sort()
                distance = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]  # update to the k-th smallest (or current largest) distance
            if k > len(nodeList) or abs(x[parentNode.dim] - parentNode.val[parentNode.dim]) < distance:
                # The search ball crosses the splitting plane: the other child's
                # region may contain a closer point, so search it too.
                if x[parentNode.dim] < parentNode.val[parentNode.dim]:
                    # x lies in the parent's left subtree, so search the right child
                    otherChild = parentNode.right
                else:  # otherwise search the left child
                    otherChild = parentNode.left
                self.search(nodeList, otherChild, x, k)  # recursive search of the other subtree
            node = node.parent
        labelDict = {}  # count classes and decide the class of the instance point
        nodeList = nodeList[:k] if k <= len(nodeList) else nodeList
        for element in nodeList:
            if element[2] not in labelDict:
                labelDict[element[2]] = 0
            labelDict[element[2]] += 1
        sortedLabel = sorted(labelDict.items(), key=lambda item: item[1], reverse=True)
        return sortedLabel[0][0]

    def search(self, nodeList, root, x, k):
        # Recursive k-nearest search; nearly identical to knnSearch, minus the
        # final class counting and voting.
        if root is None:
            return nodeList
        nodeList.sort()
        dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
        x = np.array(x)
        node = self.findtheNearestLeafNode(root, x)
        distance = np.sqrt(sum((x - node.val) ** 2))
        if k > len(nodeList) or distance < dis:
            nodeList.append([distance, tuple(node.val), node.label[0]])
            nodeList.sort()
            dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
        while True:
            if node == root:
                break
            parentNode = node.parent
            parentDis = np.sqrt(sum((x - parentNode.val) ** 2))
            if k > len(nodeList) or parentDis < dis:
                nodeList.append([parentDis, tuple(parentNode.val), parentNode.label[0]])
                nodeList.sort()
                dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
            if k > len(nodeList) or abs(x[parentNode.dim] - parentNode.val[parentNode.dim]) < dis:
                if x[parentNode.dim] < parentNode.val[parentNode.dim]:  # fixed: was parentNode.val[parentNode.val]
                    otherChild = parentNode.right
                else:
                    otherChild = parentNode.left
                self.search(nodeList, otherChild, x, k)
            node = node.parent
        return nodeList
```
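A useful way to validate the search is to compare it against a brute-force k-NN over the same data. The sketch below is not from the original post (it assumes a flat list of integer labels rather than the nested lists used above); it computes all distances directly and takes a majority vote, so any correct kd-tree search must agree with it:

```python
import numpy as np

# Brute-force k-NN cross-check: compute every distance, take the k closest,
# and return the majority label among them.
def knn_brute(data, labels, x, k):
    data = np.asarray(data, dtype=float)
    dists = np.sqrt(((data - np.asarray(x, dtype=float)) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes.items(), key=lambda kv: kv[1])[0]

dataArray = [[7, 2], [5, 4], [2, 3], [4, 7], [9, 6], [8, 1]]
labelList = [0, 1, 0, 1, 1, 1]
print(knn_brute(dataArray, labelList, [6, 3], 3))  # → 1
```

With k=1 the two closest points, (7, 2) and (5, 4), are tied at distance √2 but carry different labels, so the predicted class depends on tie-breaking; k=3 gives an unambiguous majority.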

4. Example

```python
if __name__ == "__main__":
    dataArray = [[7, 2], [5, 4], [2, 3], [4, 7], [9, 6], [8, 1]]
    label = [[0], [1], [0], [1], [1], [1]]
    kd = kdTree(dataArray, label)
    Tree = kd.buildKdTree(dataArray, label)  # root node of the tree
    nodeList = kd.transferTreeToList(Tree, [])
    nodeDict = kd.transferTreeToDict(Tree)
    node = kd.findtheNearestLeafNode(Tree, [6, 3])
    result = kd.knnSearch([6, 3], 1)
    print(nodeList)
    print(result)
"""
The output is: [
{'data': array([7, 2]), 'left': array([5, 4]), 'right': array([9, 6]), 'parent': None, 'label': 0, 'dim': 0},
{'data': array([5, 4]), 'left': array([2, 3]), 'right': array([4, 7]), 'parent': array([7, 2]), 'label': 1, 'dim': 1},
{'data': array([2, 3]), 'left': None, 'right': None, 'parent': array([5, 4]), 'label': 0, 'dim': 0},
{'data': array([4, 7]), 'left': None, 'right': None, 'parent': array([5, 4]), 'label': 1, 'dim': 0},
{'data': array([9, 6]), 'left': array([8, 1]), 'right': None, 'parent': array([7, 2]), 'label': 1, 'dim': 1},
{'data': array([8, 1]), 'left': None, 'right': None, 'parent': array([9, 6]), 'label': 1, 'dim': 0}]
"""
# The predicted category is: 1
```

5. Complete code


```python
import numpy as np

class Node(object):
    def __init__(self, val=None, label=None, dim=None, left=None, right=None, parent=None):
        self.val = val
        self.label = label
        self.dim = dim
        self.left = left
        self.right = right
        self.parent = parent

class kdTree(object):
    def __init__(self, dataSet, labelList):
        self.dataNum = 0
        self.root = self.buildKdTree(dataSet, labelList)  # mind the parent pointers

    def buildKdTree(self, dataSet, labelList, parentNode=None):
        data = np.array(dataSet)
        if len(data) == 0:  # if the training set is empty, return None
            return None
        dataNum, dimNum = data.shape
        label = np.array(labelList).reshape(dataNum, 1)
        varList = self.getVar(data)
        mid = dataNum // 2
        maxVarDimIndex = varList.index(max(varList))
        sortedDataIndex = data[:, maxVarDimIndex].argsort()
        midDataIndex = sortedDataIndex[mid]
        if dataNum == 1:
            self.dataNum = dataNum
            return Node(val=data[midDataIndex], label=label[midDataIndex], dim=maxVarDimIndex,
                        left=None, right=None, parent=parentNode)
        root = Node(data[midDataIndex], label[midDataIndex], maxVarDimIndex, parent=parentNode)
        leftDataSet = data[sortedDataIndex[:mid]]  # note: mid, not midDataIndex
        leftLabel = label[sortedDataIndex[:mid]]
        rightDataSet = data[sortedDataIndex[mid + 1:]]
        rightLabel = label[sortedDataIndex[mid + 1:]]
        root.left = self.buildKdTree(leftDataSet, leftLabel, parentNode=root)
        root.right = self.buildKdTree(rightDataSet, rightLabel, parentNode=root)
        self.dataNum = dataNum
        return root

    def getRoot(self):  # renamed from root(): the original name collided with the root attribute
        return self.root

    def transferTreeToDict(self, root):
        if root is None:
            return None
        # Dictionary keys must be immutable, so the node value becomes a tuple.
        treeDict = {}  # renamed from `dict` to avoid shadowing the builtin
        treeDict[tuple(root.val)] = {}
        treeDict[tuple(root.val)]["label"] = root.label[0]  # root.label is an array; index to get the scalar
        treeDict[tuple(root.val)]["dim"] = root.dim
        treeDict[tuple(root.val)]["parent"] = root.parent.val if root.parent else None
        treeDict[tuple(root.val)]["left"] = self.transferTreeToDict(root.left)
        treeDict[tuple(root.val)]["right"] = self.transferTreeToDict(root.right)
        return treeDict

    def transferTreeToList(self, root, rootList=None):
        if rootList is None:  # avoid the mutable-default-argument pitfall
            rootList = []
        if root is None:
            return rootList
        tempDict = {}
        tempDict["data"] = root.val
        tempDict["left"] = root.left.val if root.left else None
        tempDict["right"] = root.right.val if root.right else None
        tempDict["parent"] = root.parent.val if root.parent else None
        tempDict["label"] = root.label[0]
        tempDict["dim"] = root.dim
        rootList.append(tempDict)
        self.transferTreeToList(root.left, rootList)
        self.transferTreeToList(root.right, rootList)
        return rootList

    def getVar(self, data):
        rowLen, colLen = data.shape
        varList = []
        for i in range(colLen):
            varList.append(np.var(data[:, i]))
        return varList

    def findtheNearestLeafNode(self, root, x):
        if root is None:  # equivalently, check whether self.dataNum == 0
            return None
        if root.left is None and root.right is None:
            return root
        node = root
        while True:
            curDim = node.dim
            if x[curDim] < node.val[curDim]:
                if not node.left:
                    return node
                node = node.left
            else:
                if not node.right:
                    return node
                node = node.right

    def knnSearch(self, x, k):
        if self.dataNum <= k:
            labelDict = {}
            for element in self.transferTreeToList(self.root):
                if element["label"] not in labelDict:
                    labelDict[element["label"]] = 0
                labelDict[element["label"]] += 1
            sortedLabelList = sorted(labelDict.items(), key=lambda item: item[1], reverse=True)  # sorting dict items yields a list of tuples
            return sortedLabelList[0][0]
        node = self.findtheNearestLeafNode(self.root, x)
        nodeList = []
        if node is None:
            return None
        x = np.array(x)
        distance = np.sqrt(sum((x - node.val) ** 2))
        nodeList.append([distance, tuple(node.val), node.label[0]])
        while True:
            if node == self.root:
                break
            parentNode = node.parent
            parentDis = np.sqrt(sum((x - parentNode.val) ** 2))
            if k > len(nodeList) or distance > parentDis:
                nodeList.append([parentDis, tuple(parentNode.val), parentNode.label[0]])
                nodeList.sort()
                distance = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
            if k > len(nodeList) or abs(x[parentNode.dim] - parentNode.val[parentNode.dim]) < distance:
                if x[parentNode.dim] < parentNode.val[parentNode.dim]:
                    otherChild = parentNode.right
                else:
                    otherChild = parentNode.left
                self.search(nodeList, otherChild, x, k)
            node = node.parent
        labelDict = {}
        nodeList = nodeList[:k] if k <= len(nodeList) else nodeList
        for element in nodeList:
            if element[2] not in labelDict:
                labelDict[element[2]] = 0
            labelDict[element[2]] += 1
        sortedLabel = sorted(labelDict.items(), key=lambda item: item[1], reverse=True)
        return sortedLabel[0][0]

    def search(self, nodeList, root, x, k):
        if root is None:
            return nodeList
        nodeList.sort()
        dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
        x = np.array(x)
        node = self.findtheNearestLeafNode(root, x)
        distance = np.sqrt(sum((x - node.val) ** 2))
        if k > len(nodeList) or distance < dis:
            nodeList.append([distance, tuple(node.val), node.label[0]])
            nodeList.sort()
            dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
        while True:
            if node == root:
                break
            parentNode = node.parent
            parentDis = np.sqrt(sum((x - parentNode.val) ** 2))
            if k > len(nodeList) or parentDis < dis:
                nodeList.append([parentDis, tuple(parentNode.val), parentNode.label[0]])
                nodeList.sort()
                dis = nodeList[-1][0] if k > len(nodeList) else nodeList[k - 1][0]
            if k > len(nodeList) or abs(x[parentNode.dim] - parentNode.val[parentNode.dim]) < dis:
                if x[parentNode.dim] < parentNode.val[parentNode.dim]:  # fixed: was parentNode.val[parentNode.val]
                    otherChild = parentNode.right
                else:
                    otherChild = parentNode.left
                self.search(nodeList, otherChild, x, k)
            node = node.parent
        return nodeList

if __name__ == "__main__":
    dataArray = [[7, 2], [5, 4], [2, 3], [4, 7], [9, 6], [8, 1]]
    label = [[0], [1], [0], [1], [1], [1]]
    kd = kdTree(dataArray, label)
    Tree = kd.buildKdTree(dataArray, label)  # root node of the tree
    nodeList = kd.transferTreeToList(Tree, [])
    nodeDict = kd.transferTreeToDict(Tree)
    node = kd.findtheNearestLeafNode(Tree, [6, 3])
    result = kd.knnSearch([6, 3], 1)
    print(nodeList)
    print(nodeDict)
    print(result)
    print(node.val)
```

Copyright notice
This article was created by [A good function]. Please include a link to the original when reposting. Thank you.
