"Python instance" was shocked and realized the dirty words and advertisement detection of the chat system in Python

Coriander Chat Game 2021-08-09 14:53:42


Contents

1、Requirement analysis

2、Algorithm principle

3、Technical analysis

4、Source code

5、Extensions

6、Problems encountered

7、Summary


Like and follow, and make it a habit.

Chat is almost a must-have feature in games, but it brings a problem: the world channel can become very chaotic, full of sensitive words or messages that the game studio does not want to see. We ran into this in our own game, and our company built reporting and back-end monitoring for it. Today let's implement this kind of monitoring ourselves.

1、Requirement analysis

I am not strong in deep learning. I have written about reinforcement learning before, but the results were not particularly satisfying, so here I study a simpler approach.

There are ready-made solutions for this kind of classification task; spam filtering, for example, is essentially the same problem. There are several ways to solve it, but I chose the simplest one, naive Bayes classification, mainly as an exploration.

Since most of our game text is Chinese, we also need Chinese word segmentation, for example splitting a sentence like "I'm a handsome guy" into individual words.

2、Algorithm principle

The naive Bayes algorithm determines the category of a new sample by looking at the conditional probabilities of its features in an existing data set. It assumes that (1) the features are independent of each other and (2) every feature is equally important. You can also think of it as using past frequencies to judge how likely each class is when the current features are all present. You can look up the exact math yourself; the formulas are tedious to typeset, so a rough understanding is enough here.
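For reference, the textbook form of the naive Bayes decision rule (my addition, not spelled out in the original article) is:

\hat{y} = \arg\max_{c} \; P(c) \prod_{i=1}^{n} P(x_i \mid c)

where the x_i are the features of a message (here its words or characters) and c ranges over the classes, for example "ad" and "dirty". The "naive" part is exactly assumption (1): the product treats the features as independent given the class.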

Use the right algorithm at the right time .

How jieba segmentation works: jieba is based on a probabilistic language model. The task of probabilistic-language-model segmentation is: among all possible ways of splitting the sentence, find the segmentation scheme S that maximizes P(S).
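In formula form (a standard unigram formulation, my addition rather than the article's), if a segmentation S splits the sentence into words w_1, w_2, ..., w_n, then

P(S) = \prod_{i=1}^{n} P(w_i)

where each P(w_i) is estimated from word frequencies in jieba's dictionary. As far as I know, jieba builds a graph of all candidate words and uses dynamic programming to pick the segmentation with the largest product, falling back to an HMM for words not in the dictionary.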

(figure: jieba's built-in phrase dictionary)

As you can see, jieba ships with a built-in list of phrases; during segmentation these phrases are used as the basic units of splitting.

Note: I have only briefly introduced the principles of the two techniques above. A full explanation would take another long article; you can search for one you find readable. Being able to use them comes first.
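One practical aside (my own addition, not covered in the article): if jieba's built-in phrase list splits your game-specific terms the wrong way, you can extend the dictionary yourself. A minimal sketch; the term and the file name are hypothetical:

import jieba

# Hypothetical game term that the default dictionary may split apart
jieba.add_word("屠龙宝刀")

# Or load a whole user dictionary, one entry per line: word [freq] [pos_tag]
# jieba.load_userdict("game_terms.txt")

print(" | ".join(jieba.cut("本服出售屠龙宝刀")))  # the custom term should now stay in one piece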

3、Technical analysis

For Chinese word segmentation the best-known package is jieba. Whether it is the best I don't know, but its popularity exists for a reason, so let's just use it. I won't dig into jieba's internals; solving the problem comes first, and digging into specific points only when you hit a problem is the most efficient way to learn.

Since I have been doing speech-related work lately, an experienced colleague recommended the nltk library. Looking it up, it turns out to be a well-known and very powerful library for natural language processing. Here I mainly use its classification algorithms, so I don't have to worry about the specific implementation; there is no need to reinvent the wheel, especially when my version would not be as good.

Python really is nice: packages and wheels for everything.

Installation commands:

pip install jieba
pip install nltk

Run the two commands above; once they finish, the packages are installed and you can start experimenting.

"""
#Author: Coriander
@time: 2021/8/5 0005 Afternoon 10:26
"""
import jieba
if __name__ == '__main__':
   result = " | ".join(jieba.cut(" I love tian 'anmen square in Beijing ,very happy"))
   print(result)

Look at the segmentation result: it is really quite good. A professional tool is a professional tool.

(figure: jieba segmentation output)

4、Source code

The simple test shows we basically have everything we need, so let's get straight to the code. The steps are:

1、Load the initial text corpora

2、Strip punctuation marks from the text

3、Extract features from the text

4、Train on the data set to produce a model (the prediction model)

5、Test new sentences

#!/usr/bin/env python
# encoding: utf-8
"""
#Author: Coriander
@time: 2021/8/5 9:29 PM
"""
import re

import jieba
from nltk.classify import NaiveBayesClassifier

# Keep only letters, digits and Chinese characters; everything else is stripped
rule = re.compile(r"[^a-zA-Z0-9\u4e00-\u9fa5]")


def delComa(text):
    """Remove punctuation and other unwanted symbols from the text."""
    return rule.sub('', text)


def loadData(fileName):
    """Read a corpus file, strip punctuation, and return the segmented text joined by spaces."""
    text1 = open(fileName, "r", encoding='utf-8').read()
    text1 = delComa(text1)
    list1 = jieba.cut(text1)
    return " ".join(list1)


def word_feats(words):
    """Feature extraction: every element of the input (here, every character) becomes a boolean feature."""
    return dict([(word, True) for word in words])


if __name__ == '__main__':
    # Load the two training corpora: advertisements and dirty words
    adResult = loadData(r"ad.txt")
    yellowResult = loadData(r"yellow.txt")
    # Note: this iterates over the joined strings character by character,
    # so the training features are individual characters
    ad_features = [(word_feats(lb), 'ad') for lb in adResult]
    yellow_features = [(word_feats(df), 'ye') for df in yellowResult]
    train_set = ad_features + yellow_features
    # Train the classifier
    classifier = NaiveBayesClassifier.train(train_set)
    # Classify a new sentence
    sentence = input("Please enter a sentence: ")
    sentence = delComa(sentence)
    print("\n")
    seg_list = jieba.cut(sentence)
    result1 = " ".join(seg_list)
    words = result1.split(" ")
    print(words)
    # Count how many tokens fall into each class
    ad = 0
    yellow = 0
    for word in words:
        classResult = classifier.classify(word_feats(word))
        if classResult == 'ad':
            ad = ad + 1
        if classResult == 'ye':
            yellow = yellow + 1
    # Proportion of tokens classified as each class
    x = float(ad) / len(words)
    y = float(yellow) / len(words)
    print('Probability of advertising: %.2f%%' % (x * 100))
    print('Probability of swearing: %.2f%%' % (y * 100))

Let's look at the output:

(figure: classifier output)
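A further sanity check I find useful (my own addition; these are standard NLTK calls, not something shown in the article): the trained classifier can report its most informative features and give per-class probabilities instead of a single label. Reusing classifier and word_feats from the code above:

# Show the ten features that best separate 'ad' from 'ye'
classifier.show_most_informative_features(10)

# Probability distribution for one token instead of a hard decision
dist = classifier.prob_classify(word_feats("some_token"))  # "some_token" is a placeholder
for label in dist.samples():
    print(label, dist.prob(label))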

Download address for all resources: https://download.csdn.net/download/perfect2011/20914548

5、Extensions

1、The data source can be changed; the monitored data can be stored in a database and loaded from there

2、You could classify into more categories to make customer-service handling easier, for example advertising, dirty language, suggestions to officials, and so on, defined by business needs

3、Messages with a high probability can be handed to other systems for automatic handling, speeding up how quickly problems are dealt with

4、Player reports can be used to grow the training data

5、The same idea works for plain sensitive-word filtering: provide a sensitive-word dictionary, then match and detect against it (see the sketch after this list)

6、It can be wrapped as a web service that the game calls back into

7、The model can learn while it predicts: cases that customer service handles manually can be labelled and added straight to the data set, so the model keeps learning
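For point 5, here is a minimal sketch of dictionary-based matching; the file name sensitive_words.txt and the example message are my own assumptions:

import jieba

# Hypothetical sensitive-word dictionary, one word per line
with open("sensitive_words.txt", "r", encoding="utf-8") as f:
    sensitive = {line.strip() for line in f if line.strip()}

def check_sentence(sentence):
    """Return the sensitive words found in a chat message after segmentation."""
    return [w for w in jieba.cut(sentence) if w in sensitive]

hits = check_sentence("put the chat message here")
if hits:
    print("blocked, contains:", hits)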

6、Problems encountered

1、Punctuation: if punctuation is not removed, punctuation marks get matched as if they were features, which makes no sense

2、Encoding: at first the file was read as binary, which took quite a while to sort out

3、Technology choice: at first I wanted to solve this with deep learning, and I did look at some solutions, but my computer trains too slowly, so I chose this approach to practice with first

4、The code is simple, but the techniques are hard to explain; the code was finished quickly, yet writing this article took a whole weekend

7、Summary

When you hit a problem, look for a technical solution; once you know the plan, implement it; when you hit a bug, go investigate it. Keep at it and it will pay off; every attempt you make is a good opportunity to learn.

Writing original content is not easy; please share and support.

Copyright notice
This article was written by [Coriander Chat Game]. Please include a link to the original when reposting. Thanks.
https://pythonmana.com/2021/08/20210809145102053P.html
