Python Crawlers -- Crawler Basics

Hee hee hee hee hee 2020-11-13 01:57:32



1. Crawler Overview

What is a crawler?

A crawler is a program that automatically fetches data from web pages.

Three features of web pages:

1. Every web page has its own unique URL (Uniform Resource Locator) that locates it;
2. Web pages use HTML (HyperText Markup Language) to describe page information;
3. Web pages use HTTP/HTTPS (HyperText Transfer Protocol) to transmit HTML data.

Crawler design approach

1. First, determine the URL of the web page you need to crawl.
2. Fetch the corresponding HTML page via the HTTP/HTTPS protocol.
3. Extract the useful data from the HTML page:
  1) If it is the data you need, save it.
  2) If it is another URL in the page, go back to step 2 (this loop is sketched in code after the list).
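
A minimal sketch of that fetch-extract-follow loop, assuming the requests package is installed; the seed URL and the href pattern are illustrative placeholders, not part of the original article:

import re
import requests

def crawl(seed_url, max_pages=10):
    """Minimal fetch-extract-follow loop (illustration only)."""
    to_visit = [seed_url]   # URLs waiting to be crawled (step 1)
    visited = set()         # URLs already crawled
    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text   # step 2: fetch the HTML page
        except requests.RequestException:
            continue
        # step 3: extract data and/or further URLs from the page
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            to_visit.append(link)                       # step 3-2: queue new URLs
    return visited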

2. General-Purpose Crawlers and Focused Crawlers

General-purpose crawlers

A general-purpose web crawler is an important component of the crawling system of a search engine (Baidu, Google, Yahoo, etc.). Its main purpose is to download web pages from the Internet to local storage, forming a mirror backup of Internet content.

General-purpose web crawlers collect pages and gather information from the Internet. These pages are indexed to support the search engine; they determine whether the content of the whole engine system is rich and whether the information is up to date, so the crawler's performance directly affects the effectiveness of the search engine.

How a search engine obtains the URL of a new website:
1. The new website proactively submits its address to the search engine.
2. Links to the new website are placed on other websites.
3. The search engine cooperates with DNS resolution service providers (such as DNSPod), so the domain name of the new website is crawled quickly.

Limitations of general-purpose crawlers:
1. Most of the time, 90% of the content on a page is useless to a given user.
2. A search engine cannot provide search results tailored to a specific user.
3. Images, databases, audio, video, and other multimedia cannot be discovered and retrieved well.
4. Retrieval is keyword-based; it is difficult to support queries based on semantic information, so the engine cannot accurately understand a user's specific needs.

Focused crawlers

A focused crawler is a web crawler "oriented to the needs of a specific topic". The difference between it and a general-purpose search engine crawler is that a focused crawler processes and filters content while fetching pages, trying to ensure that only page information relevant to the requirement is captured.

3. HTTP and HTTPS

HTTP protocol (port 80)
HyperText Transfer Protocol: the method for publishing and receiving HTML pages.
HTTPS (port 443)
HyperText Transfer Protocol over Secure Socket Layer: simply put, the secure version of HTTP, with an SSL layer added underneath HTTP.

How HTTP works

The crawling process of a web crawler can be understood as simulating the way a browser works.

The process by which a browser sends an HTTP request:

  1. When the user enters a URL and presses Enter, the browser sends an HTTP request to the HTTP server. HTTP requests are mainly divided into two methods, GET and POST.

  2. When we type the URL http://www.baidu.com into the browser, it sends a Request to fetch the HTML file at http://www.baidu.com, and the server sends the Response file object back to the browser.

  3. The browser parses the HTML in the Response and finds that it references many other files, such as image files, CSS files, and JS files. The browser then automatically sends further Requests to fetch those images, CSS files, and JS files (steps 2 and 3 are sketched in code after this list).

  4. When all the files have been downloaded successfully, the web page is rendered in full according to the HTML's syntactic structure.
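
To make steps 2 and 3 concrete, here is a rough sketch that sends one Request for http://www.baidu.com and scans the returned HTML for the extra resources a browser would fetch next; the resource pattern is simplified for illustration:

import re
import requests

# Step 2: one Request, one Response carrying the raw HTML
response = requests.get('http://www.baidu.com', timeout=10)
html = response.text

# Step 3: the HTML references other files (images, CSS, JS);
# a browser would now send a further Request for each of these
resources = re.findall(r'(?:src|href)="([^"]+\.(?:png|jpg|css|js)[^"]*)"', html)
print(response.status_code)  # e.g. 200
print(resources[:5])         # the first few referenced resources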

URL

URL (short for Uniform/Universal Resource Locator): the uniform resource locator, a way to completely describe the address of a web page or other resource on the Internet.
Basic format: scheme://host[:port#]/path/…/[?query-string][#anchor]
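
As a quick way to see these components, Python's standard urllib.parse module splits a URL along exactly this format; the sample URL below is made up for illustration:

from urllib.parse import urlparse

url = 'https://www.example.com:8080/path/page.html?key=value#section'
parts = urlparse(url)
print(parts.scheme)    # 'https'          (scheme)
print(parts.hostname)  # 'www.example.com' (host)
print(parts.port)      # 8080             (port#)
print(parts.path)      # '/path/page.html' (path)
print(parts.query)     # 'key=value'       (query-string)
print(parts.fragment)  # 'section'         (anchor)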

4. Client HTTP Requests

The client sends an HTTP request message to the server. It consists of a request line, request headers, a blank line, and an optional request body.

Request methods

According to the HTTP standard, HTTP requests can use multiple request methods.
HTTP 0.9: only the basic text GET function.
HTTP 1.0: refined the request/response model and completed the protocol, defining three request methods: GET, POST, and HEAD.
HTTP 1.1: updated on the basis of 1.0, adding five new request methods: OPTIONS, PUT, DELETE, TRACE, and CONNECT.

GET and POST in detail
  1. GET fetches data from the server; POST sends data to the server.
  2. GET request parameters are visible: they are displayed in the browser's address bar, i.e. the parameters of a GET request are part of the URL.
  3. POST request parameters are in the request body. The message length is unlimited and the parameters are sent implicitly. POST is usually used to submit larger amounts of data to the HTTP server (for example, a request with many parameters, or a file upload). The "Content-Type" header indicates the media type and encoding of the message body (see the sketch below).
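
A small sketch of the difference using the requests library; httpbin.org is a public echo service and stands in here for any real endpoint:

import requests

# GET: the parameters become part of the URL (?q=python&page=1)
r1 = requests.get('https://httpbin.org/get', params={'q': 'python', 'page': 1})
print(r1.url)  # https://httpbin.org/get?q=python&page=1

# POST: the parameters travel in the request body, not in the URL
r2 = requests.post('https://httpbin.org/post', data={'user': 'alice', 'pwd': 'secret'})
print(r2.request.headers['Content-Type'])  # application/x-www-form-urlencoded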
Common request headers (an example of setting these follows the list):

Host: the host and port number
Connection: the client/server connection type; the default is keep-alive
User-Agent: the name of the client's browser
Accept: the MIME types the browser or other client can accept
Referer: the URL of the page from which the request originated
Accept-Encoding: the content encodings the browser can accept
Accept-Language: the language types the browser accepts
Accept-Charset: the character encodings the browser accepts
Cookie: the browser uses this header to send Cookies to the server
Content-Type: the type of content used in a POST request body
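
In a crawler these headers are typically set explicitly so the request looks like it comes from a browser. A minimal sketch; the User-Agent string and the URL are just illustrative values:

import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',  # pretend to be a browser
    'Accept': 'text/html,application/xhtml+xml',
    'Accept-Language': 'en-US,en;q=0.9',
    'Referer': 'https://www.example.com/',
}
response = requests.get('https://www.example.com/page', headers=headers, timeout=10)
print(response.request.headers['User-Agent'])  # the header that was actually sent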

5. HTTP Responses

An HTTP response consists of four parts: the status line, the message headers, a blank line, and the response body.

Response status codes (a handling sketch follows the list):

200: the request succeeded
302: the requested page has temporarily moved to a new URL
304: the resource has not been modified, so the cached copy can be used (307 is a temporary redirect)
404: the server could not find the requested page
403: the server refused access; insufficient permissions
500: the server encountered an unexpected condition
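
A small sketch of how a crawler typically reacts to these codes with requests; raise_for_status turns 4xx/5xx codes into exceptions, and the URL is a placeholder:

import requests

response = requests.get('https://www.example.com/', timeout=10)
print(response.status_code)        # e.g. 200
print(response.history)            # any 302 redirects followed along the way
try:
    response.raise_for_status()    # raises HTTPError for 403, 404, 500, ...
except requests.HTTPError as err:
    print('Request failed:', err)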

Cookies and Sessions

The interaction between server and client is limited to the request/response cycle: the connection is closed when the exchange ends, and on the next request the server treats the client as a brand-new one. To maintain the link between them, and to let the server know that a request comes from a user it has seen before, the client's information must be saved somewhere (see the Session sketch below).
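
On the crawler side, requests.Session keeps cookies across requests so the server can recognize the same client. A minimal sketch using the public httpbin.org echo service as a stand-in:

import requests

session = requests.Session()  # one client identity, reused across requests

# the first response may set cookies (e.g. a session ID)
session.get('https://httpbin.org/cookies/set?sessionid=abc123')

# later requests automatically carry those cookies back to the server
r = session.get('https://httpbin.org/cookies')
print(session.cookies.get('sessionid'))  # 'abc123'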

6. BS4

Introduction to BS4

Beautiful Soup provides some simple, Pythonic functions for navigating, searching, and modifying the parse tree. It is a toolkit: when parsing a document, Beautiful Soup automatically converts the input document to Unicode and the output document to UTF-8.
You don't need to think about encodings, unless the document does not specify an encoding at all.

The 4 kinds of BS4 objects

Beautiful Soup transforms a complex HTML document into a complex tree of nodes.
Every node is a Python object, and all objects can be grouped into 4 kinds: Tag, NavigableString, BeautifulSoup, and Comment.
2-1. The BeautifulSoup object
2-2. The Tag object
A Tag is one of the tags in the HTML. With BeautifulSoup you can conveniently extract the content of a Tag; the syntax is soup.name, where name is a tag in the HTML (a short sketch follows).
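
A minimal sketch of these object kinds, assuming the bs4 package is installed (pip install beautifulsoup4); the HTML snippet is made up for illustration:

from bs4 import BeautifulSoup

html = '<html><head><title>Demo</title></head><body><p class="intro">Hello</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')   # the BeautifulSoup object

tag = soup.title            # soup.name syntax: fetch the <title> Tag
print(type(tag))            # <class 'bs4.element.Tag'>
print(tag.string)           # 'Demo' -- a NavigableString
print(soup.p['class'])      # ['intro'] -- Tag attributes work like a dict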

7. Example: Picture Downloader

Basic steps to build a crawler:
  1. Requirements analysis
  2. Analyze the page source code, with help from the F12 developer tools
  3. Write regular expressions or other parser code
  4. Write the actual Python crawler code
Requirements analysis

"I want pictures, but I don't want to search the Internet for them."
"It would be better if they downloaded automatically."
...

That is the requirement: at least two functions have to be implemented. One is searching for the pictures; the second is downloading them automatically.

pic_url = re.findall('"objURL":"(.*?)",', html, re.S)

Code implementation

"""
Picture downloader
"""
# -*- coding:utf-8 -*-
import re
import requests
import os
def downloadPic(html, keyword):
"""
:param html: The source code of the page
:param keyword: Search keywords
:return:
"""
# (.*?) Represents any number of characters 
# () On behalf of the group , Value returns the content in parentheses in the string that matches the condition ;
pic_url = re.findall('"objURL":"(.*?)",', html, re.S)[:5]
count = 0
print(' Find keywords :' + keyword + ' Pictures of the , Now download the pictures ...')
# each It's for each picture url Address 
for each in pic_url:
try:
headers = {

'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) '
'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.109 '
'Mobile Safari/537.36'}
# Get the corresponding object of the specified picture ;
response = requests.get(each, timeout=10, headers=headers)
except requests.exceptions.ConnectionError:
print('【 error 】 The current picture cannot be downloaded ')
continue
except Exception as e:
print('【 error 】 The current picture cannot be downloaded ')
print(e)
continue
else:
# print(response.status_code)
if response.status_code != 200:
print(" The visit to fail : ", response.status_code)
continue
# ******** Store pictures locally *******************************
if not os.path.exists(imgDir):
print(" Creating a directory ", imgDir)
os.makedirs(imgDir)
posix = each.split('.')[-1]
if posix not in ['png', 'jpg', 'gif', 'jpeg']:
break
print(' Downloading section ' + str(count + 1) + ' A picture , Picture address :' + str(each))
name = keyword + '_' + str(count) + '.' + posix
filename = os.path.join(imgDir, name)
count += 1
with open(filename, 'wb') as f:
# response.content: It returns binary text information 
# response.text: Return string text information 
f.write(response.content)
if __name__ == '__main__':
imgDir = 'pictures'
word = input("Input key word: ")
url = 'http://image.baidu.com/search/index?tn=baiduimage&ps=1&ct=201326592&lm=-1&cl=2&nc=1&ie=utf-8&word=' + word
try:
response = requests.get(url)
except Exception as e:
print(e)
content = ''
else:
content = response.text
downloadPic(content, word)
Copyright notice
This article was created by [Hee hee hee hee hee]. When reposting, please include a link to the original. Thank you.
