The Starting Point of Python Crawlers

Pig Brother 66 · 2020-11-13 07:33:07


The first chapter focused on crawler-related background knowledge such as HTTP, web pages, and the laws around crawling, giving us a relatively complete picture of crawlers and clearing some peripheral questions out of the way.

Today's article is the first of our second chapter. From today on we officially enter the hands-on stage, and there will be many more practical cases.

In the first article of this crawler series, Pig Brother explained the principles of HTTP. Many readers wondered: what does HTTP have to do with crawlers? In fact, the crawlers we keep talking about (also called web crawlers or spiders) work by initiating network requests over certain network protocols, and the most widely used of these today is the HTTP/S family of protocols.

I. What network libraries does Python have?

In everyday browsing we click a link and the browser initiates the web request for us. So how do we initiate network requests in Python? The answer, of course, is libraries. Which libraries exactly? Pig Brother has made a list:

  • Python 2: httplib, httplib2, urllib, urllib2, urllib3, requests
  • Python 3: httplib2, urllib, urllib3, requests

Python has quite a few network request libraries, and you will see all of them in use around the internet. What is the relationship between them, and how should you choose?

  • httplib/2: httplib is Python's built-in HTTP library, but it is fairly low-level and not normally used directly. httplib2 is a third-party library built on top of httplib that is more complete, supporting features such as caching and compression. In general neither is used day to day; they only come up if you need to wrap network requests yourself.
  • urllib/urllib2/urllib3: urllib is a higher-level library built on top of httplib. urllib2 adds some advanced features relative to urllib, such as HTTP authentication and cookies, and in Python 3 it was merged into urllib. urllib3 is a third-party library that provides a thread-safe connection pool, file posting, and more; despite the name, it is unrelated to urllib and urllib2.
  • requests: the requests library is a third-party network library built on urllib3. Its hallmarks are powerful features and an elegant API. For HTTP clients, the official Python documentation itself recommends requests, and in practice it is also the most widely used library.

In summary, we choose the requests library as our starting point. Note that all of the libraries above are synchronous; if you need highly concurrent requests you can use an asynchronous network library such as aiohttp, which Pig Brother will also cover later.
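
To make the "elegant API" claim concrete, here is a minimal sketch (httpbin.org is used purely as a test endpoint) of the same GET request written first with the standard-library urllib and then with requests:

from urllib import request as urllib_request

import requests

# urllib: you open the response, read raw bytes, and decode them yourself
with urllib_request.urlopen('http://httpbin.org/get') as resp:
    body = resp.read().decode('utf-8')
    print(resp.status, len(body))

# requests: the status code and decoded text are each one attribute away
r = requests.get('http://httpbin.org/get')
print(r.status_code, len(r.text))

The two snippets do the same work, but requests handles decoding for us and exposes a much flatter API, which is exactly why we start with it.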

II. Introducing requests

**Please always remember: whatever language you are learning, never forget to read the official documentation.** The official docs may not be the best primer, but they are definitely the most up-to-date and complete reference!

1. Home page

Official requests documentation (currently available in Chinese): http://cn.python-requests.org
Source code: https://github.com/kennethreitz/requests

The motto on the home page, "HTTP for Humans", tells us that the core tenet of requests is to be easy for people to use, which indirectly expresses its elegant design philosophy.
Note: PEP 20 is the famous Zen of Python.

Warning: non-professional use of other HTTP libraries may lead to dangerous side effects, including: security vulnerabilities, redundant code, reinventing the wheel, endlessly gnawing at documentation, depression, headaches, or even death.

2. Features

Everyone says requests is powerful, so let's look at its feature list:

  • Keep-Alive & connection pooling
  • International domains and URLs
  • Sessions with cookie persistence
  • Browser-style SSL verification
  • Automatic content decoding
  • Basic/Digest authentication
  • Elegant key/value cookies
  • Automatic decompression
  • Unicode response bodies
  • HTTP(S) proxy support
  • Multipart file uploads
  • Streaming downloads
  • Connection timeouts
  • Chunked requests
  • .netrc support

requests fully meets the needs of today's web. It supports Python 2.6–2.7 and 3.3–3.7, and runs perfectly on PyPy.
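
To make a couple of these features concrete, here is a small sketch (again using httpbin.org purely as a test endpoint) showing sessions with cookie persistence, connection timeouts, and automatic content decoding:

import requests

# A Session reuses the underlying connection (keep-alive) and
# carries cookies across requests automatically.
session = requests.Session()
session.get('http://httpbin.org/cookies/set/token/abc123', timeout=5)
r = session.get('http://httpbin.org/cookies', timeout=5)
print(r.json())  # the 'token' cookie set above is sent back automatically

# gzip-compressed responses are decompressed and decoded transparently.
r = requests.get('http://httpbin.org/gzip', timeout=5)
print(r.json()['gzipped'])  # True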

III. Installing requests

pip install requests

If you use pip3, then:

pip3 install requests

If you use Anaconda, you can run:

conda install requests

If you prefer not to use the command line, you can also install the library from within PyCharm's package manager.
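
Whichever way you install it, you can verify the install with a one-liner that prints the installed version:

python -c "import requests; print(requests.__version__)"

If that prints a version number instead of an ImportError, you are ready to go.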

IV. The crawler workflow

In earlier work Pig Brother summarized a fairly detailed development process. On a large project you really do need that level of detail; otherwise, when the project breaks down in production or needs changes, there is no way to run a proper post-mortem, and the programmer may well end up carrying the blame...

Back to business: the point of showing a development process is to introduce the steps a crawler goes through to scrape data:

  1. Identify the page to crawl
  2. Inspect the data source in the browser (static page or dynamically loaded)
  3. Find the URL that loads the data and the pattern of its parameters (such as pagination)
  4. Simulate the request in code and crawl the data

V. Crawling a JD product page

Pig Brother will use a JD product page as an example to walk through a simple crawl. Why start with JD rather than some other site? Because JD lets you browse product pages without logging in, which makes it easy to get started!

1. Step one: find the product you want to crawl in the browser

PS: Pig Brother isn't being cheeky in picking this product; it was chosen because its reviews will later be crawled for data analysis. Exciting, isn't it!

2. Step two: inspect the data source in the browser

Open the browser's debug window and look at the network requests to see how the data is loaded: is it returned in a static page, or loaded dynamically with JS?
Right-click and choose Inspect, or simply press F12, to open the debug window. Pig Brother recommends the Chrome browser here. Why? Because it is easy to use and it is what programmers use! For the specifics of debugging in Chrome, look up a tutorial online.

With the debug window open, request the page again, then examine the returned data to identify the data source.

3. Step three: find the URL that loads the data and its parameter pattern

We can see that the first request, https://item.jd.com/1263013576.html, returns the web page data we want. Since we are crawling a single product page, there is no pagination to deal with.

Of course, core information such as the price and coupons is loaded through separate requests. We will not discuss those here; let's finish our first example!

4. Step four: simulate the request in code and crawl the data

Once we have the URL, we can start writing code:

import requests


def spider_jd():
    """Crawl a JD product page."""
    url = 'https://item.jd.com/1263013576.html'
    try:
        r = requests.get(url)
        # A failed request can still return data, so raise_for_status()
        # checks the status code and raises an exception on 4XX or 5XX.
        r.raise_for_status()
        print(r.text[:500])
    except requests.RequestException:
        print('Crawl failed')


if __name__ == '__main__':
    spider_jd()

Run it and check the returned result: the first 500 characters of the page's HTML are printed.
With that, we have completed a crawl of a JD product page. The case is simple and the code short, but the workflow of a crawler is largely the same everywhere. I hope students who want to learn crawling will try it themselves: pick a product you like and scrape it, because only hands-on practice really teaches you anything!
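
One practical note: many sites inspect the User-Agent header, and the default one sent by requests is easy to block. A variation of the function above that sends a browser-like User-Agent (the header string below is only an illustrative example) and a timeout might look like this:

import requests


def spider_jd_with_headers():
    """Fetch the product page while presenting a browser-like User-Agent."""
    url = 'https://item.jd.com/1263013576.html'
    # Example User-Agent only; any mainstream browser string works.
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
    try:
        r = requests.get(url, headers=headers, timeout=10)
        r.raise_for_status()
        print(r.text[:500])
    except requests.RequestException as e:
        print('Crawl failed:', e)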

VI. Introduction to the requests library

So far we have used the get method of requests. Checking the source code reveals several other methods: post, put, patch, delete, options, and head, corresponding to the HTTP request methods.
Here is a brief listing; many later cases will use them and we will learn as we go. After all, nobody wants to read a dry explanation.

requests.post('http://httpbin.org/post', data={'key': 'value'})
requests.patch('http://httpbin.org/patch', data={'key': 'value'})
requests.put('http://httpbin.org/put', data={'key': 'value'})
requests.delete('http://httpbin.org/delete')
requests.head('http://httpbin.org/get')
requests.options('http://httpbin.org/get')

Note: httpbin.org is a site for testing HTTP requests; it responds to each of these calls normally.
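
Because httpbin echoes each request back as JSON, you can use it to see exactly what a call sends. A quick sketch:

import requests

r = requests.post('http://httpbin.org/post', data={'key': 'value'})
print(r.status_code)     # 200
print(r.json()['form'])  # {'key': 'value'} - the form data we sent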

As for these HTTP request methods, students who have not worked with RESTful APIs may not be clear on what each one means, so here is a list (with a requests sketch after it):

  • GET: list users: http://project.company.com/api/v1/users
  • GET: fetch a single user: http://project.company.com/api/v1/users/{uid}
  • POST: create a user: http://project.company.com/api/v1/users
  • PUT: fully replace a user: http://project.company.com/api/v1/users/{uid}
  • PATCH: partially update a user: http://project.company.com/api/v1/users/{uid}
  • DELETE: delete a user: http://project.company.com/api/v1/users/{uid}
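
Mapped onto requests calls, the list above looks like this sketch (the host and payloads are hypothetical, matching the example URLs above):

import requests

BASE = 'http://project.company.com/api/v1'  # hypothetical example host

requests.get(BASE + '/users')                            # list users
requests.get(BASE + '/users/1')                          # fetch user 1
requests.post(BASE + '/users', json={'name': 'pig'})     # create a user
requests.put(BASE + '/users/1', json={'name': 'pig'})    # fully replace user 1
requests.patch(BASE + '/users/1', json={'name': 'hog'})  # partial update
requests.delete(BASE + '/users/1')                       # delete user 1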

To learn more about requests, see the official documentation: http://cn.python-requests.org

Later on, Pig Brother will also use plenty of cases to pick up some tips for using the requests library.

VII. Summary

Today's article briefly introduced a very important library: requests. It can handle many simple crawling needs, and its powerful features and beautiful API are widely recognized.

Many students ask: what counts as having "gotten started" with crawlers? If you can use the requests library skillfully to implement some simple crawler features, you have gotten started. Getting started does not require knowing every framework; on the contrary, people who can build functionality with low-level tools have more potential!

If you have interesting crawler cases or ideas, be sure to leave a comment below and show me what you can do.

For more crawler knowledge, follow Pig Brother's crawler column!

Copyright notice
This article was created by Pig Brother 66. Please include a link to the original when reposting. Thanks!
