Two Python Crawler Libraries

C don't laugh · 2022-05-14

When writing a Python crawler, you need to simulate network requests. The two main libraries for this are the requests library and Python's built-in urllib library. requests is generally recommended: it is a further encapsulation over urllib-style HTTP handling and is far more convenient to use.

Python's two crawler libraries

urllib library

The urllib package includes the following modules:

  • urllib.request - opens and reads URLs.
  • urllib.error - contains the exceptions raised by urllib.request.
  • urllib.parse - parses URLs.
  • urllib.robotparser - parses robots.txt files.
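As a quick sketch of the two helper modules in action (the runoob.com URLs here are just illustrative targets):

from urllib import parse, robotparser

# urllib.parse: split a URL into its components
parts = parse.urlparse('https://www.runoob.com/?s=python')
print(parts.scheme, parts.netloc, parts.query)

# urllib.robotparser: check whether a path may be crawled
rp = robotparser.RobotFileParser('https://www.runoob.com/robots.txt')
rp.read()
print(rp.can_fetch('*', 'https://www.runoob.com/'))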

Using the urllib library

With the urllib library you first construct an HTTP request (a plain URL or a Request object) and pass it to urllib.request.urlopen(), which performs the HTTP request.

The return value is an http.client.HTTPResponse object whose body holds the HTML. Calling .read().decode() decodes the raw bytes into a str; after decoding, Chinese (and other non-ASCII) characters display correctly.
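A minimal sketch of that round trip, reusing the baidu.com URL from the examples below:

from urllib.request import urlopen

response = urlopen('http://www.baidu.com')  # returns an http.client.HTTPResponse
print(type(response))
html = response.read().decode('utf-8')      # bytes -> str; non-ASCII text now displays
print(type(html))                           # <class 'str'>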

urllib.request

urllib.request defines a number of functions and classes for opening URLs, covering authorization, redirects, browser cookies, and so on.

urllib.request can simulate the request flow of a browser.

We can use the urlopen method of urllib.request to open a URL. The syntax is as follows:

urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
  • url: the URL address.
  • data: extra data to send to the server; defaults to None (supplying data turns the request into a POST).
  • timeout: the access timeout, in seconds; see the sketch after this list.
  • cafile and capath: cafile is a CA certificate and capath is the path to CA certificates; needed when using HTTPS.
  • cadefault: deprecated.
  • context: an ssl.SSLContext object used to specify SSL settings.
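As a small illustration of the timeout parameter (the 0.1-second value is deliberately tiny so the request times out; httpbin.org is just a test endpoint):

import socket
import urllib.request
import urllib.error

try:
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1)
except urllib.error.URLError as e:
    # urlopen wraps the socket timeout in a URLError
    if isinstance(e.reason, socket.timeout):
        print('request timed out')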

Experimental case:


import urllib.request
import urllib.parse
import urllib.error

# GET request
response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))

# POST request: passing data switches urlopen to POST
data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf-8')
response = urllib.request.urlopen('http://www.baidu.com', data=data)
print(type(response))
print(response.status)
print(response.getheaders())
print(response.getheader('Server'))

# Catching HTTP errors
try:
    response = urllib.request.urlopen("http://www.baidu.com/no.html")
except urllib.error.HTTPError as e:
    if e.code == 404:
        print(404)  # 404


Simulate header information

To make a request look like it comes from a real browser, we usually need to simulate the headers (page header information); for that we use the urllib.request.Request class:

class urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None)
  • url: the URL address.
  • data: extra data to send to the server; defaults to None.
  • headers: the HTTP request headers, in dictionary format.
  • origin_req_host: the host of the original request, as an IP address or domain name.
  • unverifiable: rarely used; whether the request is unverifiable (i.e. the user could not approve it); defaults to False.
  • method: the request method, such as GET, POST, DELETE, PUT, etc.
import urllib.request
from urllib import request

# Request headers: pretend to be a desktop Chrome browser
headers = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'
}
# wd = {"wd": "hello"}
# url = "http://www.baidu.com/s?"
url = 'https://www.runoob.com/?s='  # Runoob tutorial search page
keyword = 'Python course'
key_code = urllib.request.quote(keyword)  # Percent-encode the keyword
url_all = url + key_code
req = request.Request(url_all, headers=headers)
response = request.urlopen(req)
print(type(response))
print(response)
res = response.read().decode()
print(type(res))
print(res)


requests library

With the requests library you call requests.get, passing in the URL and parameters; the return value is a Response object, and printing it shows the response status code.

Advantages of requests:
For Python crawlers, the requests library is the more recommended choice, because it is more convenient than urllib: requests can construct and send a GET or POST request in a single call, whereas with urllib.request you must first construct the request object and then send it, as the sketch below shows.
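For example, both of these are single calls (httpbin.org is used here as a harmless test endpoint):

import requests

# GET and POST each take one call; no separate request object is needed
r = requests.get('http://httpbin.org/get', params={'word': 'hello'})
print(r.status_code)

r = requests.post('http://httpbin.org/post', data={'word': 'hello'})
print(r.json()['form'])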

Experimental case – GET requests

import requests

# 1. Basic GET request
response = requests.get('http://www.baidu.com')
print('response\n', response)

# 2. GET request with inline query parameters
response2 = requests.get('http://www.baidu.com/get?name=germy&age=22')
print('response2\n', response2)

# 3. Passing parameters via params (same effect as 2)
data = {
    'name': 'germy',
    'age': 22
}
response3 = requests.get('http://www.baidu.com', params=data)
print('response3\n', response3)

# 4. Parsing JSON (if the body is JSON, calling .json() returns it directly)
response4 = requests.get('http://httpbin.org/get')
print('response4\n', response4.json())

# 5. Fetching binary data (images, video, ...)
response5 = requests.get('http://github.com/favicon.ico')
with open('icon.ico', 'wb') as f:
    f.write(response5.content)

# 6. Adding headers (pass the headers parameter)
headers = {
    'User-Agent': '...'
}
response6 = requests.get('http://zhihu.com', headers=headers)
print('response6\n', response6)

Experimental case – Grabbing a web page

import requests

url = 'http://httpbin.org/get'
params = {
    'name': 'germey',
    'age': 25
}
r = requests.get(url, params=params)
print(type(r.json()))
print(r.json())
print(r.json().get('args').get('age'))

Experimental case – The response

A Response is the data the server returns after a request is sent. In the examples above we obtained the response content via text and content; in addition, other attributes expose values such as the status code, the response headers, and the cookies.

import requests

# Inspect the attributes of a response
r = requests.get('http://www.baidu.com')
print(type(r.status_code), r.status_code)
print(type(r.headers), r.headers)
print(type(r.cookies), r.cookies)
print(type(r.url), r.url)
print(type(r.history), r.history)

In the example above, status_code, cookies, and history represent the status code, the cookies, and the request history of the response, respectively.

Note that status_code is the HTTP status code of the request: 200 means the request succeeded, 404 means the resource does not exist, and so on (see the relevant references for details). In crawler code we can therefore use the status code to determine whether a request succeeded and handle the result accordingly.

import requests

r = requests.get('http://www.baidu.com')
if not r.status_code == requests.codes.ok:
    print('Not OK')
else:
    print('Request succeeded!')

Here we use requests.codes.ok to stand for the 200 status, so we don't have to write the number 200 ourselves, which is more convenient. There are other built-in status code names as well; some common ones are listed below for reference:

# Informational status code
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),
# Success status code
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '*'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),
# Redirect the status code
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect',
'resume_incomplete', 'resume',), # These 2 to be removed in 3.0
# Client error status code
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),
# Server error status code
500: ('internal_server_error', 'server_error', '/o\\', '*'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication')
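
Each of these names maps to its numeric code through requests.codes, so, as a quick sketch:

import requests

print(requests.codes.ok)          # 200
print(requests.codes.not_found)   # 404
print(requests.codes.teapot)      # 418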
Copyright notice: this article was created by [C don't laugh]; please include the original link when reposting: https://pythonmana.com/2022/134/202205141410503685.html