C don't laugh · 2022-05-14 14:12:22
A web crawler (also known as a web spider or web robot) is a program that simulates a browser: it sends network requests, receives the responses, and automatically captures information from the Internet according to certain rules.
In principle, anything a browser (a client) can do, a crawler can do too.
For example, the requests library already behaves like a bare-bones client. The headers below are simply the defaults it sends:

import requests

headers = {'User-Agent': 'python-requests/2.25.1', 'Accept-Encoding': 'gzip, deflate',
           'Accept': '*/*', 'Connection': 'keep-alive'}
response = requests.get('http://github.com/', headers=headers)
print(response.request.headers)

Running this prints the request headers that were actually sent:

{'User-Agent': 'python-requests/2.25.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
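Note that the default User-Agent openly advertises python-requests, which some sites reject. Since the point of a crawler is to simulate a browser, a common variation is to send a browser-style User-Agent instead. A minimal sketch; the exact UA string below is just an illustrative example, not from the original article:

import requests

# A browser-style User-Agent (example string; any real browser's UA works here).
browser_headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36',
}
response = requests.get('http://github.com/', headers=browser_headers)
print(response.request.headers)  # now shows the browser-style UA instead of python-requests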
Fetching an image
Get the headers and request the image:
import requests
import cv2

headers = {'User-Agent': 'python-requests/2.25.1', 'Accept-Encoding': 'gzip, deflate',
           'Accept': '*/*', 'Connection': 'keep-alive'}
response = requests.get('https://avatars.githubusercontent.com/nplasterer?s=64&v=4', headers=headers)
print(response.request.headers)

# The response body holds the raw image bytes; write them to disk as-is.
with open('icon.ico', 'wb') as f:
    f.write(response.content)
    print("Image downloaded successfully")

# Read the saved file back and display it. cv2.imread detects the actual
# image format from the file contents, so the .ico extension is harmless.
img = cv2.imread("icon.ico")
cv2.imshow('icon', img)
cv2.waitKey(0)
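One thing the program above never checks is whether the request actually succeeded; on a 404 or 500 it would happily write an HTML error page into icon.ico. A minimal hardening sketch using requests' built-in raise_for_status():

import requests

url = 'https://avatars.githubusercontent.com/nplasterer?s=64&v=4'
response = requests.get(url, timeout=10)  # a timeout keeps the crawler from hanging forever
response.raise_for_status()               # raises requests.HTTPError on any 4xx/5xx status

with open('icon.ico', 'wb') as f:
    f.write(response.content)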
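Writing the bytes to disk only to read them back with cv2.imread is also avoidable: OpenCV can decode the downloaded bytes directly in memory. A sketch of that variant (numpy is already a dependency of OpenCV):

import cv2
import numpy as np
import requests

response = requests.get('https://avatars.githubusercontent.com/nplasterer?s=64&v=4')
# Wrap the raw bytes in a numpy array and let OpenCV decode them in memory.
buf = np.frombuffer(response.content, dtype=np.uint8)
img = cv2.imdecode(buf, cv2.IMREAD_COLOR)
cv2.imshow('icon', img)
cv2.waitKey(0)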
Copyright notice: this article was written by [C don't laugh]. Please include a link to the original when reposting. Thanks. https://pythonmana.com/2022/134/202205141410503614.html