Python Tutorial | ExHentai Crawler

Suni_ing 2020-11-16 18:58:45
Python / crawler / tutorial / OSChina


Disclaimer:

1. This tutorial and any works derived from it are intended solely for personal study, research, or appreciation of Python, and for other non-commercial, non-profit purposes. Accordingly, any risk or even legal liability arising from the use of this tutorial or its derivatives is borne entirely by the user.

2. This tutorial does not provide methods for connecting to the website involved.

3. I am new to Python; corrections from more experienced readers are welcome.

Runtime environment: Python 3; install the required third-party libraries yourself (see the pip line below).
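
For reference, the third-party packages imported by the script below can be installed with pip (PyPI package names inferred from the import statements):

pip install requests fake-useragent requests-toolbelt chardet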

Config file format (an example follows this list):

Line 1: Cookie

Line 2: Download path

Line 3: Minimum download delay [seconds]

Line 4: Maximum download delay [seconds]
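
A minimal sample config file might look like this (all four lines are placeholder values; the cookie string is hypothetical). The script appends the gallery title plus a backslash to the download path, so a Windows-style path with a trailing backslash is assumed:

ipb_member_id=123456; ipb_pass_hash=0123456789abcdef
D:\comics\
1
3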

How it works: build a list of page links, set request timeouts (and retries), then extract the targets step by step with regular expressions.
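
As a rough standalone sketch of that pattern (the HTML snippet here is made up; a real run would fetch the page with the session, as hinted in the commented-out line):

import re
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.mount('https://', HTTPAdapter(max_retries=5))  # retry failed connections
# html = session.get(page_url, timeout=5).text  # per-request timeout
html = '<title>Demo Gallery</title><img id="img" src="https://example.com/1.jpg">'
title = re.findall(r'(?<=<title>).*?(?=</title>)', html)[0]  # step 1: grab the title
img = re.findall(r'(?<=<img id="img" src=").*?(?=")', html)[0]  # step 2: grab the image URL
print(title, img)  # Demo Gallery https://example.com/1.jpg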

Code:

from requests.adapters import HTTPAdapter
from fake_useragent import UserAgent
from requests_toolbelt import SSLAdapter
import os, time, random, re
import requests, ssl, chardet
# Version: 1.0
# This script downloads comics; for galleries with an extremely large page count, the download may be incomplete.
# Supports resuming from where a previous run was interrupted. For a multi-page gallery, only the first-page URL is needed.
ua = UserAgent()
Cookies = None
TargetPath = None
MinTime = None
MaxTime = None
urlHead = 'https://exhentai.org/g/'  # gallery URL prefix (unused below; getPageList checks the literal string)
def ReadConfig():  # Read the config file
    global Cookies, TargetPath, MinTime, MaxTime
    path = input('Absolute path of the config file [drag-and-drop works too]: ')
    if not os.path.exists(path):
        ReadConfig()
        return
    try:
        fileHandler = open(path, "r")
        listOfLines = fileHandler.read().splitlines()
        fileHandler.close()
    except:
        ReadConfig()
        return
    Cookies = listOfLines[0]
    TargetPath = listOfLines[1]
    MinTime = int(listOfLines[2])
    MaxTime = int(listOfLines[3])
def DlComic(Url, dlPath):  # Parse one image page, then download the image
    global ua, Cookies, MinTime, MaxTime
    DlPage = requests.Session()
    # SSLAdapter subclasses HTTPAdapter, so retries and the TLS version can be set on one adapter;
    # mounting two adapters on the same prefix would make the second replace the first.
    DlPage.mount('https://', SSLAdapter(ssl.PROTOCOL_TLSv1_2, max_retries=5))
    response = DlPage.get(Url, headers={'Cookie': Cookies, 'User-Agent': ua.random}, timeout=5).content
    encode_type = chardet.detect(response)
    pageStr = response.decode(encode_type['encoding'])
    imgTUrl = str(re.findall(r'(?<=<img id="img" src=").*?(?=")', pageStr)[0])  # real image URL
    time.sleep(random.uniform(MinTime, MaxTime))  # delay between requests
    DlImg = requests.Session()
    DlImg.mount('https://', SSLAdapter(ssl.PROTOCOL_TLSv1_2, max_retries=5))
    response = DlImg.get(imgTUrl, headers={'User-Agent': ua.random}, timeout=120)
    with open(dlPath, 'wb') as f:
        f.write(response.content)
        f.flush()
    print('succeed')
def getImgList(Urls, title):  # Collect image-page links from every gallery page, then download them
    global TargetPath, ua, Cookies
    IsError = False
    imgList = []
    for Url in Urls:
        DlPage = requests.Session()
        DlPage.mount('https://', SSLAdapter(ssl.PROTOCOL_TLSv1_2, max_retries=5))
        page = DlPage.get(Url, headers={'Cookie': Cookies, 'User-Agent': ua.random}, timeout=5).content
        encode_type = chardet.detect(page)
        pageStr = page.decode(encode_type['encoding'])
        imgUrls = re.findall(r'https://exhentai\.org/s/\w+/[\w\-]+', pageStr)
        imgList.extend(imgUrls)
    Numl = len(imgList)
    print(str(Numl) + ' pages in total')
    dlDir = TargetPath + title + '\\'
    if not os.path.exists(dlDir):
        os.mkdir(dlDir)
    for item in range(Numl):
        print(' ' + str(item + 1) + "/" + str(Numl) + " >>> " + imgList[item], ' >>> ', end='')
        try:
            if not os.path.exists(dlDir + str(item) + '.jpg'):  # skip existing files (resume support)
                DlComic(imgList[item], dlDir + str(item) + '.jpg')
            else:
                print('done')
        except Exception as e:
            print('Error' + str(e))
            IsError = True
        for i in range(10):  # up to 10 retries for a failed page
            try:
                if IsError:
                    print(' ' + str(item + 1) + " ReDl >>> " + str(i) + ' >>> ', end='')
                    DlComic(imgList[item], dlDir + str(item) + '.jpg')
                    IsError = False
            except:
                IsError = True
def getPageList(Url):  # Parse the gallery's first page and collect 'all' of its page links
    global ua, Cookies
    if Url.find('https://exhentai.org/g/') != -1:
        DlPage = requests.Session()
        DlPage.mount('https://', HTTPAdapter(max_retries=5))
        page = DlPage.get(Url, headers={'Cookie': Cookies, 'User-Agent': ua.random}).content
        encode_type = chardet.detect(page)
        page = page.decode(encode_type['encoding'])
        # Paginator links look like:
        # <a href="https://exhentai.org/g/1765318/5c795f0010/?p=1" onclick="return false">2</a>
        # The first pass keeps the onclick tail to select only paginator links; the second pass strips it.
        subPageUrls = re.findall(r'https://exhentai\.org/g/\w+/[\w\-]+/\?p=\d+" onclick="return false">\d+', page)
        subPageUrls = re.findall(r'https://exhentai\.org/g/\w+/[\w\-]+/\?p=\d+', "','".join(subPageUrls))
        # The paginator appears at both the top and bottom of the page, so keep only the first half
        subPageUrls = subPageUrls[0:int(len(subPageUrls) / 2)]
        subPageUrls.insert(0, Url)
        titleStr = str(re.findall(r'(?<=<title>).*?(?=</title>)', page)[0])
        # Strip characters that are illegal in Windows directory names
        titleStr = re.sub(r'[|*?<>:\\/&;]', '', titleStr)
        if len(titleStr) > 255:  # keep within the Windows path-length limit
            titleStr = titleStr[0:255]
        print(' >>> Parsed > ' + titleStr)
        getImgList(subPageUrls, titleStr)
    else:
        print('>>> Invalid exhentai gallery link ' + Url)
def main():  # Read the config, then read and back up the first-page links of the galleries to download
    print('\nExhentai Downloader >>> Author: Suni_ing\nDisclaimer:\nThe content provided by this script is intended solely for personal study, research, or appreciation of Python, and for other non-commercial, non-profit purposes.\nAccordingly, any risk or even legal liability arising from the use of this script is borne entirely by the user.\n')
    print('About the config file:\nLine 1: Cookie\nLine 2: Download path\nLine 3: Minimum download delay [seconds]\nLine 4: Maximum download delay [seconds]\n')
    ReadConfig()
    urls = []
    inpStr = '>>> Enter a link >>> Enter a blank line to finish >>> '
    url = input(inpStr)
    while url != "":
        urls.append(url)
        url = input('>>> Link list has ' + str(len(urls)) + ' entries ' + inpStr)
    print('>>> Input finished')
    f = open("url.txt", 'a')  # back up the entered links
    f.write('--->>>\n')
    for item in urls:
        f.write(item + '\n')
    f.close()
    if len(urls) > 0:
        for item in range(len(urls)):
            print(str(item + 1) + '/' + str(len(urls)), end='')
            getPageList(urls[item])
        main()  # start over for the next batch
    else:
        print('>>> Run finished', end='')

main()
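
The functions can also be driven without the interactive main(). A hypothetical sketch, in which the cookie, path, and gallery URL are all placeholders:

# Hypothetical non-interactive use of the functions above (placeholders throughout):
Cookies = 'ipb_member_id=123456; ipb_pass_hash=0123456789abcdef'
TargetPath = 'D:\\comics\\'
MinTime, MaxTime = 1, 3
getPageList('https://exhentai.org/g/0000000/0000000000/')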
Copyright notice
This article was written by [Suni_ing]. Please include a link to the original when reposting. Thank you.
https://my.oschina.net/u/4837396/blog/4720376
