Building a Paper Downloader in Python (Worth Bookmarking)

程序员霖霖 2021-01-20 17:07:03
Python


When doing research, we inevitably need to look up related literature, and many of you have surely heard of SCI-HUB, a wonderful tool that can search for a paper and download its full text. It is fair to say SCI-HUB has been a blessing for countless researchers, and it is a pleasure to use.

Then one day a senior labmate asked me: "Hey, could you download a few papers for me?" Always happy to help, I agreed right away, thinking: "A small favor like this? Consider it done."

What I received was an Excel file with a list of 66 papers lying quietly inside (me, muttering to myself: "You call this a few?"). A rough estimate: copy, paste, download, the whole routine takes at least 30 seconds per paper, and for 66 papers... no, this cannot stand!

Obviously, downloading them one by one is not my style. So I decided to write a paper downloader to do the work for me.

 


 

I. Code Analysis

The detailed approach is much the same as always and follows the usual pipeline: capture the traffic -> reproduce the request -> tie the pieces together. Since kimol has to get back to the grind shortly, I won't expand on every detail today.

1. Searching for a paper

We look the paper up by its URL, PMID, DOI, or title, then use the bs4 library to locate the link to the PDF full text. The code is as follows:

 
    def search_article(artName):
        '''
        Search for a paper.
        ---------------
        Input: paper title (or URL / PMID / DOI)
        ---------------
        Output: search result ('' if nothing was found, otherwise the PDF link)
        '''
        url = 'https://www.sci-hub.ren/'
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Content-Type': 'application/x-www-form-urlencoded',
                   'Origin': 'https://www.sci-hub.ren',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        data = {'sci-hub-plugin-check': '',
                'request': artName}
        res = requests.post(url, headers=headers, data=data)
        soup = BeautifulSoup(res.text, 'html.parser')
        iframe = soup.find(id='pdf')
        if iframe is None:  # no matching paper found
            return ''
        downUrl = iframe['src']
        if not downUrl.startswith('http'):  # the src may be protocol-relative
            downUrl = 'https:' + downUrl
        return downUrl
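The link-extraction step can be sanity-checked offline against a hand-written snippet of HTML. Note the iframe `src` below is a made-up stand-in, not a real response; the only assumption carried over from the article is that the page embeds the PDF in an iframe with `id="pdf"`, sometimes behind a protocol-relative URL:

```python
from bs4 import BeautifulSoup

# A minimal stand-in for the page returned on a successful search: the PDF
# sits in an iframe whose id is 'pdf', and its src may start with '//'.
html = '<html><body><iframe id="pdf" src="//example.org/sample.pdf"></iframe></body></html>'

soup = BeautifulSoup(html, 'html.parser')
iframe = soup.find(id='pdf')
downUrl = iframe['src']
if not downUrl.startswith('http'):  # complete a protocol-relative link
    downUrl = 'https:' + downUrl
print(downUrl)  # https://example.org/sample.pdf
```

This is also a convenient place to tweak the selector if the site ever changes its markup.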

2. Downloading the paper

Once we have the link to the paper, downloading it is just a matter of sending a request with requests:

 
    def download_article(downUrl):
        '''
        Download the paper from its link.
        ----------------------
        Input: paper link
        ----------------------
        Output: the PDF file as bytes
        '''
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        res = requests.get(downUrl, headers=headers)
        return res.content
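One caveat: if a link has gone stale, the server may answer with an HTML error page, and the raw `res.content` would then be saved as a broken .pdf. A cheap guard (my own addition, not part of the original script) is to check the first bytes of the payload, since every PDF file begins with the signature `%PDF`:

```python
def looks_like_pdf(data):
    # Every valid PDF file starts with the 4-byte magic signature '%PDF'
    return data[:4] == b'%PDF'

print(looks_like_pdf(b'%PDF-1.7 ...'))            # True
print(looks_like_pdf(b'<html>not found</html>'))  # False
```

Calling this on the downloaded bytes before writing the file makes it easy to flag failed downloads instead of silently saving junk.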

II. Full Code

Putting the two functions together, my full code looks like this:

 
    # -*- coding: utf-8 -*-
    """
    Created on Tue Jan 5 16:32:22 2021

    @author: kimol_love
    """
    import os
    import time
    import requests
    from bs4 import BeautifulSoup

    def search_article(artName):
        '''
        Search for a paper.
        ---------------
        Input: paper title (or URL / PMID / DOI)
        ---------------
        Output: search result ('' if nothing was found, otherwise the PDF link)
        '''
        url = 'https://www.sci-hub.ren/'
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Content-Type': 'application/x-www-form-urlencoded',
                   'Origin': 'https://www.sci-hub.ren',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        data = {'sci-hub-plugin-check': '',
                'request': artName}
        res = requests.post(url, headers=headers, data=data)
        soup = BeautifulSoup(res.text, 'html.parser')
        iframe = soup.find(id='pdf')
        if iframe is None:  # no matching paper found
            return ''
        downUrl = iframe['src']
        if not downUrl.startswith('http'):  # the src may be protocol-relative
            downUrl = 'https:' + downUrl
        return downUrl

    def download_article(downUrl):
        '''
        Download the paper from its link.
        ----------------------
        Input: paper link
        ----------------------
        Output: the PDF file as bytes
        '''
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        res = requests.get(downUrl, headers=headers)
        return res.content

    def welcome():
        '''
        Print the welcome banner.
        '''
        os.system('cls' if os.name == 'nt' else 'clear')  # clear screen on Windows or Unix
        title = r'''
      _____  _____ _____       _    _ _    _ ____
     / ____|/ ____|_   _|     | |  | | |  | |  _ \
    | (___ | |      | |______ | |__| | |  | | |_) |
     \___ \| |      | |______||  __  | |  | |  _ <
     ____) | |____ _| |_      | |  | | |__| | |_) |
    |_____/ \_____|_____|     |_|  |_|\____/|____/
        '''
        print(title)

    if __name__ == '__main__':
        while True:
            welcome()
            request = input('Enter a URL, PMID, DOI or paper title: ')
            print('Searching...')
            downUrl = search_article(request)
            if downUrl == '':
                print('No matching paper found, please search again!')
            else:
                print('Paper link: %s' % downUrl)
                print('Downloading...')
                pdf = download_article(downUrl)
                # Strip path separators so the query string is a legal filename
                fname = request.replace('/', '_').replace('\\', '_')
                with open('%s.pdf' % fname, 'wb') as f:
                    f.write(pdf)
                print('--- Download complete ---')
            time.sleep(0.8)

Sure enough, once the code was running I breezed through the task my labmate gave me. How sweet is that? Well, that's it for today's share.
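The interactive loop above handles one query at a time, while the original chore was a list of 66 titles. A batch driver is easy to sketch; everything below (`batch_download`, its `out_dir` and `delay` parameters) is my own illustrative addition, and reading the titles out of the Excel file (e.g. with openpyxl) is left to the reader. It takes the search and download steps as plain callables, so it can be exercised without touching the network:

```python
import os
import time

def batch_download(titles, search, download, out_dir='.', delay=1.0):
    """Run a search/download pair over a whole list of paper titles.

    search(title) -> PDF url or '' ; download(url) -> bytes.
    Returns a list of (title, status) tuples.
    """
    results = []
    for title in titles:
        url = search(title)
        if not url:
            results.append((title, 'not found'))
            continue
        # Strip path separators so the title is a legal filename
        fname = title.replace('/', '_').replace('\\', '_') + '.pdf'
        with open(os.path.join(out_dir, fname), 'wb') as f:
            f.write(download(url))
        results.append((title, 'ok'))
        time.sleep(delay)  # pause between papers to be polite to the server
    return results
```

Wired up with the article's `search_article` and `download_article`, the 66-paper list becomes a single call: `batch_download(titles, search_article, download_article)`.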

Copyright Notice
This article was created by [程序员霖霖]. Please include a link to the original when reposting. Thank you.
https://my.oschina.net/u/4636319/blog/4915918
