Building a paper downloader in Python to collect papers quickly

Programmer Lin Lin 2021-01-20 18:16:34


In the course of research and study, we inevitably need to consult the literature, and I suspect many of you already know SCI-HUB. It is a remarkable tool that lets us search for papers and download their full texts. You could say SCI-HUB has been a real boon to countless researchers.

However, one day a senior labmate asked me: "xx, could you download some articles for me?" Always happy to help, I agreed without hesitation: "Such a trifle? Leave it to me~"

Then I received an Excel file with a list of 66 papers quietly sitting inside (at that moment my heart sank: "So this is what 'some articles' means..."). A rough calculation: copy, paste, download, at least 30 seconds per paper, times 66... No, that I could not endure!

Obviously, downloading them one by one is not my style. So I decided to write a paper downloader to do the heavy lifting for me.

 


 

I. Code analysis

The analysis follows the same approach as always: capture and analyze the traffic -> simulate the request -> integrate the code. Good old kimol gets to move bricks once again, so I won't belabor the details today.

1. Search for papers

The search accepts a paper's URL, PMID, DOI, or title; we POST it to SCI-HUB and use the bs4 library to extract the link to the PDF full text. The code is as follows:

 
    def search_article(artName):
        '''
        Search for a paper.
        ---------------
        Input: the paper's URL, PMID, DOI, or title
        ---------------
        Output: the search result ('' if nothing is found, otherwise the PDF link)
        '''
        url = 'https://www.sci-hub.ren/'
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Content-Type': 'application/x-www-form-urlencoded',
                   # note: no hard-coded 'Content-Length' here; the original fixed it at '123',
                   # which is wrong for variable-length queries, and requests computes it anyway
                   'Origin': 'https://www.sci-hub.ren',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        data = {'sci-hub-plugin-check': '',
                'request': artName}
        res = requests.post(url, headers=headers, data=data)
        html = res.text
        soup = BeautifulSoup(html, 'html.parser')
        iframe = soup.find(id='pdf')  # the PDF is embedded in an iframe with id="pdf"
        if iframe is None:  # no matching article found
            return ''
        else:
            downUrl = iframe['src']
            if 'http' not in downUrl:  # protocol-relative src, e.g. //sci-hub.ren/...
                downUrl = 'https:' + downUrl
            return downUrl
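
To sanity-check the function, assuming requests and BeautifulSoup are imported as in the complete script below, a quick call might look like this (the DOI here is only a placeholder, not a real paper):

    link = search_article('10.1000/xyz123')  # placeholder DOI for illustration
    if link:
        print('PDF link:', link)
    else:
        print('No match found on SCI-HUB.')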

2. Download the paper

Once we have the paper's link, a simple GET request via requests is enough to download it:

 
    def download_article(downUrl):
        '''
        Download the paper from its link.
        ----------------------
        Input: the paper's PDF link
        ----------------------
        Output: the PDF file as binary data
        '''
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        res = requests.get(downUrl, headers=headers)
        return res.content
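
The function above trusts whatever the server sends back. As a more defensive sketch of my own (not part of the original script), one could raise on HTTP errors and check that the payload really is a PDF, since the site may answer with an HTML error page instead:

    def download_article_checked(downUrl):
        '''A hedged variant: fail loudly on HTTP errors and sanity-check the payload.'''
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0'}
        res = requests.get(downUrl, headers=headers, timeout=30)
        res.raise_for_status()  # a 4xx/5xx error page would otherwise be saved as a broken PDF
        if not res.content.startswith(b'%PDF'):  # every PDF file begins with the magic bytes %PDF
            print('Warning: the response does not look like a PDF')
        return res.content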

II. Complete code

Integrating the two functions above, my complete code is as follows:

 
    # -*- coding: utf-8 -*-
    """
    Created on Tue Jan 5 16:32:22 2021

    @author: kimol_love
    """
    import os
    import re
    import time
    import requests
    from bs4 import BeautifulSoup

    def search_article(artName):
        '''
        Search for a paper.
        ---------------
        Input: the paper's URL, PMID, DOI, or title
        ---------------
        Output: the search result ('' if nothing is found, otherwise the PDF link)
        '''
        url = 'https://www.sci-hub.ren/'
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Content-Type': 'application/x-www-form-urlencoded',
                   'Origin': 'https://www.sci-hub.ren',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        data = {'sci-hub-plugin-check': '',
                'request': artName}
        res = requests.post(url, headers=headers, data=data)
        html = res.text
        soup = BeautifulSoup(html, 'html.parser')
        iframe = soup.find(id='pdf')
        if iframe is None:  # no matching article found
            return ''
        else:
            downUrl = iframe['src']
            if 'http' not in downUrl:
                downUrl = 'https:' + downUrl
            return downUrl

    def download_article(downUrl):
        '''
        Download the paper from its link.
        ----------------------
        Input: the paper's PDF link
        ----------------------
        Output: the PDF file as binary data
        '''
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:84.0) Gecko/20100101 Firefox/84.0',
                   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                   'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Connection': 'keep-alive',
                   'Upgrade-Insecure-Requests': '1'}
        res = requests.get(downUrl, headers=headers)
        return res.content

    def welcome():
        '''
        Print the welcome banner.
        '''
        os.system('cls')  # clears the screen on Windows; use 'clear' on Linux/macOS
        title = r'''
          _____  _____ _____       _    _ _    _ ____
         / ____|/ ____|_   _|     | |  | | |  | |  _ \
        | (___ | |      | | ______| |__| | |  | | |_) |
         \___ \| |      | ||______|  __  | |  | |  _ <
         ____) | |____ _| |_      | |  | | |__| | |_) |
        |_____/ \_____|_____|     |_|  |_|\____/|____/

        '''
        print(title)

    if __name__ == '__main__':
        while True:
            welcome()
            request = input('Please enter a URL, PMID, DOI, or paper title: ')
            print('Searching...')
            downUrl = search_article(request)
            if downUrl == '':
                print('No matching paper was found, please try again!')
            else:
                print('Paper link: %s' % downUrl)
                print('Downloading...')
                pdf = download_article(downUrl)
                # DOIs and URLs contain '/', which is illegal in filenames, so sanitize first
                safe_name = re.sub(r'[\\/:*?"<>|]', '_', request)
                with open('%s.pdf' % safe_name, 'wb') as f:
                    f.write(pdf)
                print('--- Download complete ---')
            time.sleep(0.8)  # brief pause so the message is readable before the screen clears
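
And since this whole story started with a 66-row Excel list, here is a minimal batch sketch built on the same two functions. It assumes pandas (with openpyxl) is installed and that the titles sit in a column named 'title' of a file called papers.xlsx; both names are placeholders of mine, not from the original file:

    import re
    import time
    import pandas as pd  # assumed dependency; used only to read the Excel list

    for title in pd.read_excel('papers.xlsx')['title'].astype(str):  # 'papers.xlsx' and 'title' are assumed names
        downUrl = search_article(title)
        if not downUrl:
            print('Not found:', title)
            continue
        safe_name = re.sub(r'[\\/:*?"<>|]', '_', title)  # strip characters illegal in filenames
        with open(safe_name + '.pdf', 'wb') as f:
            f.write(download_article(downUrl))
        print('Saved:', safe_name)
        time.sleep(1)  # pause between requests to be polite to the server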

As expected, the code ran, and I breezed through the task my senior labmate had given me. Pretty sweet, no? Well, that's all for today's share.

Copyright notice
This article was created by [Programmer Lin Lin]. If you repost it, please include a link to the original. Thank you.
https://pythonmana.com/2021/01/20210120170559935u.html
