Download large file in python with requests


Problem Description

Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is that it's not possible to keep the whole file in memory; I need to read it in chunks. And that is the problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024): 
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return 

For some reason it doesn't work this way; it still loads the whole response into memory before saving it to a file.

UPDATE

If you need a small client (Python 2.x/3.x) that can download big files from FTP, you can find it here. It supports multithreading and reconnects (it monitors connections), and it tunes socket parameters for the download task.
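
The linked client itself isn't reproduced here, but the core chunked-FTP idea can be sketched with the standard library's ftplib. In this minimal sketch the host, remote path, and chunk size are placeholders, and the multithreading and reconnect logic of the linked client is omitted:

import ftplib

def ftp_download(host, remote_path, local_path, chunk_size=512 * 1024):
    # Anonymous login for the sketch; real code would pass credentials.
    with ftplib.FTP(host) as ftp:
        ftp.login()
        with open(local_path, 'wb') as f:
            # retrbinary streams the file and calls f.write for each block,
            # so memory usage stays bounded by blocksize.
            ftp.retrbinary('RETR ' + remote_path, f.write, blocksize=chunk_size)

ftp_download('ftp.example.com', '/pub/big.iso', 'big.iso')  # hypothetical host/path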

Recommended Answer

With the following streaming code, Python memory usage is restricted regardless of the size of the downloaded file:

import requests

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                # If you have a chunk-encoded response, uncomment the
                # `if` below and set chunk_size to None.
                # if chunk:
                f.write(chunk)
    return local_filename
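
For example, a call like the following (the URL is a placeholder) streams the file to disk under its remote name:

download_file('https://example.com/big.iso')  # writes 'big.iso' in the working directory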

Note that the number of bytes returned by iter_content is not exactly the chunk_size; it's expected to be a random number that is often far bigger, and is expected to be different in every iteration.
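
A quick way to observe this yourself (again with a placeholder URL) is to print the size of each chunk as it arrives:

import requests

with requests.get('https://example.com/big.iso', stream=True) as r:
    for chunk in r.iter_content(chunk_size=8192):
        print(len(chunk))  # sizes vary from one iteration to the next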

See https://requests.readthedocs.io/en/latest/user/advanced/#body-content-workflow and https://requests.readthedocs.io/en/latest/api/#requests.Response.iter_content for further reference.
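
As a related sketch (not part of the answer above), the body-content-workflow document also describes Response.raw, the undecoded urllib3 stream, which can be copied to disk with shutil.copyfileobj. Note that r.raw does not decode Content-Encoding such as gzip unless you set r.raw.decode_content = True first:

import shutil
import requests

def download_file_raw(url):
    # Alternative streaming approach: copy the raw socket stream to disk.
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            # copyfileobj reads the stream in fixed-size buffers, so the
            # whole body never sits in memory at once.
            shutil.copyfileobj(r.raw, f)
    return local_filename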
