Download large file in python with requests


Problem description


Requests is a really nice library. I'd like to use it for downloading big files (>1GB). The problem is it's not possible to keep whole file in memory; I need to read it in chunks. And this is a problem with the following code:

import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024): 
        if chunk: # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return 


For some reason it doesn't work this way: it still loads the response into memory before it is saved to a file.

Update


If you need a small client (Python 2.x/3.x) which can download big files from FTP, you can find it here. It supports multithreading and reconnects (it monitors connections), and it also tunes socket parameters for the download task.
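For reference, the core of a chunked FTP download can be sketched with the standard-library ftplib (the host, remote path, and credentials below are placeholders; the client linked above adds multithreading and reconnection on top of this basic pattern):

```python
import ftplib

def ftp_download(host, remote_path, local_filename,
                 user='anonymous', passwd='', blocksize=32768):
    """Download a remote file over FTP in fixed-size blocks."""
    with ftplib.FTP(host) as ftp:
        ftp.login(user=user, passwd=passwd)
        with open(local_filename, 'wb') as f:
            # retrbinary invokes the callback (f.write) once per received
            # block, so memory use stays bounded by blocksize
            ftp.retrbinary(f'RETR {remote_path}', f.write,
                           blocksize=blocksize)
    return local_filename
```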

Answer


With the following streaming code, the Python memory usage is restricted regardless of the size of the downloaded file:

def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter below
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192): 
                # If the response uses chunked transfer encoding, uncomment
                # the if below and set chunk_size to None.
                #if chunk:
                f.write(chunk)
    return local_filename
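An alternative sketch (not part of the original answer) streams the raw urllib3 response straight to disk with shutil.copyfileobj, avoiding the chunk loop entirely; note that r.raw bypasses requests' automatic decompression unless decode_content is set:

```python
import shutil
import requests

def download_file_raw(url, local_filename=None):
    """Stream the raw response body straight to disk."""
    if local_filename is None:
        local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        # Let urllib3 transparently decompress gzip/deflate bodies,
        # which r.raw would otherwise pass through untouched
        r.raw.decode_content = True
        with open(local_filename, 'wb') as f:
            shutil.copyfileobj(r.raw, f)
    return local_filename
```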


Note that the number of bytes returned by iter_content is not exactly chunk_size; it is often considerably larger, and it varies from iteration to iteration.
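Because the chunk sizes vary, a progress indicator should count the bytes actually received rather than multiplying iterations by chunk_size. A minimal sketch, assuming the server sends a Content-Length header (it may be absent for chunked transfer encoding):

```python
import requests

def download_with_progress(url, local_filename, chunk_size=8192):
    """Stream a download while printing progress from Content-Length."""
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        # Content-Length may be missing; fall back to byte count only
        total = int(r.headers.get('Content-Length', 0))
        written = 0
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
                written += len(chunk)  # count actual bytes, not iterations
                if total:
                    print(f'\r{written * 100 // total}% '
                          f'({written}/{total} bytes)', end='')
    print()
    return written
```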


See body-content-workflow and Response.iter_content for further reference.

