Get a file from an ASPX webpage using Python


Problem description


I'm trying to download a CSV file from this site, but I keep getting an HTML file when I'm using this piece of code (which used to work until a few weeks ago), or when I'm using wget.

import urllib2

url = "http://.....aspx"

file_name = "%s.csv" % url.split('/')[3]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break

    file_size_dl += len(buffer)
    f.write(buffer)
    # print an in-place progress counter (backspaces rewind the line)
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,

f.close()


How can I get this file again with Python?

Thank you.

Recommended answer


Solved by using the Requests library instead of urllib2:

import requests

url = "http://www.....aspx?download=1"

file_name = "Data.csv"
# stream=True defers downloading the body so iter_content can read it in chunks
u = requests.get(url, stream=True)

file_size = int(u.headers['content-length'])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

with open(file_name, 'wb') as f:
    for chunk in u.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)
# the with block closes (and flushes) the file automatically
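Since the original symptom was getting an HTML page back instead of the CSV, it can help to inspect the response's Content-Type header before writing anything to disk, so a server-side error page is caught early. A minimal sketch (the `looks_like_html` helper is an illustration, not part of the original answer; plain dicts stand in for response headers, but `requests` response headers support `.get()` the same way):

```python
def looks_like_html(headers):
    """Return True when the response headers suggest an HTML error page
    rather than the CSV that was requested."""
    content_type = headers.get("content-type", "").lower()
    return "text/html" in content_type

# Example with plain dicts standing in for response headers:
print(looks_like_html({"content-type": "text/html; charset=utf-8"}))  # True
print(looks_like_html({"content-type": "text/csv"}))                  # False
```

In the download loop above, this check could run right after `requests.get` and raise an error instead of silently saving an HTML page as `Data.csv`.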
