Python automating a wget script with login required
Problem description
I need to automate a download process from a site which requires the following:
- Sending an HTTP POST request containing your username and password
- I should get back a cookie (probably containing a session ID)
- Sending an HTTP GET request for the files, sending my cookie details in the HTTP headers
Using wget now, I must first login with a password (open a session?):
wget --no-check-certificate -O /dev/null --save-cookies auth.rda_ucar_edu --post-data="email=name@domain.edu&passwd=5555&action=login" https://rda.ucar.edu/cgi-bin/login
then, I retrieve the files I need:
wget --no-check-certificate -N --load-cookies auth.rda_ucar_edu http://rda.ucar.edu/data/ds608.0/3HRLY/1979/NARRflx_197901_0916.tar
Is there a nice way to do this in Python? I have tried many ways and have not gotten this to work. The following python code seems to log me in correctly. However, I believe I need to keep the session live while I download my data?
import urllib
import urllib2

url = 'https://rda.ucar.edu/cgi-bin/login'
# field name 'passwd' matches the wget --post-data string above
values = {'email': 'name@domain.edu', 'passwd': '5555', 'action': 'login'}
data = urllib.urlencode(values)  # already an ASCII str in Python 2
req = urllib2.Request(url, data)  # passing a data argument makes this a POST
response = urllib2.urlopen(req)
print response.read()
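One likely problem with the snippet above is that plain urllib2 discards the cookie the login sets, so nothing ties the later download to the session. A cookie-aware opener keeps it across requests; here is a minimal sketch in Python 3 terms (urllib.request and http.cookiejar are the successors of Python 2's urllib2 and cookielib; URL and form fields are copied from the question, and the actual network calls are left commented out):

```python
import urllib.request
import urllib.parse
import http.cookiejar

# An opener built around a CookieJar keeps cookies across requests
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

login_url = 'https://rda.ucar.edu/cgi-bin/login'
values = {'email': 'name@domain.edu', 'passwd': '5555', 'action': 'login'}
data = urllib.parse.urlencode(values).encode('ascii')

# opener.open(login_url, data)  # login: the server's Set-Cookie lands in `jar`
# opener.open(file_url)         # same opener resends those cookies automatically
```

Every request made through the same `opener` shares the jar, which is exactly the "keep the session live" behaviour the question asks about.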
I have also tried:
from requests import session

with session() as c:
    c.post(url, values)
    request = c.get('http://rda.ucar.edu/data/ds608.0/3HRLY/1979/NARRflx_197901_0108.tar')
Any suggestions would be helpful.
Recommended answer
You need to save your cookies.
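With requests this is what a `Session` object does: cookies set by the login POST are stored and resent on every later request made through the same session. A hedged sketch of the full flow (URLs and form fields are taken from the question; `fetch_file` and `out_path` are illustrative names, and the code is untested against the real site):

```python
import requests

def fetch_file(login_url, file_url, credentials, out_path):
    # One Session means one shared cookie jar for both requests
    with requests.Session() as s:
        resp = s.post(login_url, data=credentials)  # server's Set-Cookie is stored
        resp.raise_for_status()
        # The saved cookies ride along with this GET automatically
        with s.get(file_url, stream=True) as r:
            r.raise_for_status()
            with open(out_path, "wb") as f:
                for chunk in r.iter_content(chunk_size=64 * 1024):
                    f.write(chunk)
    return out_path
```

The `stream=True` / `iter_content` pairing writes the tar file to disk in chunks instead of loading it into memory, which matters for large data files like these.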
It's easier to just use a third-party lib like mechanize or scrapy, though.