How to stay alive in HTTP/1.1 using python urllib


Problem description

For now I am doing this: (Python 3, urllib)

import urllib.request

url = 'someurl'
headers = (('Host', 'somehost'),
           ('Connection', 'keep-alive'),
           ('Accept-Encoding', 'gzip,deflate'))
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
for h in headers:
    opener.addheaders.append(h)
data = 'some login data'  # username, pw etc.
opener.open('somesite/login.php', data)

res = opener.open(someurl)
data = res.read()
# ... some stuff here ...
res1 = opener.open(someurl2)
data = res1.read()
# etc.

What is happening: I keep getting gzipped responses from the server, and I stay logged in (I am fetching some content which is not available if I were not logged in), but I think the connection is dropping between every opener.open request.

I think that because connecting is very slow and it seems like a new connection is opened every time. Two questions:

a) How do I test whether the connection is in fact staying alive or dying? (A rough check is sketched below.)
b) How do I make it stay alive between requests for other URLs?
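One rough way to probe (a), as a minimal sketch: check whether the server even offers keep-alive via the Connection response header, and time back-to-back requests. The URL below is a placeholder, not from the original question.

import time
import urllib.request

test_url = 'http://somehost/somepage'  # placeholder; substitute the real site

# If the server answers 'Connection: close', it will drop the socket
# after every response no matter what the client asked for.
res = urllib.request.urlopen(test_url)
print(res.headers.get('Connection'))

# Rough timing check: a reused connection skips the TCP handshake, so a
# second request over the same socket would be noticeably faster. Plain
# urllib opens a fresh connection per urlopen() call, so here both
# timings include a full handshake.
start = time.time()
urllib.request.urlopen(test_url).read()
middle = time.time()
urllib.request.urlopen(test_url).read()
end = time.time()
print('first: %.3fs  second: %.3fs' % (middle - start, end - middle))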

Take care :)

Recommended answer

This will be a very delayed answer, but:

You should look at urllib3. It is for Python 2.x, but you'll get the idea when you see their README document.
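To illustrate the idea, here is a minimal sketch of urllib3's connection pooling (assuming urllib3 is installed; the URLs are placeholders): a PoolManager keeps sockets open and reuses them for requests to the same host, which is exactly the keep-alive behaviour asked about in (b).

import urllib3

# A PoolManager keeps connections open and reuses them for requests to
# the same host (HTTP keep-alive) instead of reconnecting every time.
http = urllib3.PoolManager()

r1 = http.request('GET', 'http://somehost/page1')  # placeholder URLs;
r2 = http.request('GET', 'http://somehost/page2')  # same host, same socket
print(r1.status, len(r1.data))
print(r2.status, len(r2.data))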

And yes, urllib by default doesn't keep connections alive. I'm now implementing urllib3 for Python 3 so it stays in my toolbag :)
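Not what the answer recommends, but for completeness: the stdlib's http.client (the layer underneath urllib) can also hold one connection open across several requests, as long as each response is read fully before the next request is sent. A minimal sketch, with a placeholder host:

import http.client

conn = http.client.HTTPConnection('somehost')  # placeholder host

conn.request('GET', '/page1')
r1 = conn.getresponse()
body1 = r1.read()  # must read the full response before reusing the socket

conn.request('GET', '/page2')  # reuses the same TCP connection
r2 = conn.getresponse()
body2 = r2.read()

conn.close()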

