Consecutive requests with python Requests.Session() not working
Problem description
I am trying to do this:
import requests

s = requests.Session()
login_data = dict(userName='user', password='pwd')

# log in
ra = s.post('http://example/checklogin.php', data=login_data)
print(ra.content)
print(ra.headers)

# second request, passing the session's own cookies back in explicitly
ans = dict(answer='5')
f = s.cookies
r = s.post('http://example/level1.php', data=ans, cookies=f)
print(r.content)
But the second POST request returns a 404 error. Can someone help me figure out why?
Recommended answer
In the latest version of requests, the Session object comes equipped with cookie persistence; see the requests Session objects docs. So you don't need to add the cookies manually. Just:
import requests

s = requests.Session()
login_data = dict(userName='user', password='pwd')

# log in; the session stores any cookies the server sets
ra = s.post('http://example/checklogin.php', data=login_data)
print(ra.content)
print(ra.headers)

# the stored cookies are sent automatically on the next request
ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans)
print(r.content)
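To confirm that the second request now goes through, you can check the status code of the response. A minimal sketch (the URLs are the placeholders from the question):

if r.status_code == 200:
    print('second request succeeded')
else:
    print('still failing:', r.status_code)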
Just print the cookies to check whether you are logged in:
for cookie in s.cookies:
    print(cookie.name, cookie.value)
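If you would rather inspect the cookies as a plain dict, requests also ships a converter for cookie jars; a one-line sketch:

print(requests.utils.dict_from_cookiejar(s.cookies))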
And is the example site yours? If not, the site may be rejecting bots/crawlers. You can change your request's User-Agent so it looks like you are using a browser.
For example:
import requests

s = requests.Session()
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
}
login_data = dict(userName='user', password='pwd')

# send the browser-like User-Agent with each request
ra = s.post('http://example/checklogin.php', data=login_data, headers=headers)
print(ra.content)
print(ra.headers)

ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans, headers=headers)
print(r.content)
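As a variation on passing headers into every call, you can set them once on the session with s.headers.update(); every request made through that session then carries the User-Agent automatically. A minimal sketch using the same placeholder URL:

import requests

s = requests.Session()
s.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
})

# every request through this session now sends the custom User-Agent
ra = s.post('http://example/checklogin.php', data=dict(userName='user', password='pwd'))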