Python web scraping login
Question
I am trying to log in to a website using Python. The login URL is:
https://login.flash.co.za/apex/f?p=pwfone:login
and the 'form action' URL is shown as:
https://login.flash.co.za/apex/wwv_flow.accept
When I use Chrome's 'inspect element' while logging in manually, these are the form fields that show up (p_t02 = password):
There are a few hidden items that I'm not sure how to add into the Python code below.
When I use this code, the login page is returned:
import requests

url = 'https://login.flash.co.za/apex/wwv_flow.accept'
values = {'p_flow_id': '1500',
          'p_flow_step_id': '101',
          'p_page_submission_id': '3169092211412',
          'p_request': 'LOGIN',
          'p_t01': 'solar',
          'p_t02': 'password',
          'p_checksum': ''
          }
r = requests.post(url, data=values)
print(r.content)
How can I adjust this code to perform a login?
Chrome Network tab:
Answer
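One way to see which hidden fields the form expects is to parse the login page and list every `input type="hidden"`. A minimal sketch using BeautifulSoup, run here against sample markup standing in for the real page (the actual field names and values on login.flash.co.za will differ):

```python
from bs4 import BeautifulSoup

# Sample markup standing in for the real login page.
html = """
<form action="wwv_flow.accept" method="post">
  <input type="hidden" name="p_flow_id" value="1500">
  <input type="hidden" name="p_instance" value="123456789">
  <input type="text" name="p_t01">
</form>
"""
soup = BeautifulSoup(html, "html.parser")

# Collect only the hidden inputs; visible fields like p_t01 are skipped.
hidden = {tag["name"]: tag.get("value", "") for tag in soup.select("input[type=hidden]")}
print(hidden)
```

Running the same loop against the live page (fetched with a session, as in the answer below) shows exactly which values must be echoed back in the POST.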
This is more or less what your script should look like. Use a session to handle the cookies automatically. Fill in the username and password fields manually.
import requests
from bs4 import BeautifulSoup

logurl = "https://login.flash.co.za/apex/f?p=pwfone:login"
posturl = 'https://login.flash.co.za/apex/wwv_flow.accept'

with requests.Session() as s:
    s.headers = {"User-Agent": "Mozilla/5.0"}
    # Fetch the login page first so the session picks up the cookies
    # and the hidden form values needed for the POST.
    res = s.get(logurl)
    soup = BeautifulSoup(res.text, "lxml")
    values = {
        'p_flow_id': soup.select_one("[name='p_flow_id']")['value'],
        'p_flow_step_id': soup.select_one("[name='p_flow_step_id']")['value'],
        'p_instance': soup.select_one("[name='p_instance']")['value'],
        'p_page_submission_id': soup.select_one("[name='p_page_submission_id']")['value'],
        'p_request': 'LOGIN',
        # The form repeats p_arg_names, so collect every occurrence;
        # requests sends a list value as repeated form fields.
        'p_arg_names': [el['value'] for el in soup.select("[name='p_arg_names']")],
        'p_t01': 'username',
        'p_t02': 'password',
        'p_md5_checksum': soup.select_one("[name='p_md5_checksum']")['value'],
        'p_page_checksum': soup.select_one("[name='p_page_checksum']")['value']
    }
    r = s.post(posturl, data=values)
    print(r.content)