Login to page with Selenium works, parsing with BS4 works, but not the combination of both


Question

Getting some data from the WordPress forums requires two parts: login and parsing. Both work very well as standalone parts: I can log in with Selenium, and I can parse (scrape) the data with BS4. But when I combine the two parts, I run into session issues that I cannot solve.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import time
 
#--| Setup
options = Options()
#options.add_argument("--headless")
#options.add_argument("--window-size=1980,1020")
#options.add_argument('--disable-gpu')
browser = webdriver.Chrome(executable_path=r'C:\chrome\chromedriver.exe', options=options)
#--| Parse or automation
browser.get("https://login.wordpress.org/?locale=en_US")
time.sleep(2)
user_name = browser.find_element_by_css_selector('#user_login')
user_name.send_keys("the username ")
password = browser.find_element_by_css_selector('#user_pass')
password.send_keys("the pass")
time.sleep(5)
submit = browser.find_elements_by_css_selector('#wp-submit')[0]
submit.click()
 
# Example send page source to BeautifulSoup or selenium for parse
soup = BeautifulSoup(browser.page_source, 'lxml')
use_bs4 = soup.find('title')
print(use_bs4.text)
#print('*' * 25)
#use_sel = browser.find_elements_by_css_selector('div > div._1vC4OE')
#print(use_sel[0].text)

Note: this works perfectly. You can check it with the following combination:

login: pluginfan
pass: testpasswd123

See below the parser and scraper with BS4, which works outstandingly on its own.

#!/usr/bin/env python3
 
import requests
from bs4 import BeautifulSoup as BS
 
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'}) # this page needs header 'User-Agent`
 
url = 'https://wordpress.org/support/plugin/advanced-gutenberg/page/{}/'
 
for page in range(1, 3):
    print('\n--- PAGE:', page, '---\n')
 
    # read page with list of posts
    r = session.get(url.format(page))
 
    soup = BS(r.text, 'html.parser')
 
    all_uls = soup.find('li', class_="bbp-body").find_all('ul')
 
    for number, ul in enumerate(all_uls, 1):
 
        print('\n--- post:', number, '---\n')
 
        a = ul.find('a')
        if a:
            post_url = a['href']
            post_title = a.text
 
            print('text:', post_url)
            print('href:', post_title)
            print('---------')
 
            # read page with post content
            r = session.get(post_url)
 
            sub_soup = BS(r.text, 'html.parser')
 
            post_content = sub_soup.find(class_='bbp-topic-content').get_text(strip=True, separator='\n')
            print(post_content)

But the combination of both does not work. I guess I cannot create a new session with Requests but must work with the session that Selenium created; I have some issues running the parser together with the login part.

The standalone parser gives back valid content, which is fine:

--- post: 1 ---
 
text: https://wordpress.org/support/topic/advanced-button-with-icon/
href: Advanced Button with Icon?
---------
is it not possible to create a button with a font awesome icon to left / right?
 
--- post: 2 ---
 
text: https://wordpress.org/support/topic/expand-collapse-block/
href: Expand / Collapse block?
---------
At the very bottom I have an expandable requirements.
Do you have a better block? I would like to use one of yours if poss.
The page I need help with:
 
--- post: 3 ---
 
text: https://wordpress.org/support/topic/login-form-not-formatting-correctly/
href: Login Form Not Formatting Correctly
---------
Getting some weird formatting with the email & password fields running on outside the form.
Tried on two different sites.
Thanks
 
..... [,,,,,] ....
 
--- post: 22 ---
 
text: https://wordpress.org/support/topic/settings-import-export-2/
href: Settings Import & Export
---------
Traceback (most recent call last):
  File "C:\Users\Kasper\Documents\_f_s_j\_mk_\_dev_\bs\____wp_forum_parser_without_login.py", line 43, in <module>
    print(post_content)
  File "C:\Program Files\Python37\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\U0001f642' in position 95: character maps to <undefined>
[Finished in 14.129s]
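The traceback at the end is a separate issue from the login problem: it comes from printing the emoji character '\U0001f642' to a Windows console that uses the cp1252 encoding. A minimal sketch of one possible workaround (the safe_print helper is hypothetical, not part of the original scripts):

```python
# Hypothetical helper: replace characters that the console encoding
# (cp1252 on many Windows setups) cannot represent, instead of
# letting print() raise UnicodeEncodeError.
def safe_print(text, encoding='cp1252'):
    print(text.encode(encoding, errors='replace').decode(encoding))

safe_print('slightly smiling face: \U0001f642')  # prints "slightly smiling face: ?"
```

Routing every print(post_content) through such a helper keeps the scraper running even when a post contains emoji.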

Any ideas?

Answer

Edit: in both versions I added saving to a CSV file.

If you have Selenium and requests then there are three possibilities:

  • use Selenium to login and to get pages.
  • use requests.Session to login and to get pages.
  • use Selenium to login, get session information (cookies) from Selenium, and use it in requests.
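The third possibility is not implemented in the code below; a minimal sketch of the idea, assuming the usual shape of the dicts returned by Selenium's browser.get_cookies() (the helper name and demo cookie are made up for illustration):

```python
import requests

def selenium_cookies_to_session(cookies):
    """Copy Selenium-style cookie dicts (the shape returned by
    browser.get_cookies()) into a fresh requests.Session."""
    session = requests.Session()
    session.headers.update({'User-Agent': 'Mozilla/5.0'})
    for c in cookies:
        session.cookies.set(c['name'], c['value'],
                            domain=c.get('domain', ''),
                            path=c.get('path', '/'))
    return session

# demo cookie shaped like Selenium's get_cookies() output
demo = [{'name': 'wordpress_logged_in', 'value': 'abc123',
         'domain': '.wordpress.org', 'path': '/'}]
s = selenium_cookies_to_session(demo)
print(s.cookies.get('wordpress_logged_in'))  # abc123
```

After the transfer, every s.get(...) call carries the logged-in cookies, so the BS4 parser from the question could run unchanged on top of the Selenium login.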

Using Selenium to login and to get pages is much simpler, but it works slower than requests.

You only need to use:

  • browser.get(url) instead of r = session.get(post_url)
  • BeautifulSoup(browser.page_source, ...) instead of BeautifulSoup(r.text, ...)

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
import time
import csv

#--| Setup
options = Options()
#options.add_argument("--headless")
#options.add_argument("--window-size=1980,1020")
#options.add_argument('--disable-gpu')
browser = webdriver.Chrome(executable_path=r'C:\chrome\chromedriver.exe', options=options)
#browser = webdriver.Firefox()

# --- login ---

browser.get("https://login.wordpress.org/?locale=en_US")
time.sleep(2)

user_name = browser.find_element_by_css_selector('#user_login')
user_name.send_keys("my_login")
password = browser.find_element_by_css_selector('#user_pass')
password.send_keys("my_password")
#time.sleep(5)
submit = browser.find_elements_by_css_selector('#wp-submit')[0]
submit.click()
 
# Example send page source to BeautifulSoup or selenium for parse
soup = BeautifulSoup(browser.page_source, 'lxml')
use_bs4 = soup.find('title')
print(use_bs4.text)
#print('*' * 25)
#use_sel = browser.find_elements_by_css_selector('div > div._1vC4OE')
#print(use_sel[0].text)

# --- pages ---

data = []

url = 'https://wordpress.org/support/plugin/advanced-gutenberg/page/{}/'
 
for page in range(1, 3):
    print('\n--- PAGE:', page, '---\n')
 
    # read page with list of posts
    browser.get(url.format(page))
    soup = BeautifulSoup(browser.page_source, 'html.parser') # 'lxml'
 
    all_uls = soup.find('li', class_="bbp-body").find_all('ul')
 
    for number, ul in enumerate(all_uls, 1):
 
        print('\n--- post:', number, '---\n')
 
        a = ul.find('a')
        if a:
            post_url = a['href']
            post_title = a.text
 
            print('href:', post_url)
            print('text:', post_title)
            print('---------')
 
            # read page with post content
            browser.get(post_url)
            sub_soup = BeautifulSoup(browser.page_source, 'html.parser')
 
            post_content = sub_soup.find(class_='bbp-topic-content').get_text(strip=True, separator='\n')
            print(post_content)

            # keep on list as dictionary
            data.append({
                'href': post_url,
                'text': post_title,
                'content': post_content,
            })
            
# --- save ---

with open("wp-forum-conversations.csv", "w", newline='', encoding='utf-8') as f:  # newline='' avoids blank rows on Windows; utf-8 handles emoji
    writer = csv.DictWriter(f, ["text", "href", "content"])
    writer.writeheader()
    writer.writerows(data)  # all rows at once


requests works much faster, but it needs more work with DevTools in Firefox/Chrome to see all the fields in the form and what other values it sends to the server. You also need to see where it redirects when the login is correct. BTW: don't forget to turn off JavaScript before using DevTools, because requests doesn't run JavaScript and the page may send different values in the form (and it really does send different fields).

It needs a full User-Agent to work correctly.

First I load the login page and copy all the values from the <input> fields, to send them together with the login and password.

After login I check whether it was redirected to a different page, to confirm that it logged in correctly. You can also check whether the page displays your name.

import requests
from bs4 import BeautifulSoup
import csv

s = requests.Session()
s.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:80.0) Gecko/20100101 Firefox/80.0' # it needs full user-agent
})

# --- get page with login form ---

r = s.get("https://login.wordpress.org/?locale=en_US")
soup = BeautifulSoup(r.text, 'html.parser')

# get all fields in form

payload = {}

for field in soup.find_all('input'):
    name = field.get('name')
    if name:  # some inputs (e.g. buttons) may have no name attribute
        payload[name] = field.get('value', '')
        print(name, '=', payload[name])

# --- login ---

payload['log'] = 'my_login'
payload['pwd'] = 'my_password'

r = s.post('https://login.wordpress.org/wp-login.php', data=payload)
print('redirected to:', r.url)

# --- check if logged in ---

# check if logged in - check if redirected to different page
if r.url.startswith('https://login.wordpress.org/wp-login.php'):
    print('Problem to login')
    exit()

# check if logged in - check displayed name
url = 'https://wordpress.org/support/plugin/advanced-gutenberg/page/1/'
r = s.get(url)

soup = BeautifulSoup(r.text, 'html.parser')
name = soup.find('span', {'class': 'display-name'})
if not name:
    print('Problem to login')
    exit()
else:    
    print('name:', name.text)
    
# --- pages ---

data = []

url = 'https://wordpress.org/support/plugin/advanced-gutenberg/page/{}/'
 
for page in range(1, 3):
    print('\n--- PAGE:', page, '---\n')
 
    # read page with list of posts
    r = s.get(url.format(page))
    soup = BeautifulSoup(r.text, 'html.parser') # 'lxml'
 
    all_uls = soup.find('li', class_="bbp-body").find_all('ul')
 
    for number, ul in enumerate(all_uls, 1):
 
        print('\n--- post:', number, '---\n')
 
        a = ul.find('a')
        if a:
            post_url = a['href']
            post_title = a.text
 
            print('href:', post_url)
            print('text:', post_title)
            print('---------')
 
            # read page with post content
            r = s.get(post_url)
            sub_soup = BeautifulSoup(r.text, 'html.parser')
 
            post_content = sub_soup.find(class_='bbp-topic-content').get_text(strip=True, separator='\n')
            print(post_content)

            # keep on list as dictionary
            data.append({
                'href': post_url,
                'text': post_title,
                'content': post_content,
            })
            
# --- save ---

with open("wp-forum-conversations.csv", "w", newline='', encoding='utf-8') as f:  # newline='' avoids blank rows on Windows; utf-8 handles emoji
    writer = csv.DictWriter(f, ["text", "href", "content"])
    writer.writeheader()
    writer.writerows(data)  # all rows at once

