My scraper fails to get all the items from a webpage



I've written some code in Python with Selenium to parse the product names from a webpage. A few "load more" buttons become visible as the browser scrolls down, and the page only shows its full content once it has been scrolled all the way to the bottom and there is no "load more" button left to click. My scraper seems to work, but I'm not getting all the results: there are around 200 products on that page and I'm only getting about 90 of them. What should I change in my scraper to get them all? Thanks in advance.

The webpage I'm dealing with: Page_Link

This is the script I'm trying with:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("put_above_url_here")
wait = WebDriverWait(driver, 10)

page = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".listing_item")))
# Scroll a fixed number of times, clicking the load-more button whenever it shows up.
for scroll in range(17):
    page.send_keys(Keys.PAGE_DOWN)
    time.sleep(2)
    try:
        load = driver.find_element_by_css_selector(".lm-btm")
        load.click()
    except Exception:
        pass

# Collect whatever product names have been loaded so far.
for item in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "[id^=item_]"))):
    name = item.find_element_by_css_selector(".pro-name.el2").text
    print(name)
driver.quit()
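
One way to confirm where the items are being lost is to log how many product nodes are in the DOM after each pass: if the count stops growing while a load-more button is still on the page, the fixed range(17) is the culprit. A minimal diagnostic sketch of the same loop, assuming the page still uses the .lm-btm and [id^=item_] selectors from the script above:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("put_above_url_here")  # same placeholder as above; use the page link
wait = WebDriverWait(driver, 10)

page = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, ".listing_item")))
for scroll in range(17):
    page.send_keys(Keys.PAGE_DOWN)
    time.sleep(2)
    try:
        driver.find_element_by_css_selector(".lm-btm").click()
    except Exception:
        pass
    # Print how many product nodes are in the DOM after this pass.
    print(scroll + 1, len(driver.find_elements_by_css_selector("[id^=item_]")))
driver.quit()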

Solution

Instead of scrolling a fixed number of times, hide the sticky header (so it can't intercept the click) and keep clicking the LOAD MORE button in a loop until it no longer appears; only then collect the items. Try the code below to get the required data:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.purplle.com/search?q=hair%20fall%20shamboo")
wait = WebDriverWait(driver, 10)

# Hide the sticky header so it can't cover or intercept the click on the button.
header = driver.find_element_by_tag_name("header")
driver.execute_script("arguments[0].style.display='none';", header)

# Keep scrolling and clicking "LOAD MORE" until the button no longer appears.
while True:
    try:
        page = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".listing_item")))
        driver.execute_script("arguments[0].scrollIntoView();", page)
        page.send_keys(Keys.END)
        load = wait.until(EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, "LOAD MORE")))
        driver.execute_script("arguments[0].scrollIntoView();", load)
        load.click()
        # The clicked button goes stale once the next batch of items is injected.
        wait.until(EC.staleness_of(load))
    except Exception:
        # No clickable "LOAD MORE" left within the timeout: everything is loaded.
        break

# The full list is now present; collect every product name.
for item in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "[id^=item_]"))):
    name = item.find_element_by_css_selector(".pro-name.el2").text
    print(name)
driver.quit()
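
If the button text or markup ever changes, an alternative (not part of the original answer) is to drive the loop off the item count itself: keep jumping to the bottom of the page, click any load-more button via JavaScript, and stop once the number of product nodes stops growing. A rough sketch of that idea, reusing the same .lm-btm, [id^=item_] and .pro-name.el2 selectors and written with the find_element(By, ...) calls that replace the find_element_by_* helpers deprecated and later removed in Selenium 4:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.purplle.com/search?q=hair%20fall%20shamboo")

last_count = 0
while True:
    # Jump to the bottom so the next batch (or the load-more button) renders.
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)
    # Click the load-more button through JavaScript so a sticky header
    # cannot intercept the click.
    buttons = driver.find_elements(By.CSS_SELECTOR, ".lm-btm")
    if buttons:
        driver.execute_script("arguments[0].click();", buttons[0])
        time.sleep(2)
    count = len(driver.find_elements(By.CSS_SELECTOR, "[id^=item_]"))
    if count == last_count:
        break  # no new products appeared, so the list is complete
    last_count = count

for item in driver.find_elements(By.CSS_SELECTOR, "[id^=item_]"):
    print(item.find_element(By.CSS_SELECTOR, ".pro-name.el2").text)
driver.quit()

Counting items instead of waiting on a specific button makes the stop condition independent of the button's label, at the cost of the fixed sleeps.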
