Scraper unable to get names from next pages


Problem description


I've written a script in Python, in combination with Selenium, to parse names from a webpage. The data on that site is not JavaScript-enabled; however, the next-page links are driven by JavaScript. As those links are of no use with the requests library, I have used Selenium to parse the data from that site, traversing 25 pages. The only problem I'm facing here is that although my scraper is able to reach the last page, clicking through all 25 pages, it only fetches the data from the first page. Moreover, the scraper keeps running even after it has clicked through to the last page. The next-page links look exactly like javascript:nextPage();. Btw, the URL of that site never changes even when I click the next-page button. How can I get all the names from the 25 pages? The CSS selector I've used in my scraper is flawless. Thanks in advance.

This is what I've written:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get("https://www.hsi.com.hk/HSI-Net/HSI-Net?cmd=tab&pageId=en.indexes.hscis.hsci.constituents&expire=false&lang=en&tabs.current=en.indexes.hscis.hsci.overview_des%5Een.indexes.hscis.hsci.constituents&retry=false")

while True:
    for name in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table.greygeneraltxt td.greygeneraltxt,td.lightbluebg"))):
        print(name.text)

    try:
        n_link = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "a[href*='nextPage']")))
        driver.execute_script(n_link.get_attribute("href"))
    except: break

driver.quit()

Recommended answer


You don't have to handle the "Next" button or somehow change the page number - all the entries are already in the page source. Try the below:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get("https://www.hsi.com.hk/HSI-Net/HSI-Net?cmd=tab&pageId=en.indexes.hscis.hsci.constituents&expire=false&lang=en&tabs.current=en.indexes.hscis.hsci.overview_des%5Een.indexes.hscis.hsci.constituents&retry=false")
# textContent also returns the text of hidden elements, unlike .text,
# so this picks up the rows of every page, not just the visible one
for name in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table.greygeneraltxt td.greygeneraltxt,td.lightbluebg"))):
    print(name.get_attribute('textContent'))

driver.quit()
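The loop above prints every matching cell as one flat stream. If you want the cells grouped per constituent, one way is to chunk that flat list - a minimal sketch with made-up cell values, assuming each row contributes three cells (the actual column count on the site may differ):

```python
# Hypothetical flat cell stream, as the Selenium loop would print it;
# assume each constituent contributes 3 cells (rank, code, name).
cells = ["1", "00001", "CKH Holdings", "2", "00002", "CLP Holdings"]

# Group the flat stream into fixed-size rows: zip over three references
# to the SAME iterator consumes the list three items at a time.
rows = list(zip(*[iter(cells)] * 3))
print(rows)
```

The `zip(*[iter(cells)] * n)` idiom is a compact stdlib way to chunk a flat sequence; adjust `3` to however many cells each table row actually yields.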


You can also try this solution if it's not mandatory for you to use Selenium:

import requests
from lxml import html

r = requests.get("https://www.hsi.com.hk/HSI-Net/HSI-Net?cmd=tab&pageId=en.indexes.hscis.hsci.constituents&expire=false&lang=en&tabs.current=en.indexes.hscis.hsci.overview_des%5Een.indexes.hscis.hsci.constituents&retry=false")
source = html.fromstring(r.content)

for name in source.xpath("//table[@class='greygeneraltxt']//td[text() and position()>1]"):
    print(name.text)
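To see what that XPath actually matches, here is a minimal sketch against a made-up table fragment (hypothetical data; the real table has more rows and columns). Within each row, `position()>1` skips the leading cell (the row number) and `text()` drops empty cells:

```python
from lxml import html

# A made-up fragment mimicking the structure of the constituents table.
fragment = """
<table class="greygeneraltxt">
  <tr><td>1</td><td>00001</td><td>CKH Holdings</td></tr>
  <tr><td>2</td><td>00002</td><td>CLP Holdings</td></tr>
</table>
"""
source = html.fromstring(fragment)

# position() is evaluated per row, so the first td of every tr is skipped.
names = [td.text for td in source.xpath(
    "//table[@class='greygeneraltxt']//td[text() and position()>1]")]
print(names)
```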
