How to fix a page loop in python during webscraping?


Problem description


I am trying to loop through each page, but once it gets to the end of the pages it just skips over the needed lines. The pages vary by link, so I need a dynamic solution for the number of web pages. This is a working example, so the results are shown below when run. (Stack Overflow requires me to add more details.)

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from time import sleep
driver=webdriver.Chrome()
driver.maximize_window()
driver.get("https://www.oddsportal.com")
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.LINK_TEXT,"BASKETBALL"))).click()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.LINK_TEXT,"Europe"))).click()
WebDriverWait(driver,15).until(EC.element_to_be_clickable((By.LINK_TEXT,"Euroleague"))).click()
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.LINK_TEXT,"RESULTS"))).click()

allyears=WebDriverWait(driver,20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,"div.main-menu2.main-menu-gray >ul.main-filter a[href^='/basketball/europe/euroleague']")))
allelements=WebDriverWait(driver,15).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,"td.name.table-participant >a[href^='/basketball/europe/euroleague/']")))
max_page= 10
scores=[]
games=[]
# Get the text and href of each season link in a list.
alltext=[ele.text for ele in allyears]
allyearslink=[ele.get_attribute('href') for ele in allyears]
for link in allyearslink:
    driver.get(link)

    url = driver.current_url
    print(url)
    for j in range(1, max_page + 1):
        current_page = url + '#/page' + str(j)
        driver.get(current_page)
        print(current_page)
        for i in range(3):
            allelements = WebDriverWait(driver, 15).until(EC.visibility_of_all_elements_located(
                (By.CSS_SELECTOR, "td.name.table-participant >a[href^='/basketball/europe/euroleague']")))
            print(allelements[i].text)

            scores.append(allelements[i].text)
            games.append(allelements[i].text)

            driver.execute_script("arguments[0].click();", allelements[i])

            sleep(2)
            elem1 = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.LINK_TEXT, "AH"))).click()
            sleep(2)
            # .date
            date_ofGame = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".date")))
            print(date_ofGame.text)
            elem2 = driver.find_element_by_id("odds-data-table")
            scores.append(date_ofGame.text)
            scores.append(elem2.text)
            driver.back()
            sleep(2)
            driver.back()

Results:
https://www.oddsportal.com/basketball/europe/euroleague/results/
Lyon-Villeurbanne - Alba Berlin
Friday, 20 Dec 2019, 13:45
Valencia - Khimki M.
Thursday, 21 Nov 2019, 14:00
Olimpia Milano - Fenerbahce
Friday, 25 Oct 2019, 13:45
https://www.oddsportal.com/basketball/europe/euroleague-2018-2019/results/
https://www.oddsportal.com/basketball/europe/euroleague-2017-2018/results/
https://www.oddsportal.com/basketball/europe/euroleague-2016-2017/results/
https://www.oddsportal.com/basketball/europe/euroleague-2015-2016/results/
https://www.oddsportal.com/basketball/europe/euroleague-2014-2015/results/
https://www.oddsportal.com/basketball/europe/euroleague-2013-2014/results/
https://www.oddsportal.com/basketball/europe/euroleague-2012-2013/results/
https://www.oddsportal.com/basketball/europe/euroleague-2011-2012/results/
etc....

Desired results:

 https://www.oddsportal.com/basketball/europe/euroleague/results/
        Lyon-Villeurbanne - Alba Berlin
        Friday, 20 Dec 2019, 13:45
        Valencia - Khimki M.
        Thursday, 21 Nov 2019, 14:00
        Olimpia Milano - Fenerbahce
        Friday, 25 Oct 2019, 13:45
    https://www.oddsportal.com/basketball/europe/euroleague-2018-2019/results/
        Lyon-Villeurbanne - Alba Berlin
        Friday, 20 Dec 2019, 13:45
        Valencia - Khimki M.
        Thursday, 21 Nov 2019, 14:00
        Olimpia Milano - Fenerbahce
        Friday, 25 Oct 2019, 13:45
    https://www.oddsportal.com/basketball/europe/euroleague-2016-2017/results/
        Lyon-Villeurbanne - Alba Berlin
        Friday, 20 Dec 2019, 13:45
        Valencia - Khimki M.
        Thursday, 21 Nov 2019, 14:00
        Olimpia Milano - Fenerbahce
        Friday, 25 Oct 2019, 13:45

Recommended answer


You can get the desired output by replacing the loop as below.

import time  # needed for the time.sleep calls below

for link in allyearslink:
    driver.get(link)
    url = driver.current_url
    print(url)
    # click on the last page button
    driver.find_element_by_xpath("(//div[@id='pagination']//span)[last()]").click()
    time.sleep(3) # we can handle this better
    max_page = int(driver.find_element_by_class_name('active-page').text)

    # The fix: max_page is now determined dynamically from the pagination above
    for j in range(1, max_page + 1):
        current_page = url + '#/page/' + str(j)
        driver.get(current_page)

        for i in range(3):
            allelements = WebDriverWait(driver, 15).until(EC.visibility_of_all_elements_located(
                (By.CSS_SELECTOR, "td.name.table-participant >a[href^='/basketball/europe/euroleague']")))
            print(allelements[i].text)

            scores.append(allelements[i].text)
            games.append(allelements[i].text)

            driver.execute_script("arguments[0].click();", allelements[i])

            time.sleep(2)
            elem1 = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.LINK_TEXT, "AH"))).click()
            time.sleep(2)
            # .date
            date_ofGame = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, ".date")))
            print(date_ofGame.text)
            elem2 = driver.find_element_by_id("odds-data-table")
            scores.append(date_ofGame.text)
            scores.append(elem2.text)
            driver.back()
            time.sleep(2)
            driver.back()
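The key change is discovering max_page at runtime instead of hard-coding it: clicking the last pagination button loads the final page, and the active-page element then holds the highest page number. Building the per-page URLs from that count can be sketched in isolation like this (a minimal illustration, assuming the '#/page/<n>' fragment scheme used above; max_page is hard-coded here only for the sketch, in the scraper it is read from the active-page element):

```python
# Minimal sketch: build the per-season page URLs once max_page is known.
# max_page is fixed here only for illustration; the scraper reads it from
# the 'active-page' pagination element after jumping to the last page.
base = "https://www.oddsportal.com/basketball/europe/euroleague/results/"
max_page = 4
page_urls = [base + "#/page/" + str(n) for n in range(1, max_page + 1)]
```

Note that the fragment is '#/page/' with a trailing slash here, whereas the question's code built '#/page' without one, so the two would address different URLs.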

What was causing the error is the selector td.name.table-participant >a[href^='/basketball/europe/euroleague/'].
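The trailing slash in that href prefix matters: with the slash, the selector only matches current-season links, while dropping it also matches season-specific paths such as euroleague-2018-2019. The same prefix logic, shown in plain Python for illustration:

```python
# Illustrative only: how the href prefix with and without the trailing
# slash selects different sets of links (CSS [href^=...] is a prefix match).
hrefs = [
    "/basketball/europe/euroleague/results/",
    "/basketball/europe/euroleague-2018-2019/results/",
]
with_slash = [h for h in hrefs if h.startswith("/basketball/europe/euroleague/")]
without_slash = [h for h in hrefs if h.startswith("/basketball/europe/euroleague")]
# with_slash matches only the current season; without_slash matches both.
```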

Here is the sample output:
