python - web scraping an ajax website using BeautifulSoup


Problem Description

I am trying to scrape an e-commerce site that uses AJAX calls to load its next pages.

I am able to scrape the data on page 1, but page 2 loads automatically through an AJAX call when I scroll page 1 to the bottom.

My code:

from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as ureq

my_url = 'http://www.shopclues.com/mobiles-smartphones.html'
page = ureq(my_url).read()
page_soup = soup(page, "html.parser")

# Parse the products on page 1
containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    name = container.h3.text
    price = container.find("span", {'class': 'p_price'}).text
    print("Name : " + name.replace(",", " "))
    print("Price : " + price)

# Request pages 2-6 directly from the AJAX endpoint (36 products per page)
for i in range(2, 7):
    my_url = ("http://www.shopclues.com/ajaxCall/moreProducts?catId=1431"
              "&filters=&pageType=c&brandName=&start=" + str(36 * (i - 1)) +
              "&columns=4&fl_cal=1&page=" + str(i))
    page = ureq(my_url).read()
    print(page)
    page_soup = soup(page, "html.parser")
    containers = page_soup.findAll("div", {"class": "column col3"})
    for container in containers:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)

I have printed the AJAX page read by ureq to check whether I am able to open the AJAX page, and the output was:

b' ' is all that print(page) outputs for each of the AJAX URLs.
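An empty body like this often means the server rejects requests that do not look like real browser XHR calls. As a minimal sketch (not part of the original question, and whether ShopClues actually checks these headers is an assumption), one could retry the same endpoint with browser-like headers such as a User-Agent and the X-Requested-With marker that jQuery-style AJAX requests send:

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

# Hypothetical retry of one AJAX page with browser-like headers.
# Whether these particular headers satisfy the server is an assumption.
ajax_url = ("http://www.shopclues.com/ajaxCall/moreProducts?catId=1431"
            "&filters=&pageType=c&brandName=&start=36&columns=4&fl_cal=1&page=2")
req = Request(ajax_url, headers={
    "User-Agent": "Mozilla/5.0",            # pretend to be a browser
    "X-Requested-With": "XMLHttpRequest",   # marker jQuery adds to AJAX calls
    "Referer": "http://www.shopclues.com/mobiles-smartphones.html",
})
page = urlopen(req).read()
print(len(page))  # a non-zero length would mean the endpoint now responds
page_soup = BeautifulSoup(page, "html.parser")
print(len(page_soup.findAll("div", {"class": "column col3"})))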

Please suggest a solution for scraping the remaining data.

Recommended Answer

from selenium import webdriver
from bs4 import BeautifulSoup as soup
import random
import time

chrome_options = webdriver.ChromeOptions()
# Disable browser notification pop-ups
prefs = {"profile.default_content_setting_values.notifications": 2}
chrome_options.add_experimental_option("prefs", prefs)

# A randomizer for the delay
seconds = 5 + (random.random() * 5)
# Create a new Chrome session
driver = webdriver.Chrome(chrome_options=chrome_options)
driver.implicitly_wait(30)
# driver.maximize_window()

# Navigate to the application home page
driver.get("http://www.shopclues.com/mobiles-smartphones.html")
time.sleep(seconds)
time.sleep(seconds)

# Increase the range for more phones
for i in range(1):
    element = driver.find_element_by_id("moreProduct")
    driver.execute_script("arguments[0].click();", element)
    time.sleep(seconds)
    time.sleep(seconds)

# Hand the fully loaded page over to BeautifulSoup
html = driver.page_source
page_soup = soup(html, "html.parser")
containers = page_soup.findAll("div", {"class": "column col3"})
for container in containers:
    # Error handling for tiles that lack a name or price
    try:
        name = container.h3.text
        price = container.find("span", {'class': 'p_price'}).text
        print("Name : " + name.replace(",", " "))
        print("Price : " + price)
    except AttributeError:
        continue
driver.quit()

I used selenium to load the website and click the button to load more results, then took the resulting HTML and put it into your parsing code.
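As a side note, the fixed time.sleep() delays above could be replaced with explicit waits. This is a minimal sketch, not part of the original answer, assuming the "moreProduct" element id and the "column col3" tile class used in the code above:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("http://www.shopclues.com/mobiles-smartphones.html")

# Block for up to 30 seconds until the "load more" button is clickable,
# instead of sleeping for a fixed interval.
wait = WebDriverWait(driver, 30)
element = wait.until(EC.element_to_be_clickable((By.ID, "moreProduct")))
driver.execute_script("arguments[0].click();", element)

# Wait until product tiles are present before grabbing the page source.
wait.until(EC.presence_of_all_elements_located(
    (By.CSS_SELECTOR, "div.column.col3")))
html = driver.page_source
driver.quit()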
