Scraping multiple paginated links with BeautifulSoup and Requests


Problem description



Python beginner here. I'm trying to scrape all products from one category on dabs.com. I've managed to scrape all products on a given page, but I'm having trouble iterating over all the paginated links.

Right now, I've tried to isolate all the pagination buttons with the span class="page-list", but even that isn't working. Ideally, I would like the crawler to keep clicking "next" until it has scraped all products on all pages. How can I do this?

I'd really appreciate any input.

from bs4 import BeautifulSoup
import requests

base_url = "http://www.dabs.com"
page_array = []

def get_pages():
    html = requests.get(base_url)
    soup = BeautifulSoup(html.content, "html.parser")

    # "class" is a reserved word in Python, so BeautifulSoup takes
    # this filter as the keyword argument "class_"
    page_list = soup.findAll('span', class_="page-list")
    pages = page_list[0].findAll('a')

    for page in pages:
        page_array.append(page.get('href'))

def scrape_page(page):
    # fetch the page that was passed in, not base_url
    html = requests.get(page)
    soup = BeautifulSoup(html.content, "html.parser")
    product_table = soup.findAll("table")
    products = product_table[0].findAll("tr")

    # skip the header row
    if len(products) > 0:
        products = products[1:]

    for row in products:
        cells = row.find_all('td')
        data = {
            'description': cells[0].get_text(),
            'price': cells[1].get_text()
        }
        print(data)

get_pages()
for page in page_array:
    scrape_page(base_url + page)

Solution

Their next-page button has a title of "Next", so you could do something like:

import requests
from bs4 import BeautifulSoup as bs

base_url = 'http://www.dabs.com'
# requests needs a full URL with a scheme, so build it from base_url
url = base_url + '/category/computing/11001/'

r = requests.get(url)

# pass an explicit parser so bs4 doesn't warn about picking a default
soup = bs(r.text, 'html.parser')
elm = soup.find('a', {'title': 'Next'})

next_page_link = base_url + elm['href']
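
To walk every page rather than stopping at the first one, the same lookup can drive a loop: scrape the current page, then follow the "Next" anchor until it disappears. A minimal sketch, assuming the table layout from the question (description in the first cell, price in the second) and that every page except the last carries an anchor with title="Next"; the category URL is just the example used above:

import requests
from bs4 import BeautifulSoup as bs

base_url = 'http://www.dabs.com'
url = base_url + '/category/computing/11001/'  # example category from above

while url:
    r = requests.get(url)
    soup = bs(r.text, 'html.parser')

    # Scrape this page's products, assuming the <table> layout from
    # the question: a header row, then one <tr> per product with the
    # description and price in the first two <td>s.
    table = soup.find('table')
    if table is not None:
        for row in table.find_all('tr')[1:]:
            cells = row.find_all('td')
            if len(cells) >= 2:
                print({'description': cells[0].get_text(strip=True),
                       'price': cells[1].get_text(strip=True)})

    # Follow the "Next" link; when there isn't one, we are on the
    # last page and the loop ends.
    elm = soup.find('a', {'title': 'Next'})
    url = base_url + elm['href'] if elm is not None else None

Looping on the presence of the "Next" anchor avoids parsing the page-list span entirely, and it keeps working even if the number of pages changes.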

Hope that helps.
