Scraping and parsing multi-page (aspx) table


Problem description

I'm trying to scrape information on greyhound races. For example, I want to scrape http://www.gbgb.org.uk/RaceCard.aspx?dogName=Hardwick%20Serena. This page shows all results for the dog Hardwick Serena, but it is split over several pages.

Inspecting the page shows this under the 'next page' button:

<input type="submit" name="ctl00$ctl00$mainContent$cmscontent$DogRaceCard$lvDogRaceCard$ctl00$ctl03$ctl01$ctl12" value=" " title="Next Page" class="rgPageNext">. 

I was hoping for an HTML link that I could use for the next iteration of the scrape, but no luck. Further inspection of the network traffic shows that the browser sends a horribly long (hashed?) string for __VIEWSTATE, among others. Likely to protect the database?
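Those long strings live in hidden form fields, so they can be inspected directly with BeautifulSoup before worrying about the POST itself. A minimal sketch (the field names shown are the ASP.NET defaults; the exact set depends on the page):

    import requests
    from bs4 import BeautifulSoup

    url = 'http://www.gbgb.org.uk/RaceCard.aspx?dogName=Hardwick%20Serena'
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')

    # Hidden ASP.NET state fields (__VIEWSTATE, __EVENTVALIDATION, ...) that
    # must be echoed back in any POST that requests another page.
    hidden = {tag['name']: tag.get('value', '')
              for tag in soup.select('input[type=hidden]')}
    print({name: len(value) for name, value in hidden.items()})  # name -> value length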

I'm looking for a way to scrape all pages for one dog, either by iterating over all pages or by increasing the page length to show 100+ rows on page 1. The site is built on ASP.NET (.aspx).

I'm using Python 3.5 and BeautifulSoup.

Current code:

    import requests
    from   bs4 import BeautifulSoup

    url = 'http://www.gbgb.org.uk/RaceCard.aspx?dogName=Hardwick%20Serena'

    with requests.session() as s:
        s.headers['user-agent'] = 'Mozilla/5.0'

        r    = s.get(url)
        soup = BeautifulSoup(r.content, 'html5lib')

        target = 'ctl00$ctl00$mainContent$cmscontent$DogRaceCard$btnFilter_input'

        data = { tag['name']: tag['value'] 
            for tag in soup.select('input[name^=ctl00]') if tag.get('value')
        }
        state = { tag['name']: tag['value'] 
            for tag in soup.select('input[name^=__]')
        }

        data.update(state)

        numberpages = int(str(soup.find('div', 'rgWrap rgInfoPart')).split(' ')[-2].split('>')[1].split('<')[0])
        # for page in range(last_page + 1):

        for page in range(numberpages):
            data['__EVENTTARGET'] = target.format(page)
            #data['__VIEWSTATE'] = target.format(page)
            print(10)
            r    = s.post(url, data=data)
            soup = BeautifulSoup(r.content, 'html5lib')

            tables = soup.findChildren('table')
            my_table = tables[9]
            rows = my_table.findChildren(['th', 'tr'])

            tabel = [[]]
            for i in range(len(rows)):
                 cells = rows[i].findChildren('td')
                 tabel.append([])
                 for j in range(len(cells)):
                     value = cells[j].string
                     tabel[i].append(value)

            table = []
            for i in range(len(tabel)):
                if len(tabel[i]) == 16:
                    del tabel[i][-2:]
                    table.append(tabel[i])
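As an aside, the page-count line above string-splits the pager's raw HTML, which breaks as soon as the markup shifts. A sketch of the same idea against the parsed tree instead (assuming the pager text reads like "123 items in 7 pages", so the count is the second-to-last word):

    # Read the page count from the pager's text instead of its raw HTML.
    # Assumes text of the form "... in N pages"; falls back to a single page.
    pager = soup.find('div', class_='rgWrap rgInfoPart')
    numberpages = int(pager.get_text().split()[-2]) if pager else 1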

Recommended answer

In this case, for each page requested, a POST request is issued with the form-url-encoded parameters __EVENTTARGET and __VIEWSTATE:

  • __VIEWSTATE can easily be extracted from an input tag
  • __EVENTTARGET is different for each page; its value is passed to the __doPostBack javascript function by each page link, so you can extract it with a regex:

<a href="javascript:__doPostBack('ctl00$ctl00$mainContent$cmscontent$DogRaceCard$lvDogRaceCard$ctl00$ctl03$ctl01$ctl07','')">
    <span>2</span>
</a>
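In isolation, that extraction looks like this (a standalone sketch; the full script below pulls the href values from the pager div):

    import re

    href = ("javascript:__doPostBack('ctl00$ctl00$mainContent$cmscontent"
            "$DogRaceCard$lvDogRaceCard$ctl00$ctl03$ctl01$ctl07','')")
    # Capture the first __doPostBack argument: the control name the server
    # expects back in __EVENTTARGET.
    target = re.search(r"__doPostBack\('(.*?)',", href).group(1)
    print(target)  # ctl00$ctl00$mainContent$cmscontent$DogRaceCard$lvDogRaceCard$ctl00$ctl03$ctl01$ctl07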

Python script:

from bs4 import BeautifulSoup
import requests
import re

# extract data from page
def extract_data(soup):
    tables = soup.find_all("div", {"class":"race-card"})[0].find_all("tbody")

    item_list = [
        (
            t[0].text.strip(), #date
            t[1].text.strip(), #dist
            t[2].text.strip(), #TP
            t[3].text.strip(), #StmHCP
            t[4].text.strip(), #Fin
            t[5].text.strip(), #By
            t[6].text.strip(), #WinnerOr2nd
            t[7].text.strip(), #Venue
            t[8].text.strip(), #Remarks
            t[9].text.strip(), #WinTime
            t[10].text.strip(), #Going
            t[11].text.strip(), #SP
            t[12].text.strip(), #Class
            t[13].text.strip()  #CalcTm
        )
        for t in (t.find_all('td') for t in tables[1].find_all('tr'))
        if t
    ]
    print(item_list)

session = requests.Session()

url = 'http://www.gbgb.org.uk/RaceCard.aspx?dogName=Hardwick%20Serena'

response = session.get(url)
soup = BeautifulSoup(response.content, "html.parser")

# get view state value
view_state = soup.find_all("input", {"id":"__VIEWSTATE"})[0]["value"]

# get all event target values
event_target = soup.find_all("div", {"class":"rgNumPart"})[0]
event_target_list = [
    re.search(r"__doPostBack\('(.*)',", t["href"]).group(1)
    for t in event_target.find_all('a')
]

# extract data for the 1st page
extract_data(soup)

# extract data for each page except the first
for link in event_target_list[1:]:
    print("get page {0}".format(link))
    post_data = {
        '__EVENTTARGET': link,
        '__VIEWSTATE': view_state
    }
    response = session.post(url, data=post_data)
    soup = BeautifulSoup(response.content, "html.parser")
    extract_data(soup)
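One caveat worth adding (a general assumption about ASP.NET, not something the answer verifies for this site): the server usually returns a fresh __VIEWSTATE with every response, and some sites reject a reused one. If later pages come back empty, refresh the token at the end of each loop iteration:

    # Hypothetical hardening: pick up the fresh __VIEWSTATE from the page
    # just fetched so the next POST echoes the current token.
    vs = soup.find("input", {"id": "__VIEWSTATE"})
    if vs is not None:
        view_state = vs["value"]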
