How to scrape multiple webpages without overwriting the results?


Problem description

I'm new to scraping and am trying to scrape multiple webpages from Transfermarkt without overwriting the previous one.

I know this question has been asked before, but I can't get it to work for this case.

from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

for url in urls:
    r = requests.get(url,  headers = headers)
    soup = bs(r.content, 'html.parser')


    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
print(df)

df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv')

It would also be helpful to be able to differentiate between the webpages once scraped.

Any help would be appreciated.

Answer

Your code above scrapes data for each URL, parses it without putting it in a dataframe, and then moves on to the next URL. Since your call to pd.DataFrame() occurs outside the loop, you are constructing a dataframe of page data from the very last URL in urls.
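The effect is the same as in any loop that rebinds a name on every iteration: only the last pass's value survives. A minimal sketch with placeholder data (not the real scraped lists) shows the pattern:

```python
pages = ['page-1', 'page-2', 'page-3']

for page in pages:
    # Rebound on every pass, just like position_number, name, etc. above.
    names = [f'{page}-player-{i}' for i in range(2)]

# After the loop, only the last page's data is left.
print(names)  # ['page-3-player-0', 'page-3-player-1']
```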

You need to create a dataframe outside of your for-loop, and then append incoming data for each URL to this dataframe.

from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

#### Add this before for-loop. ####
# Create empty dataframe with expected column names.
df_full = pd.DataFrame(columns = df_headers)

for url in urls:
    r = requests.get(url,  headers = headers)
    soup = bs(r.content, 'html.parser')


    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]


    #### Add this to for-loop. ####

    # Create a dataframe for page data.
    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    # Add page URL to index of page data.
    df.index = [url] * len(df)

    # Append page data to full data (DataFrame.append was removed in pandas 2.0,
    # so use pd.concat instead).
    df_full = pd.concat([df_full, df])

print(df_full)
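Because each page's URL was written into the index, the rows from a single squad can be pulled back out with `DataFrame.loc`, and the combined table can be written to one CSV. A small sketch with toy data standing in for `df_full` (the URLs and filename here are just examples):

```python
import pandas as pd

# Toy stand-in for df_full: two rows per URL, with the source URL in the index.
urls = ['https://example.com/team-a', 'https://example.com/team-b']
df_full = pd.DataFrame(
    {'name': ['p1', 'p2', 'p3', 'p4']},
    index=[urls[0], urls[0], urls[1], urls[1]],
)

# Select only the rows scraped from the first URL.
team_a = df_full.loc[urls[0]]
print(len(team_a))  # 2

# Write everything to one file; the labeled index column keeps the source URL.
df_full.to_csv('squads.csv', index_label='source_url')
```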

