Scrape Multiple URLs using Beautiful Soup


Problem Description

I'm trying to extract specific classes from multiple URLs. The tags and classes stay the same across pages, but I need my Python program to scrape them all; I just want to input each link.

Here's a sample of my work:

from bs4 import BeautifulSoup
import requests
import pprint
import re
import pyperclip

url = input('insert URL here: ')
#scrape elements
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")

#print titles only
h1 = soup.find("h1", class_="class-headline")
print(h1.get_text())

This works for individual URLs but not for a batch. Thanks for helping me; I've learned a lot from this community.

Recommended Answer

Keep a list of URLs and iterate over it.

from bs4 import BeautifulSoup
import requests

# requests needs a scheme (http:// or https://) on each URL
urls = ['http://www.website1.com', 'http://www.website2.com', 'http://www.website3.com']  # add as many as you need
#scrape elements
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")

    #print titles only
    h1 = soup.find("h1", class_="class-headline")
    print(h1.get_text())
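One caveat with the loop above: if any page lacks the headline, `soup.find` returns `None` and `.get_text()` raises `AttributeError`, killing the whole batch. A small helper (a sketch; `extract_headline` is a hypothetical name, not part of the original answer) keeps the loop robust:

```python
from bs4 import BeautifulSoup

def extract_headline(html):
    """Return the text of <h1 class="class-headline">, or None if missing."""
    soup = BeautifulSoup(html, "html.parser")
    h1 = soup.find("h1", class_="class-headline")
    # strip=True trims surrounding whitespace from the extracted text
    return h1.get_text(strip=True) if h1 else None
```

Inside the loop you would then call `title = extract_headline(response.content)` and skip `None` results instead of crashing.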

If you want to prompt the user for each site's URL instead, it can be done this way:

from bs4 import BeautifulSoup
import requests

#scrape elements
msg = 'Enter URL, or type q and hit enter to exit: '
url = input(msg)
while url != 'q':
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")

    #print titles only
    h1 = soup.find("h1", class_="class-headline")
    print(h1.get_text())
    url = input(msg)  # reassign here, otherwise the loop never sees the next URL
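If you want to keep the results rather than just print them, the two ideas can be combined into one function that maps each URL to its headline (a sketch; `scrape_headlines` is a hypothetical name, and the timeout and error handling are additions beyond the original answer):

```python
from bs4 import BeautifulSoup
import requests

def scrape_headlines(urls):
    """Fetch each URL and map it to its headline text (None on failure)."""
    results = {}
    for url in urls:
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()  # treat 4xx/5xx as failures
        except requests.RequestException:
            results[url] = None
            continue
        soup = BeautifulSoup(response.content, "html.parser")
        h1 = soup.find("h1", class_="class-headline")
        results[url] = h1.get_text(strip=True) if h1 else None
    return results
```

A dict keyed by URL makes it easy to see afterwards which pages succeeded and which came back empty.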

