How to scrape multiple HTML pages in parallel with BeautifulSoup in Python?

Problem description

I'm making a web-scraping app in Python with the Django web framework. I need to scrape multiple queries using the BeautifulSoup library. Here is a snapshot of the code I have written:

import requests
from bs4 import BeautifulSoup

for url in websites:
    r = requests.get(url)                      # each page is fetched one after another
    soup = BeautifulSoup(r.content, 'html.parser')
    links = soup.find_all("a", {"class": "dev-link"})

Right now the pages are scraped sequentially, and I want to run the scraping in parallel. I don't know much about threading in Python. Can someone tell me how I can scrape in a parallel manner? Any help would be appreciated.

Recommended answer

If you want to use multithreading:

import threading
import requests
from bs4 import BeautifulSoup

class Scraper(threading.Thread):
    def __init__(self, threadId, name, url):
        threading.Thread.__init__(self)
        self.name = name
        self.id = threadId
        self.url = url
        self.links = []          # results are stored here; threading ignores run()'s return value

    def run(self):
        # Each thread downloads and parses its own URL.
        r = requests.get(self.url)
        soup = BeautifulSoup(r.content, 'html.parser')
        self.links = soup.find_all("a")

# list the websites in the list below
websites = []

threads = []
for i, url in enumerate(websites, start=1):
    thread = Scraper(i, "thread" + str(i), url)
    thread.start()               # start() runs run() in a new thread; calling run() directly stays sequential
    threads.append(thread)

for thread in threads:
    thread.join()                # wait for every download to finish
    print(thread.links)
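
Not part of the original answer, but as a minimal sketch of the same idea with less boilerplate, the standard library's concurrent.futures.ThreadPoolExecutor can manage the threads for you. The fetch_links helper name and the max_workers value below are illustrative choices, not anything from the question:

from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup

def fetch_links(url):
    # Download one page and return its <a class="dev-link"> tags,
    # matching what the question's sequential loop collects.
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    return soup.find_all("a", {"class": "dev-link"})

websites = []  # list the websites here

# executor.map() runs fetch_links on the pool's worker threads and
# yields the results in the same order as the input URLs.
with ThreadPoolExecutor(max_workers=10) as executor:
    for url, links in zip(websites, executor.map(fetch_links, websites)):
        print(url, len(links))

Because the work here is network-bound, threads (or a thread pool) give a real speed-up despite the GIL.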

This might help.
