How to improve web scraping code speed with multithreading in Python


Question

Below is my code, which writes the output row by row (there are around 900 pages, with 10 rows and 5 data fields per row). Is there any way to make this faster? Currently it takes 80 minutes to export the data to CSV. Is there any way to make parallel requests to the pages and make this code more efficient?

import requests
from urllib3.exceptions import InsecureRequestWarning
import csv

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
from bs4 import BeautifulSoup as bs

f = csv.writer(open('GEM.csv', 'w', newline=''))
f.writerow(['Bidnumber', 'Items', 'Quantitiy', 'Department', 'Enddate'])


def scrap_bid_data():
    page_no = 1
    while page_no < 910:
        print('Hold on creating URL to fetch data...')
        url = 'https://bidplus.gem.gov.in/bidlists?bidlists&page_no=' + str(page_no)
        print('URL created: ' + url)
        scraped_data = requests.get(url, verify=False)
        soup_data = bs(scraped_data.text, 'lxml')
        extracted_data = soup_data.find('div', {'id': 'pagi_content'})
        if len(extracted_data) == 0:
            break
        else:
            for idx in range(len(extracted_data)):
                if (idx % 2 == 1):
                    bid_data = extracted_data.contents[idx].text.strip().split('\n')

                    bidno = bid_data[0].split(":")[-1]
                    items = bid_data[5].split(":")[-1]
                    qnty = int(bid_data[6].split(':')[1].strip())
                    dept = (bid_data[10] + bid_data[12].strip()).split(":")[-1]
                    edate = bid_data[17].split("End Date:")[-1]
                    f.writerow([bidno, items, qnty, dept, edate])

            page_no += 1
scrap_bid_data()

Answer

I've restructured your code a bit to ensure that your CSV file is closed. I also got the following error message:

ConnectionError: HTTPSConnectionPool(host='bidplus.gem.gov.in', port=443): Max retries exceeded with url: /bidlists?bidlists&page_no=1 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000012EB0DF1E80>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
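
If such connection errors keep occurring, one possible workaround (not part of this answer, just a minimal sketch assuming the standard requests/urllib3 retry APIs; the helper name make_retrying_session is illustrative) is to give the shared requests.Session automatic retries with backoff, and then use make_retrying_session() in place of requests.Session() in the code below:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_retrying_session():
    # Retry each request a few times with exponential backoff instead of
    # failing on the first dropped connection.
    session = requests.Session()
    retry = Retry(
        total=5,                # up to 5 attempts per request
        backoff_factor=1,       # wait roughly 1s, 2s, 4s, ... between attempts
        status_forcelist=[429, 500, 502, 503, 504],  # also retry these HTTP status codes
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('https://', adapter)
    session.mount('http://', adapter)
    return session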

You should experiment with the NUMBER_THREADS value:

import requests
from urllib3.exceptions import InsecureRequestWarning
import csv
import concurrent.futures
import functools

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
from bs4 import BeautifulSoup as bs


def download_page(session, page_no):
    url = 'https://bidplus.gem.gov.in/bidlists?bidlists&page_no=' + str(page_no)
    print('URL created: ' + url)
    resp = session.get(url, verify=False)
    return resp.text


def scrap_bid_data():
    NUMBER_THREADS = 30 # number of concurrent download requests
    with open('GEM.csv', 'w', newline='') as out_file:
        f = csv.writer(out_file)
        f.writerow(['Bidnumber', 'Items', 'Quantitiy', 'Department', 'Enddate'])
        with requests.Session() as session:
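            # functools.partial binds the shared session, so each worker call only needs a page number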
            page_downloader = functools.partial(download_page, session)
            with concurrent.futures.ThreadPoolExecutor(max_workers=NUMBER_THREADS) as executor:
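                # executor.map yields results in submission order, so pages come back as 1, 2, 3, ...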
                pages = executor.map(page_downloader, range(1, 910))
                page_no = 0
                for page in pages:
                    page_no += 1
                    soup_data = bs(page, 'lxml')
                    extracted_data = soup_data.find('div', {'id': 'pagi_content'})
                    if extracted_data is None or len(extracted_data) == 0:
                        print('No data at page number', page_no)
                        print(page)
                        break
                    else:
                        for idx in range(len(extracted_data)):
                            if (idx % 2 == 1):
                                bid_data = extracted_data.contents[idx].text.strip().split('\n')

                                bidno = bid_data[0].split(":")[-1]
                                items = bid_data[5].split(":")[-1]
                                qnty = int(bid_data[6].split(':')[1].strip())
                                dept = (bid_data[10] + bid_data[12].strip()).split(":")[-1]
                                edate = bid_data[17].split("End Date:")[-1]
                                f.writerow([bidno, items, qnty, dept, edate])
scrap_bid_data()
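
Note that only the downloads run in the thread pool; the parsing and CSV writing still happen in the main thread, so the csv.writer is never shared between threads, and because executor.map yields the pages in the order the page numbers were submitted, the rows stay in page order. To pick a good NUMBER_THREADS value, you can simply time a full run for a few different values, for example by replacing the bare scrap_bid_data() call at the bottom with something like this (a small sketch, not part of the original answer):

import time

start = time.perf_counter()
scrap_bid_data()
print('Elapsed: {:.1f} s'.format(time.perf_counter() - start))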
