asyncio/aiohttp not returning response

Problem description

I am trying to scrape some data from https://www.officialcharts.com/ by parallelising web requests using asyncio/aiohttp. I implemented the code given at the link here.

I followed two different procedures. The first one goes like this.

from bs4 import BeautifulSoup
from urllib.request import urlopen
from selenium import webdriver
import time
import pandas as pd
import numpy as np
import re
import json

import requests
from datetime import date, timedelta
from IPython.display import clear_output
import memory_profiler

import spotipy
import spotipy.util as util
from more_itertools import unique_everseen

# Build a list of 2500 week-ending dates (YYYYMMDD): start from the first
# Sunday on or after 1970-01-01 and step forward one week at a time.
weeks = []
d = date(1970, 1, 1)
d += timedelta(days = 6 - d.weekday())

for i in range(2500):
    weeks.append(d.strftime('%Y%m%d'))
    d += timedelta(days = 7)

import asyncio
from aiohttp import ClientSession
import nest_asyncio
# nest_asyncio allows run_until_complete to be called inside an already
# running event loop (e.g. in a Jupyter notebook).
nest_asyncio.apply()

result = []
async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def run(r):  
    tasks = []

    # Fetch all responses within one Client session,
    # keep connection alive for all requests.
    async with ClientSession() as session:
        for i in range(r):
            url = 'https://www.officialcharts.com/charts/singles-chart/' + weeks[i] + '/'
            task = asyncio.ensure_future(fetch(url, session))
            tasks.append(task)

        responses = await asyncio.gather(*tasks)
        result.append(responses)


loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(5))
loop.run_until_complete(future)

print('Done')
print(result[0][0] == None)

The problem with the above code is that it fails when I make more than 1000 simultaneous requests.
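
If it helps to see what actually goes wrong at high concurrency, one option (a diagnostic sketch of my own, not from the linked post; the `run_debug` helper is an assumption layered on the same `fetch` pattern) is to gather with `return_exceptions=True`, so failed requests come back as exception objects in the result list instead of aborting the whole batch:

import asyncio
from aiohttp import ClientSession

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def run_debug(urls):
    # return_exceptions=True keeps one failed request from cancelling the
    # rest; failures show up as exception objects that can be counted.
    async with ClientSession() as session:
        tasks = [asyncio.ensure_future(fetch(url, session)) for url in urls]
        responses = await asyncio.gather(*tasks, return_exceptions=True)
    failed = [r for r in responses if isinstance(r, Exception)]
    print('{} of {} requests failed'.format(len(failed), len(responses)))
    return responses

Running it with `loop.run_until_complete(run_debug(urls))` prints how many of the requests raised, which makes it easier to tell a connection limit from a server-side rejection.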

The author of the post implemented a different procedure to address this issue, claiming it can handle as many as 10K requests. I followed his second procedure, and here is my code for that.

import random
import asyncio
from aiohttp import ClientSession
import nest_asyncio
nest_asyncio.apply()

result = []
async def fetch(url, session):
    async with session.get(url) as response:
        delay = response.headers.get("DELAY")
        date = response.headers.get("DATE")
        print("{}:{} with delay {}".format(date, response.url, delay))
        return await response.read()


async def bound_fetch(sem, url, session):
    # Getter function with semaphore.
    async with sem:
        await fetch(url, session)


async def run(r):
    tasks = []
    # create instance of Semaphore
    sem = asyncio.Semaphore(1000)

    # Create client session that will ensure we dont open new connection
    # per each request.
    async with ClientSession() as session:
        for i in range(r):         
            url = 'https://www.officialcharts.com/charts/singles-chart/' + weeks[i] + '/'
            task = asyncio.ensure_future(bound_fetch(sem, url, session))
            tasks.append(task)

        responses = await asyncio.gather(*tasks)
        result.append(responses)

number = 5

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(number))
loop.run_until_complete(future)

print('Done')
print(result[0][0] == None)

For some reason, this doesn't return any responses.

PS: I am not from a CS background and just program for fun. I have no clue what's going on inside the asyncio code.

Answer

Try using the latest version.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

from aiohttp import ClientSession, client_exceptions
from asyncio import Semaphore, ensure_future, gather, run
from json import dumps, loads

limit = 10
http_ok = [200]


async def scrape(url_list):
    tasks = list()
    sem = Semaphore(limit)

    async with ClientSession() as session:
        for url in url_list:
            task = ensure_future(scrape_bounded(url, sem, session))
            tasks.append(task)

        result = await gather(*tasks)

    return result


async def scrape_bounded(url, sem, session):
    async with sem:
        return await scrape_one(url, session)


async def scrape_one(url, session):
    try:
        async with session.get(url) as response:
            content = await response.read()
    except client_exceptions.ClientConnectorError:
        print('Scraping %s failed due to the connection problem' % url)
        return False

    if response.status not in http_ok:
        print('Scraping %s failed due to the return code %s' % (url, response.status))
        return False

    content = loads(content.decode('UTF-8'))

    return content

if __name__ == '__main__':
    urls = ['http://demin.co/echo1/', 'http://demin.co/echo2/']
    res = run(scrape(urls))

    print(dumps(res, indent=4))

This is a template from a real project and it works as expected.
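
One detail worth pointing out (my reading of the difference between the two versions, not stated explicitly in the answer): `scrape_bounded` above returns the value produced by `scrape_one`, whereas `bound_fetch` in the question awaits `fetch` but never returns its result, so `asyncio.gather` collects `None` for every task. Adding the missing return to the question's second snippet should be enough to get responses back:

async def bound_fetch(sem, url, session):
    # Getter function with semaphore; return the fetched body so that
    # asyncio.gather collects the actual responses instead of None.
    async with sem:
        return await fetch(url, session)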

You can find the source code here.
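
Note that the template decodes the body with `json.loads`, because the two demin.co echo endpoints return JSON. The singles-chart pages are HTML, so if you point the template at the question's URLs you would parse the bytes with BeautifulSoup instead of decoding JSON; a minimal sketch of that substitution (assuming the rest of the template stays the same):

from aiohttp import client_exceptions
from bs4 import BeautifulSoup

http_ok = [200]

async def scrape_one(url, session):
    # Same bounded fetch as in the template, but the chart pages are HTML,
    # so hand the decoded body to BeautifulSoup instead of json.loads.
    try:
        async with session.get(url) as response:
            content = await response.read()
    except client_exceptions.ClientConnectorError:
        print('Scraping %s failed due to the connection problem' % url)
        return False

    if response.status not in http_ok:
        print('Scraping %s failed due to the return code %s' % (url, response.status))
        return False

    return BeautifulSoup(content.decode('utf-8'), 'html.parser')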
