Fetching multiple urls with aiohttp in Python 3.5


Question

Since Python 3.5 introduced async with, the syntax recommended in the docs for aiohttp has changed. Now to get a single url they suggest:

import aiohttp
import asyncio

async def fetch(session, url):
    # aiohttp.Timeout was the pre-2.0 way to bound a request;
    # current aiohttp passes a ClientTimeout to the session instead.
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession(loop=loop) as session:
        html = loop.run_until_complete(
            fetch(session, 'http://python.org'))
        print(html)

How can I modify this to fetch a collection of urls instead of just one url?

In the old asyncio examples you would set up a list of tasks such as

tasks = [
    fetch(session, 'http://cnn.com'),
    fetch(session, 'http://google.com'),
    fetch(session, 'http://twitter.com')
]

I tried to combine a list like this with the approach above but failed.

Answer

For parallel execution you need an asyncio.Task.
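
A bare coroutine does nothing until it is awaited, so awaiting the fetches one by one would run them strictly in sequence; wrapping each one in a Task schedules it on the event loop immediately. A minimal sketch of the contrast, assuming fetch and session as defined in the full example below:

import asyncio

# Sequential: the second request only starts after the first one finishes.
async def fetch_all_sequential(session, urls):
    return [await fetch(session, url) for url in urls]

# Concurrent: every request is scheduled up front, then awaited together,
# so the total time is roughly that of the slowest single request.
async def fetch_all_concurrent(session, urls):
    tasks = [asyncio.create_task(fetch(session, url)) for url in urls]
    return await asyncio.gather(*tasks)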

I've converted your example to concurrent data fetching from several sources:

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        if response.status != 200:
            response.raise_for_status()
        return await response.text()

async def fetch_all(session, urls):
    # Wrap every coroutine in a Task so all requests start right away.
    tasks = []
    for url in urls:
        task = asyncio.create_task(fetch(session, url))
        tasks.append(task)
    # gather returns the results in the same order as urls.
    results = await asyncio.gather(*tasks)
    return results

async def main():
    urls = ['http://cnn.com',
            'http://google.com',
            'http://twitter.com']
    # A single session is shared by all requests and owns the connection pool.
    async with aiohttp.ClientSession() as session:
        htmls = await fetch_all(session, urls)
        print(htmls)

if __name__ == '__main__':
    asyncio.run(main())
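
Two caveats worth knowing: by default asyncio.gather raises the first exception from any task, so a single unreachable url costs you all the other results, and the example above drops the 10-second timeout from the question. A sketch of one way to add both back, assuming you want failures reported alongside successes (ClientTimeout and return_exceptions are standard aiohttp/asyncio features, but the arrangement here is just an illustration):

import aiohttp
import asyncio

async def fetch(session, url):
    async with session.get(url) as response:
        response.raise_for_status()
        return await response.text()

async def fetch_all(session, urls):
    tasks = [asyncio.create_task(fetch(session, url)) for url in urls]
    # return_exceptions=True stores each exception object in the result
    # list instead of raising the first one to the caller.
    return await asyncio.gather(*tasks, return_exceptions=True)

async def main():
    urls = ['http://cnn.com', 'http://google.com', 'http://twitter.com']
    # ClientTimeout replaces the old aiohttp.Timeout from the question;
    # total=10 bounds each whole request (connect + transfer) at 10 seconds.
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        results = await fetch_all(session, urls)
        for url, result in zip(urls, results):
            if isinstance(result, Exception):
                print(url, 'failed:', result)
            else:
                print(url, 'returned', len(result), 'characters')

if __name__ == '__main__':
    asyncio.run(main())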

