How to use an async for loop to iterate over a list?
Question
So I need to call an async function for all items in a list. This could be a list of URLs and an async function using aiohttp that gets a response back from every URL. Now obviously I cannot do the following:
async for url in ['www.google.com', 'www.youtube.com', 'www.aol.com']:
I can use a normal for loop, but then my code will act synchronously and I lose the benefits and speed of having an async response fetching function.
Is there any way I can convert a list such that the above works? I just need to change the list's __iter__() to an __aiter__() method, right? Can this be achieved by subclassing a list? Maybe by encapsulating it in a class?
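As an aside on the question itself, a wrapper class with `__aiter__`/`__anext__` does make the `async for` syntax legal, but it does not by itself add any concurrency, since the items are still yielded one at a time. A minimal illustrative sketch (the `AsyncListIterator` name is made up for this example):

```python
import asyncio

class AsyncListIterator:
    """Wrap a plain list so it supports `async for`.

    Note: this only satisfies the async-iteration protocol; items are
    still produced sequentially, so no concurrency is gained.
    """
    def __init__(self, items):
        self._items = items

    def __aiter__(self):
        self._index = 0
        return self

    async def __anext__(self):
        if self._index >= len(self._items):
            raise StopAsyncIteration
        item = self._items[self._index]
        self._index += 1
        return item

async def main():
    collected = []
    async for url in AsyncListIterator(['www.google.com', 'www.youtube.com']):
        collected.append(url)
    return collected

print(asyncio.run(main()))
```

To actually run the fetches concurrently, you need one of the approaches in the answer below, which schedule the coroutines together rather than awaiting them one by one.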
Answer
If you want to wait for all the results at once, you can use asyncio.gather:
results = await asyncio.gather(*map(fetch, urls))
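A runnable sketch of the gather approach, with a stand-in `fetch` coroutine in place of a real aiohttp request (the sleep and return value are assumptions for illustration):

```python
import asyncio

async def fetch(url):
    # Stand-in for a real aiohttp request: yield control, then echo the URL.
    await asyncio.sleep(0)
    return f"response from {url}"

async def main():
    urls = ['www.google.com', 'www.youtube.com', 'www.aol.com']
    # Note the * -- gather takes each coroutine as a separate positional
    # argument, so the map must be unpacked.
    return await asyncio.gather(*map(fetch, urls))

print(asyncio.run(main()))
```

gather runs the coroutines concurrently and returns their results in the same order as the input list, regardless of which finishes first.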
If you don't mind having an external dependency, you can use aiostream.stream.map:
from aiostream import stream, pipe

async def fetch_many(urls):
    xs = stream.iterate(urls) | pipe.map(fetch, ordered=True, task_limit=10)
    async for result in xs:
        print(result)
You can control the number of fetch coroutines running concurrently using the task_limit argument, and choose whether to get the results in order or as soon as possible.
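Without the external dependency, the "as soon as possible" behavior can also be sketched with the standard library's asyncio.as_completed, which yields awaitables in completion order rather than submission order (the `fetch` stand-in here sleeps proportionally to URL length so that completion order differs from input order):

```python
import asyncio

async def fetch(url):
    # Stand-in fetch: longer URLs "take longer", so completion order
    # differs from submission order.
    await asyncio.sleep(len(url) / 100)
    return url

async def main():
    urls = ['www.youtube.com', 'www.aol.com', 'www.google.com']
    done = []
    # as_completed yields each result as its coroutine finishes.
    for future in asyncio.as_completed([fetch(u) for u in urls]):
        done.append(await future)
    return done

print(asyncio.run(main()))
```

Here the shortest URL finishes first, so 'www.aol.com' comes out ahead of the URLs submitted before it.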
See the demonstration for more details. Disclaimer: I am the project maintainer.