asyncio and coroutines vs task queues


Question

I've been reading about the asyncio module in Python 3, and more broadly about coroutines in Python, and I can't see what makes asyncio such a great tool. I have the feeling that everything you can do with coroutines, you can do better by using task queues based on the multiprocessing module (Celery, for example). Are there use cases where coroutines are better than task queues?

Answer

Not a proper answer, but a list of hints that could not fit into a comment:

  • You mention the multiprocessing module (and let's consider threading too). Suppose you have to handle hundreds of sockets: can you spawn hundreds of processes or threads?
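To make that point concrete, here is a minimal sketch (the count of 500 and the `asyncio.sleep` stand-in for socket I/O are illustrative assumptions, not from the original answer): with asyncio, hundreds of concurrent waits are just coroutine objects multiplexed on one thread, not OS threads or processes.

```python
import asyncio

async def handle(i):
    # Stand-in for waiting on a socket: each "connection" just sleeps.
    await asyncio.sleep(0.1)
    return i

async def main():
    # 500 concurrent coroutines on a single thread -- far cheaper
    # than spawning 500 OS threads or processes.
    return await asyncio.gather(*(handle(i) for i in range(500)))

results = asyncio.run(main())
```

All 500 sleeps overlap, so the whole batch finishes in roughly 0.1 seconds rather than 50.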

Again, with threads and processes: how do you handle concurrent access to shared resources? What is the overhead of mechanisms like locking?
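A small illustration of why coroutines often need no locks at all (a sketch; the loop counts are arbitrary): the event loop only switches tasks at an explicit `await`, so a plain read-modify-write between awaits cannot be interleaved.

```python
import asyncio

counter = 0

async def increment(n):
    global counter
    for _ in range(n):
        # No lock needed: control can only pass to another coroutine
        # at an await, so this read-modify-write is effectively atomic.
        counter += 1
        await asyncio.sleep(0)  # voluntarily yield to the event loop

async def main():
    await asyncio.gather(*(increment(1000) for _ in range(10)))

asyncio.run(main())
# counter is now exactly 10000, deterministically
```

The equivalent program with 10 threads would need a `threading.Lock` (or rely on implementation details of the GIL) to guarantee the same result.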

Frameworks like Celery also add significant overhead. Can you use one, for example, to handle every single request on a high-traffic web server? By the way, in that scenario, who is responsible for handling the sockets and connections (Celery, by its nature, can't do that for you)?
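By contrast, here is a minimal sketch (the uppercase echo "protocol" is made up for illustration) of asyncio owning the sockets, the connections, and the per-request work in a single process, with no broker or worker pool in between:

```python
import asyncio

async def handle_client(reader, writer):
    # Each connection is a cheap coroutine; the event loop itself
    # owns the socket -- no broker or worker process in between.
    data = await reader.read(100)
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    async with server:
        # Exercise the server once from the same event loop.
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(b"ping")
        await writer.drain()
        reply = await reader.read()  # read until the server closes
        writer.close()
        await writer.wait_closed()
        return reply

reply = asyncio.run(main())
# reply == b"PING"
```

With Celery you would still need something else (a web server, a socket loop) in front of it just to accept the connections and enqueue the work.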

Be sure to read the rationale behind asyncio. That rationale (among other things) mentions a system call, writev() -- isn't that much more efficient than multiple write()s?
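For reference, Python exposes that scatter/gather system call directly as `os.writev()` (POSIX only); a small sketch using a pipe, with buffer contents chosen arbitrarily:

```python
import os

# writev() submits several buffers in a single system call
# (scatter/gather I/O), instead of one write() per buffer.
r, w = os.pipe()
buffers = [b"one ", b"syscall, ", b"three buffers\n"]
written = os.writev(w, buffers)  # total bytes written
data = os.read(r, written)
os.close(w)
os.close(r)
# data == b"one syscall, three buffers\n"
```

For many small buffers headed to the same socket, one writev() avoids the per-call overhead (and partial-write bookkeeping) of issuing a separate write() for each.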
