Performance of: asynchronous request handler with blocking tasks handled by worker pool


Problem description


How is the performance of this script: http://tornadogists.org/2185380/ (copied below)?

from time import sleep
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application, asynchronous, RequestHandler
from multiprocessing.pool import ThreadPool

_workers = ThreadPool(10)

def run_background(func, callback, args=(), kwds={}):
    def _callback(result):
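        # IOLoop.add_callback is documented as the one IOLoop method that is
        # safe to call from another thread, so the pool worker uses it here to
        # hand the result back to the IOLoop thread.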
        IOLoop.instance().add_callback(lambda: callback(result))
    _workers.apply_async(func, args, kwds, _callback)

# blocking task like querying to MySQL
def blocking_task(n):
    sleep(n)
    return n

class Handler(RequestHandler):
    @asynchronous
    def get(self):
        run_background(blocking_task, self.on_complete, (10,))

    def on_complete(self, res):
        self.write("Test {0}<br/>".format(res))
        self.finish()

HTTPServer(Application([("/", Handler)], debug=True)).listen(8888)
IOLoop.instance().start()

  1. My application will have way over 1,000 req/sec.
  2. Each request will last from 2-30 seconds, averaging about 6 seconds
    • Simply averaging sleep(6)
  3. Block IO by using something like redis BLPOP or Queue.get_nowait() (a sketch of how this plugs into the pool follows this list)
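
For concreteness, here is a minimal sketch of how point 3 might be dispatched through the run_background helper from the script above. It assumes the redis-py client (StrictRedis and its blpop method); the host, key name, and handler class are illustrative and not part of the original post.

import redis
from tornado.web import RequestHandler, asynchronous

# Illustrative client and key; run_background is the helper defined in the
# script above and is assumed to be in scope.
_redis = redis.StrictRedis(host="localhost", port=6379)

def blocking_pop(key):
    # BLPOP blocks the pool's worker thread (not the IOLoop) until an item
    # arrives or the timeout expires; it returns (key, value) or None.
    return _redis.blpop(key, timeout=30)

class QueueHandler(RequestHandler):
    @asynchronous
    def get(self):
        run_background(blocking_pop, self.on_complete, ("jobs",))

    def on_complete(self, res):
        self.write("Got {0}<br/>".format(res))
        self.finish()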

Solution

The overall pattern is fine, with the caveat that thanks to the GIL, your thread pool will only be able to use a single CPU, and you'll need to use multiple processes to make full use of the available hardware.
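
A minimal sketch of that multi-process setup, using Tornado's pre-fork support (bind() plus start(0), which forks one child per CPU core); Handler here is the class from the question's script:

from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.web import Application

# Handler is the request handler defined in the question's script.
app = Application([("/", Handler)])
server = HTTPServer(app)
server.bind(8888)
server.start(0)  # 0 means fork one worker process per CPU core
IOLoop.instance().start()

Note that debug=True is dropped: Tornado's autoreload (enabled by debug mode) cannot be combined with forking multiple processes. Each child also needs its own ThreadPool, so the pool should be created after start(0) rather than at import time, because threads started before the fork do not survive it.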

Taking a closer look at the numbers, 10 threads is way too small if your requests are really going to average 6 seconds each. You've got 6,000 seconds' worth of work coming in every second (1,000 requests/sec × 6 seconds each), so you need a total of at least 6,000 threads across all your processes (and that's assuming the 6 seconds is really just blocking on external events and the CPU cost in the Python process is negligible). I'm not sure how many threads a modern system can handle, but 6,000 Python threads doesn't sound like a great idea. If you've really got 6 seconds of blocking per request (and thousands of requests/sec), it sounds like it would be worthwhile to convert these blocking functions to be asynchronous.
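
As an illustration of that last suggestion, here is a minimal sketch of the same handler with the blocking sleep replaced by a non-blocking coroutine wait. It assumes a Tornado version that provides gen.coroutine and gen.sleep (4.1 or later); a real port would substitute an asynchronous Redis client for the sleep.

from tornado import gen
from tornado.web import RequestHandler

class AsyncHandler(RequestHandler):
    @gen.coroutine
    def get(self):
        # Yields control back to the IOLoop instead of tying up a worker
        # thread; the handler finishes automatically when the coroutine ends.
        yield gen.sleep(6)
        self.write("Test 6<br/>")

With coroutine handlers there is no @asynchronous decorator and no explicit self.finish(); the request finishes when the coroutine returns, and no pool thread is consumed while waiting.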
