HTTP Streaming with Apache mod_wsgi

Problem Description


I've got an Ubuntu server where I am running multiple web apps. All of them are hosted by Apache using named VirtualHosts. One of them is a Flask app, which runs via mod_wsgi. This app serves continuous, unlimited HTTP streams.

Does this eventually block my app/server/Apache workers if enough clients connect to the streaming endpoint? And if so, are there alternatives? Other non-blocking WSGI servers that play nicely with VirtualHosts, a different HTTP streaming paradigm, or some magic Apache mod_wsgi settings?

The core of it looks like:

import time

from flask import Flask, Response

app = Flask(__name__)

@app.route('/stream')
def get_stream():
    def endless():
        # Generator that never terminates: one chunk per second, per client.
        while True:
            yield get_stuff_from_redis()  # asker's helper that reads from Redis
            time.sleep(1)

    return Response(endless(), mimetype='application/json')

Solution

If the clients never disconnect, yes, you will eventually run out of processes/threads to handle more requests.
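
To see why, here is a minimal, self-contained sketch (plain Python threads standing in for Apache/mod_wsgi workers, nothing to do with mod_wsgi itself): once every worker in a fixed-size pool is parked inside a never-ending response, additional requests only queue up and are never served.

import time
from concurrent.futures import ThreadPoolExecutor

def endless_request(client_id):
    # Stand-in for one client holding the /stream response open forever.
    while True:
        time.sleep(1)

pool = ThreadPoolExecutor(max_workers=4)  # fixed worker pool, like mod_wsgi threads
futures = [pool.submit(endless_request, i) for i in range(5)]

time.sleep(2)
# The first four "requests" occupy every worker; the fifth never starts.
print([f.running() for f in futures])  # [True, True, True, True, False]
# (The script never exits on its own: the workers are stuck, which is the point.)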

You are more than likely better off using an async framework such as Tornado or Twisted for this specific type of application. Doing async programming can be tricky if you aren't used to that concept.
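
As a rough sketch of what that could look like in Tornado (the /stream path and the get_stuff_from_redis() helper come from the question and are assumed to exist; the port and handler names are arbitrary, and a production version would also want a non-blocking Redis client so the event loop never stalls):

import asyncio

import tornado.ioloop
import tornado.web
from tornado.iostream import StreamClosedError

class StreamHandler(tornado.web.RequestHandler):
    async def get(self):
        self.set_header("Content-Type", "application/json")
        try:
            while True:
                self.write(get_stuff_from_redis())  # asker's helper, assumed available
                await self.flush()                  # push the chunk to the client now
                await asyncio.sleep(1)              # yields the event loop, blocks no thread
        except StreamClosedError:
            pass  # client disconnected; stop producing

def make_app():
    return tornado.web.Application([(r"/stream", StreamHandler)])

if __name__ == "__main__":
    make_app().listen(8888)
    tornado.ioloop.IOLoop.current().start()

Because each waiting client costs only a coroutine rather than a worker process or thread, a single process can keep thousands of streams open.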

Some people use coroutine systems such as gevent/eventlet, but they also have their own problems you have to watch out for.
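
For completeness, a hedged sketch of the gevent route: the Flask app from the question is kept as-is and served by gevent's WSGI server, so each connected client occupies a cheap greenlet instead of an Apache worker. The module name myapp is hypothetical, and the usual caveats apply: monkey-patching has to happen before anything else imports time or socket, and C-level blocking calls (for example inside some Redis drivers) can still stall the whole process.

from gevent import monkey
monkey.patch_all()  # must run first so time.sleep and sockets become cooperative

from gevent.pywsgi import WSGIServer

from myapp import app  # hypothetical module containing the Flask app above

# Every client gets its own greenlet, so long-lived streams don't pin down
# a fixed pool of OS threads/processes the way mod_wsgi workers do.
WSGIServer(("0.0.0.0", 8000), app).serve_forever()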
