Sharing data between multiple Tornado instances


Question

I have an nginx server proxying requests to a few Tornado instances. Each Tornado instance is based on the long-polling chat demo that ships with Tornado. The script keeps an array of callbacks, which are later used to dispatch messages back to the clients.

The problem is that with multiple Tornado instances, nginx uses a round-robin strategy. Since the callbacks are stored per instance (not maintained centrally), a request lands on one of the instances depending on when it is made. As a result, when data has to be pushed, it only reaches the callbacks stored in that same Tornado instance.
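For context, here is a minimal sketch of that per-instance callback pattern, modeled loosely on Tornado's long-polling chat demo; the handler names, routes, and use of futures are illustrative assumptions, not the asker's actual code:

```python
# Minimal sketch of the per-instance callback pattern (illustrative names,
# modeled on Tornado's long-polling chat demo, not the asker's actual code).
import asyncio

import tornado.web

waiters = []  # futures for clients long-polling against THIS process only


class PollHandler(tornado.web.RequestHandler):
    async def get(self):
        # Park the request until a message is pushed to this instance.
        future = asyncio.get_running_loop().create_future()
        waiters.append(future)
        message = await future
        self.write({"message": message})


class PushHandler(tornado.web.RequestHandler):
    def post(self):
        message = self.get_argument("message")
        # Only clients whose poll request was routed to this instance
        # are notified; waiters held by other processes never see it.
        for future in waiters:
            if not future.done():
                future.set_result(message)
        waiters.clear()


async def main():
    app = tornado.web.Application([
        (r"/poll", PollHandler),
        (r"/push", PushHandler),
    ])
    app.listen(8888)
    await asyncio.Event().wait()


if __name__ == "__main__":
    asyncio.run(main())
```

With nginx round-robining across several such processes, a POST to /push on one process cannot reach futures parked in another, which is exactly the problem described above.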

Is there a standard practice for sharing this data between multiple Tornado instances? I was thinking of using memcached, but then iterating over all the keys in the store wouldn't be possible (although that's not something I'd need all the time). I just wanted to find out whether there is a standard practice for sharing data between multiple Python processes. I also read about mmap, but I wasn't sure how it would work for storing callbacks (which are Python methods).

Answer

There is no ready-made recipe. You can use mmap, a message broker such as RabbitMQ, or a simple NoSQL store such as Redis. In your case I would probably try ZeroMQ.
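As one concrete way to apply that advice, here is a hedged sketch using Redis pub/sub; the "chat" channel name, the redis-py asyncio client, and the handler names are assumptions for illustration, since the answer only names the options. Each Tornado process subscribes to a shared channel and fans incoming messages out to its own local waiters, so a push received by any instance reaches clients parked on every instance.

```python
# Hedged sketch: Redis pub/sub as the shared bus between Tornado processes.
# Channel name, client library, and handler names are assumptions.
import asyncio

import redis.asyncio as aioredis
import tornado.web

waiters = []  # local long-poll futures, exactly as in the sketch above
redis_client = aioredis.Redis(host="localhost", port=6379)


async def redis_listener():
    # Each instance runs one listener; a publish from ANY instance reaches
    # every subscriber, which then notifies its own local waiters.
    pubsub = redis_client.pubsub()
    await pubsub.subscribe("chat")
    async for msg in pubsub.listen():
        if msg["type"] != "message":
            continue
        text = msg["data"].decode()
        for future in waiters:
            if not future.done():
                future.set_result(text)
        waiters.clear()


class PollHandler(tornado.web.RequestHandler):
    async def get(self):
        future = asyncio.get_running_loop().create_future()
        waiters.append(future)
        self.write({"message": await future})


class PushHandler(tornado.web.RequestHandler):
    async def post(self):
        # Publish instead of dispatching locally; the listener on every
        # instance (including this one) performs the local fan-out.
        await redis_client.publish("chat", self.get_argument("message"))


async def main():
    app = tornado.web.Application([
        (r"/poll", PollHandler),
        (r"/push", PushHandler),
    ])
    app.listen(8888)
    listener = asyncio.create_task(redis_listener())  # keep a reference
    await asyncio.Event().wait()


if __name__ == "__main__":
    asyncio.run(main())
```

The same shape maps onto ZeroMQ PUB/SUB sockets or a RabbitMQ fanout exchange: the callbacks stay process-local, and only the message itself crosses process boundaries.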

