SignalR scaling out with Azure EventHub

Question

I am looking for a high-frequency scaling solution for SignalR. I am wondering if I can do it with Azure EventHub. If I use EventHub as the backplane for SignalR messages, will it become a bottleneck for me?

I've checked this page, but there is nothing about EventHub, as it is fairly new.

Answer

I can't speak to the precise specifics of SignalR; however, you could in principle use EventHubs as a backplane, but you need to be aware of the limitations.

SignalR's backplane scaleout pattern assumes that all the servers will have access to all the messages and presumably process them all. This provides a fairly clear limit on what a single backplane can do on commodity hardware or in the cloud. In a typical cloud you might be able to sustain 100 MB/s of data throughput (a nice round number for a 1 Gb/s NIC), and at the upper end of commodity hardware (and Azure's HPC machines) 1000 MB/s (a 10 Gb/s NIC).

So the question becomes: can Azure EventHubs take you to this architectural limit on throughput?

The answer to that is simply yes. 100 or 1,000 partitions will give you sufficient write throughput, and sufficient read capacity for two servers.
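
As a minimal sketch of the write side, assuming the current Python SDK (azure-eventhub); the hub name and connection string below are placeholders, not values from the question:

```python
from azure.eventhub import EventData, EventHubProducerClient

# Hypothetical connection details: substitute your own namespace and hub.
CONN_STR = "<event-hubs-namespace-connection-string>"
HUB_NAME = "signalr-backplane"

producer = EventHubProducerClient.from_connection_string(
    CONN_STR, eventhub_name=HUB_NAME)

# With no partition key or partition id, the service spreads batches across
# partitions, so aggregate write throughput grows with the partition count.
with producer:
    batch = producer.create_batch()
    batch.add(EventData(b"stock tick payload"))
    producer.send_batch(batch)
```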

The next question is: if each server only needs to read 100 MB/s from your backplane, how many servers can read the data (i.e., if you're broadcasting 100 MB/s of stock ticks, where the data size doesn't increase with the number of servers)?

The answer here is: as many as you want, but there are some tricks.

EventHubs scale by partitioning the data stream. Each partition has a maximum read throughput of 2 MB/s, which is shared across all its readers. However, you can simply multiply the number of partitions to make up for the split (adding more than 32 requires talking to Microsoft). The design assumption of EventHubs (like Kafka and Kinesis) is that consumption will be split across machines, thereby avoiding the backplane limitation discussed earlier. Consumers that work together to read the stream form a Consumer Group (Azure appears to require a named CG even for a direct reader); in this backplane model there are no logical consumer groups, so the question is how to read the data.
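
To make the arithmetic concrete, here is a back-of-the-envelope sizing helper based on the 2 MB/s-per-partition figure above; the function is purely illustrative, not an API:

```python
import math

# Per the answer: each partition offers ~2 MB/s of read throughput,
# shared across all readers of that partition.
PER_PARTITION_EGRESS_MBPS = 2.0

def partitions_needed(servers: int, per_server_read_mbps: float) -> int:
    # Every server reads the full stream, so total egress scales with servers.
    total_egress_mbps = servers * per_server_read_mbps
    return math.ceil(total_egress_mbps / PER_PARTITION_EGRESS_MBPS)

print(partitions_needed(10, 100))  # -> 500, matching the example below
```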

The simplest solution is likely to use the high-level autobalancing Event Processor Host, with each server being its own Consumer Group with a fixed name. With only one server in each consumer group, each server will receive all the partitions (500 partitions for 10 servers each reading 100 MB/s, aka $11k/month + $0.028 per million events).
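
The Event Processor Host named above is the .NET host. As a rough sketch of the same per-server-consumer-group idea using the Python SDK's analogous EventHubConsumerClient (all names below are hypothetical): a client that has no competing readers in its consumer group ends up reading every partition on its own.

```python
from azure.eventhub import EventHubConsumerClient

CONN_STR = "<event-hubs-namespace-connection-string>"
HUB_NAME = "signalr-backplane"
CONSUMER_GROUP = "server-01"  # fixed name, one consumer group per server

def on_event(partition_context, event):
    # Every server sees every message; hand it to the local SignalR hub here.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group=CONSUMER_GROUP, eventhub_name=HUB_NAME)

# Being the only reader in its consumer group, this client receives from
# all partitions of the hub.
with client:
    client.receive(on_event=on_event, starting_position="@latest")
```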

This approach has one key limitation: you are limited to 20 consumer groups per Event Hub. So you can chain Event Hubs together, or make a tree with this approach, to get arbitrary numbers.
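
A link in such a chain can be as simple as a relay process that republishes everything from the root hub into a downstream hub, which then has its own fresh allowance of 20 consumer groups. A hypothetical sketch, with hub names and connection strings as placeholders:

```python
from azure.eventhub import (EventData, EventHubConsumerClient,
                            EventHubProducerClient)

root = EventHubConsumerClient.from_connection_string(
    "<root-conn-str>", consumer_group="relay", eventhub_name="root-hub")
leaf = EventHubProducerClient.from_connection_string(
    "<leaf-conn-str>", eventhub_name="leaf-hub")

def on_event(partition_context, event):
    # Republish each message into the downstream hub of the tree.
    batch = leaf.create_batch()
    batch.add(EventData(event.body_as_str()))
    leaf.send_batch(batch)

with root, leaf:
    root.receive(on_event=on_event, starting_position="@latest")
```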

The other option is to use direct clients which connect to specific partitions. A single partition in a consumer group can have 5 readers, thereby reducing the need for chaining hubs by a factor of 5, and thereby cutting the per-event cost by a factor of 5 (it doesn't reduce the throughput unit requirements).
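
In the Python SDK a direct reader is just a receive call pinned to one partition; up to five such readers can share a partition within one consumer group, per the limit above. Names below are again placeholders:

```python
from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    "<event-hubs-namespace-connection-string>",
    consumer_group="direct-readers",  # hypothetical named group
    eventhub_name="signalr-backplane")

def on_event(partition_context, event):
    print(event.body_as_str())

# Pin this reader to partition "0"; up to 5 of these can run per partition
# within the same consumer group.
with client:
    client.receive(on_event=on_event, partition_id="0",
                   starting_position="@latest")
```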

In summary, it shouldn't become a bottleneck before any backplane would become a bottleneck. But don't build something on a backplane if you expect it to go beyond 100 MB/s in traffic.

I didn't speak about latency; you'll need to test that yourself. But chances are you're not doing HFT in the cloud, and there's a reason realtime games are typically run in instances.
