How to scale Orion GE?


Problem description

I have deployed an Orion instance in FILAB and I have configured the Cygnus injector in order to store information in Cosmos.

But... let us imagine a scenario in which the number of entities increases drastically. In this hypothetical scenario, one instance of Orion GE wouldn't be enough, so it would be necessary to deploy more instances.

What would be the scaling procedure, taking into account that the maximum quotas are:

  • VM instances: 5
  • VCPUs: 10
  • Hard disk: 100 GB
  • Memory: 10240 MB
  • Public IPs: 1

I understand that quotas may be subject to change, but what would be the free account limits?

What would be the hard disk limit in the Cosmos Head Node? (Theoretically a 5 GB quota.)

Would it be possible to deploy more instances of Orion Context Broker with a single public IP, or would it be necessary to ask for multiple public IPs? How?

To sum up, I request information about the scaling procedure for the proposed scenario and the free account limits (maximum quotas possible).

Thank you in advance. Kind regards.

Ramón.

Answer

Regarding Orion scalability, it could involve two dimensions:

  • Scalability in the number of entities. In this case, the scarce resource is the database, so you would need to scale the MongoDB layer. The usual procedure for scaling MongoDB is sharding; please check the official MongoDB documentation about it (see the sharding sketch after this list).

  • Scalability in the operation requests that manage such entities. In this case, you can use additional Orion nodes (each one running in a separate VM, plus an additional VM in front of them running load balancer software to distribute the load among the Orion nodes). Orion is a stateless process that can run in such a horizontal scaling configuration as long as: 1) you don't use ONTIMEINTERVAL subscriptions (see details in this post, and the UPDATE2 note below), and 2) you configure the -subCacheIval CLI parameter with a value small enough to ensure eventual consistency (basically, the value of -subCacheIval is the maximum time that may pass from the moment a subscription with entity patterns is created until it is consolidated in all the Orion nodes).
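As a rough illustration of the sharding route, here is a minimal sketch that enables sharding for Orion's backend database through a mongos router, using pymongo. The router address, the "orion" database name (Orion's default) and the hashed _id shard key are assumptions made for the example, not recommendations from the Orion documentation:

```python
# Minimal sketch: enable sharding for Orion's MongoDB backend.
# Assumes a sharded cluster already set up behind a mongos router at
# mongos.example:27017; the hashed _id shard key is illustrative only.
from pymongo import MongoClient

client = MongoClient("mongos.example", 27017)

# Allow the "orion" database (Orion's default) to be distributed across shards.
client.admin.command("enableSharding", "orion")

# Shard the entities collection, where Orion stores context entities.
client.admin.command("shardCollection", "orion.entities",
                     key={"_id": "hashed"})
```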

In any case, you would need additional VMs. You don't need additional IPs, as long as the system only needs one public IP (the one assigned to the load balancer) and all other communications can be done internally. Cloud quota information has already been answered by @flopez in another post.
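To make the single-public-IP layout concrete, below is a toy round-robin forwarder. It is only a sketch of what real load balancer software (HAProxy, nginx, etc.) does; the private backend addresses are made up, and only GET requests are handled:

```python
# Toy round-robin forwarder: one public IP in front of several Orion nodes.
# Real deployments would use dedicated load balancer software instead.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical private addresses of the Orion nodes (default port 1026).
ORION_NODES = itertools.cycle([
    "http://10.0.0.11:1026",
    "http://10.0.0.12:1026",
])

class RoundRobinProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(ORION_NODES)  # pick the next Orion node in turn
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/json"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Only this process needs the public IP; the Orion VMs stay private.
    HTTPServer(("0.0.0.0", 1026), RoundRobinProxy).serve_forever()
```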

Regarding the persistence of data in Cosmos through Cygnus: the same way you create a farm of Orion processes, you may add more Cygnus processes in charge of receiving notifications from the Orion farm. Simply define a mapping strategy for all your entities, with subscriptions defining which entities are going to be notified to Cygnus process A, which others to Cygnus process B, etc. (a sketch follows below). The problem is the connectivity between this Cygnus farm and the Global Instance of Cosmos (located on the Internet). Assuming the Cygnus farm runs on VMs with private addressing, you must install some kind of proxy in another VM in order to access Cosmos.
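A sketch of that mapping strategy, assuming the NGSIv2 subscription API of later Orion versions (the NGSIv1 subscribeContext operation of the era would follow the same idea); the entity patterns and the Orion/Cygnus addresses are invented for the example:

```python
# Sketch: one subscription per entity pattern, each notifying a different
# Cygnus process. Addresses and patterns are hypothetical.
import requests

ORION = "http://orion-lb.example:1026"  # load balancer in front of the Orion farm

MAPPING = [
    ("Room.*", "http://10.0.0.21:5050/notify"),  # entities for Cygnus process A
    ("Car.*",  "http://10.0.0.22:5050/notify"),  # entities for Cygnus process B
]

for pattern, cygnus_url in MAPPING:
    subscription = {
        "subject": {"entities": [{"idPattern": pattern}]},
        "notification": {"http": {"url": cygnus_url}},
    }
    resp = requests.post(f"{ORION}/v2/subscriptions", json=subscription)
    resp.raise_for_status()
```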

About the HDFS quota: yes, it is 5 GB by default, but it can be changed on demand. It is worth mentioning that a new HDFS cluster with a higher storage capacity will be released in the short term.
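For reference, usage against that quota can be checked through WebHDFS. A minimal sketch, assuming the standard WebHDFS REST API; the endpoint host, port and user name are placeholders for your Cosmos account details:

```python
# Sketch: check HDFS space usage for a user directory via WebHDFS.
import requests

COSMOS = "http://cosmos.lab.fiware.org:14000"  # assumed HttpFS endpoint
USER = "myuser"                                # placeholder user name

resp = requests.get(
    f"{COSMOS}/webhdfs/v1/user/{USER}",
    params={"op": "GETCONTENTSUMMARY", "user.name": USER},
)
resp.raise_for_status()
summary = resp.json()["ContentSummary"]
print("bytes used:", summary["spaceConsumed"], "/ space quota:", summary["spaceQuota"])
```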

UPDATE: see this separate Q&A.

UPDATE2: ONTIMEINTERVAL subscriptions were removed in Orion 1.0.0 (March 2016).
