Understanding Local_Quorum


Question

We have 3 DCs (US, EU, ASIA) with 3 nodes each, so 9 nodes in total. We are experimenting anyway, so we can add nodes if we need to.

We are planning to use an RF of 2 per DC. In that case, our quorum comes to 2.

Using a R/W consistency of LOCAL_QUORUM, we can tolerate the failure of 1 node per DC, I assume. Only when a second node in a data center goes down are we in trouble.

But this calculator states otherwise. If we go for a cluster size of 3 and RF: 2, with WL/RL as QUORUM, it says we can survive the loss of no nodes. Am I missing something related to the quorum size and the total number of machines in a cluster?

Answer

Quorum, as you mentioned, is a majority of replicas (RF/2 + 1, rounded down). In the case of RF=2, the majority is 1+1=2. That means you need 2 acknowledgements from your replica nodes for the request to be successful. Thus, if one of your two replicas goes down, you cannot achieve consistency and the request will fail.
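The arithmetic above can be sketched quickly (my own illustration, not from the answer) to show why RF=2 tolerates no replica loss at QUORUM while RF=3 tolerates one:

```python
def quorum(rf: int) -> int:
    """Quorum is a majority of replicas: floor(RF / 2) + 1."""
    return rf // 2 + 1

def tolerable_failures(rf: int) -> int:
    """Replicas that may be down while a quorum request still succeeds."""
    return rf - quorum(rf)

for rf in (2, 3):
    print(f"RF={rf}: quorum={quorum(rf)}, "
          f"can lose {tolerable_failures(rf)} replica(s)")
# RF=2: quorum=2, can lose 0 replica(s)
# RF=3: quorum=2, can lose 1 replica(s)
```

This is exactly what the calculator reports: with RF=2 both replicas must acknowledge, so the loss of a single node fails the request.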

To be able to handle an outage and still satisfy quorum, I suggest upping the replication factor to 3.
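For example (a sketch only; the keyspace name `my_ks` is assumed, and the DC names must match those reported by your snitch), the per-DC replication factor can be raised with an `ALTER KEYSPACE`:

```sql
ALTER KEYSPACE my_ks
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'US': 3, 'EU': 3, 'ASIA': 3
  };
```

After changing the RF, run `nodetool repair` on each node so existing data is streamed to the newly responsible replicas.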

