Understanding Local_Quorum

Problem Description

We have 3 DCs [US, EU, ASIA] with 3 nodes each, for a total of 9 nodes. We are experimenting anyway, so we can add more nodes if we need to.

We are planning to use a replication factor (RF) of 2 per DC. In that case, our quorum comes to 2.

Using a read/write consistency of LOCAL_QUORUM, we can tolerate a failure of 1 node per DC, I assume. Only when a second node in a data center goes down are we in trouble.

But this calculator states otherwise. Here, if we go for a cluster size of 3 and RF: 2, with WL/RL as Quorum, it says we can survive the loss of no nodes. Am I missing something related to the quorum size and the total number of machines in a cluster?

Recommended Answer

Quorum, as you mentioned, is a majority of replicas (floor(RF/2) + 1). In the case of RF=2, that majority is 1 + 1 = 2. That means you need 2 acknowledgements from your replica nodes in order for the request to succeed. Thus, if one of your two replicas goes down, you cannot achieve consistency and the request will fail.
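To make the arithmetic concrete, here is a minimal sketch in Python of the standard quorum formula (floor(RF/2) + 1); it reproduces both the calculator's result for RF=2 and the RF=3 case:

```python
# Quorum in Cassandra is a majority of replicas: floor(RF / 2) + 1.
def quorum(rf: int) -> int:
    return rf // 2 + 1

# Number of replicas that can be down while a quorum request still succeeds.
def survivable_losses(rf: int) -> int:
    return rf - quorum(rf)

for rf in (2, 3):
    print(f"RF={rf}: quorum={quorum(rf)}, survivable losses={survivable_losses(rf)}")

# Output:
# RF=2: quorum=2, survivable losses=0   (matches the calculator: no nodes)
# RF=3: quorum=2, survivable losses=1
```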

To be able to handle an outage and still achieve quorum, I suggest upping the replication factor to 3.
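As a rough sketch of what that change might look like with the DataStax Python driver (the contact point, keyspace, and table names below are made-up placeholders, not from the original post):

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical contact point and keyspace/table names, for illustration only.
cluster = Cluster(["10.0.0.1"])
session = cluster.connect()

# Raise RF to 3 in every DC. The DC names must match what the cluster's
# snitch reports. After changing RF, run `nodetool repair` so the new
# replicas actually receive the existing data.
session.execute("""
    ALTER KEYSPACE my_keyspace
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'US': 3, 'EU': 3, 'ASIA': 3
    }
""")

# With RF=3 per DC, LOCAL_QUORUM needs 2 of the 3 local replicas, so each
# DC can lose one node and reads/writes still succeed, without waiting on
# remote DCs.
stmt = SimpleStatement(
    "SELECT * FROM my_keyspace.my_table WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
rows = session.execute(stmt, ("some-id",))
```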
