Data Partitioning in Cassandra

Question

Two questions.

Let's say I have a three-node Cassandra cluster: Node 1, Node 2, and Node 3.

Here I have assigned the token ranges as follows: Node 1 as 1 to 60, Node 2 as 61 to 120, and Node 3 as 121 to 255.

1) As per the Cassandra documentation, a partition key hashing into the 1 to 60 range should be stored on Node 1, but during replication this data is also copied to Node 2 and Node 3. So why do we need the partition separation at all? And in this case, which node serves the read for this partitioned data?
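For context, the replica count is governed per keyspace by its replication factor, not by the token assignment itself. A minimal sketch with the DataStax Java driver 3.x (the keyspace name my_ks and the contact point are placeholders) of a definition that would put a copy of every partition on all three nodes:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateKeyspace {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .build();
        Session session = cluster.connect();
        // With replication_factor = 3 on a 3-node ring, every node ends up
        // holding a copy of every partition, whichever token range "owns"
        // the partition key. Keyspace name is hypothetical.
        session.execute("CREATE KEYSPACE IF NOT EXISTS my_ks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 3}");
        cluster.close();
    }
}
```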

Next question: 2) If a node goes down, will there be a re-partitioning between the Cassandra nodes?

Answer

1) Since the number of nodes equals the replication factor, the tokens won't matter as much. Depending on the load-balancing policy in your client, the tokens can still determine which node gets the request, e.g. with TokenAwarePolicy.
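For illustration, a minimal sketch of opting into token-aware routing with the DataStax Java driver 3.x (contact point, keyspace, and table names are placeholders, not from the original question):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class TokenAwareRead {
    public static void main(String[] args) {
        // TokenAwarePolicy wraps a child policy: when the driver knows the
        // routing key, it prefers a replica that owns that token range as
        // coordinator, and falls back to the child policy otherwise.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // placeholder contact point
                .withLoadBalancingPolicy(
                        new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
                .build();
        Session session = cluster.connect();
        // Prepared statements let the driver derive the routing key from
        // the bound partition key. With RF = node count, every node is a
        // replica, so token awareness mainly saves a coordinator hop.
        PreparedStatement ps =
                session.prepare("SELECT * FROM my_ks.my_table WHERE id = ?");
        session.execute(ps.bind(42)); // hypothetical table and key
        cluster.close();
    }
}
```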

2) Token ranges are only redistributed when scaling your cluster; a node going down does not trigger re-partitioning.
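In other words (a sketch continuing the session and hypothetical table above): when one of the three replicas is down, nothing moves; the surviving replicas keep serving, and a read succeeds as long as its consistency level can still be met, e.g. QUORUM needs 2 of the 3 replicas here.

```java
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.SimpleStatement;

// Reuses the `session` from the sketch above. With RF = 3, QUORUM needs
// 2 of 3 replicas, so this read still succeeds with one node down; the
// token ranges themselves do not move.
SimpleStatement stmt =
        new SimpleStatement("SELECT * FROM my_ks.my_table WHERE id = 42");
stmt.setConsistencyLevel(ConsistencyLevel.QUORUM);
ResultSet rs = session.execute(stmt);
```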
