How can the seemingly odd behavior in a Cassandra cluster be explained?


Problem Description



I created an Apache Cassandra 2.1.2 cluster of 50 nodes and named it "Test Cluster", the default. Then, for some testing, I separated one node out of the 50-node cluster: I shut down Cassandra on it, deleted its data dirs, and ran nodetool flush. I then reconfigured that single node as its own cluster, "Single Node Test Cluster", editing the seeds, cluster_name and listen_address fields appropriately, and I also set up JMX correctly. Now here is what happens:

1. When I run nodetool status on the single node, I see only that one node, up and running. If I run nodetool describecluster, I see the new cluster name - "Single Node Test Cluster".
2. When I run nodetool commands on one of the other 49 nodes, I see 50 nodes in total, with the single node shown as down, and the cluster name reported as "Test Cluster".
3. There are datastax-agents installed on each node, and I had also set up OpsCenter to monitor the cluster. In OpsCenter I still see 50 nodes as up and the cluster name as "Test Cluster".

So my question is: why am I seeing these three different depictions of the same topology, and is this expected?

Another issue: when I start Cassandra on the single node, I can still see it somehow trying to communicate with the other nodes, and I keep getting a cluster name mismatch (Test Cluster != Single Node Test Cluster) WARN on the console before the single-node cluster starts.

Is this expected, or is this a bug in Cassandra?
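For reference, the reconfiguration of the detached node amounted to roughly the following; the service commands, paths and addresses are illustrative placeholders rather than my exact setup:

    nodetool flush                           # flush memtables to disk before stopping
    sudo service cassandra stop              # shut down Cassandra on this node only
    sudo rm -rf /var/lib/cassandra/data/*    # delete the data directories (default package path)

    # In this node's cassandra.yaml (values below are placeholders):
    #   cluster_name: 'Single Node Test Cluster'
    #   listen_address: 10.0.0.51                           # the node's own address
    #   seed_provider -> parameters -> seeds: "10.0.0.51"   # seed list is now just itself
    sudo service cassandra start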

Solution

Yes. If you remove a node from your cluster, you need to inform the rest of the cluster that it is gone.

You do that either by decommissioning the node while it is still in the cluster, or by running nodetool removenode from another node once the node is gone, i.e. you no longer have access to the box.

If you do neither of the above, you will still see the departed node in the other nodes' system.peers table.
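Roughly, the two options look like this on the command line; the host ID below is a placeholder, take the real one from nodetool status:

    # Option 1: the node is still up and part of the cluster - run on that node itself.
    nodetool decommission        # streams its data away and announces it is leaving

    # Option 2: the node is already gone - run from any remaining node.
    nodetool status              # note the Host ID of the node marked DN (down)
    nodetool removenode 1a2b3c4d-0000-0000-0000-000000000000    # placeholder Host ID

    # Afterwards the departed node should no longer show up in nodetool status
    # or in the system.peers table on the remaining nodes:
    cqlsh -e "SELECT peer, host_id FROM system.peers;"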

