Unable to gossip with any seeds but continuing since node is in its own seed list
Question

To remove a node from a 2-node cluster in AWS, I ran:

nodetool removenode <host ID>
After this, I was supposed to get my cluster back if I set up cassandra.yaml and cassandra-rackdc.properties correctly. I did that, but I am still not able to get my cluster back.
nodetool status shows only one node.

The significant system.log entries on Cassandra are:
INFO [main] 2017-08-14 13:03:46,409 StorageService.java:553 - Cassandra version: 3.9
INFO [main] 2017-08-14 13:03:46,409 StorageService.java:554 - Thrift API version: 20.1.0
INFO [main] 2017-08-14 13:03:46,409 StorageService.java:555 - CQL supported versions: 3.4.2 (default: 3.4.2)
INFO [main] 2017-08-14 13:03:46,445 IndexSummaryManager.java:85 - Initializing index summary manager with a memory pool size of 198 MB and a resize interval of 60 minutes
INFO [main] 2017-08-14 13:03:46,459 MessagingService.java:570 - Starting Messaging Service on /172.15.81.249:7000 (eth0)
INFO [ScheduledTasks:1] 2017-08-14 13:03:48,424 TokenMetadata.java:448 - Updating topology for all endpoints that have changed
WARN [main] 2017-08-14 13:04:17,497 Gossiper.java:1388 - Unable to gossip with any seeds but continuing since node is in its own seed list
INFO [main] 2017-08-14 13:04:17,499 StorageService.java:687 - Loading persisted ring state
INFO [main] 2017-08-14 13:04:17,500 StorageService.java:796 - Starting up server gossip
File contents:
cassandra.yaml : https://pastebin.com/A3BVUUUr
cassandra-rackdc.properties : https://pastebin.com/xmmvwksZ
system.log : https://pastebin.com/2KA60Sve
netstat -atun https://pastebin.com/Dsd17i0G
Both nodes have the same error log.
All required ports are open.
Any suggestions?
Answer
It's usually a best practice to have one seed node per DC if you have just two nodes available in your datacenter. You shouldn't make every node a seed node in this case.
I noticed that node1 has - seeds: "node1,node2" and node2 has - seeds: "node2,node1" in your configuration. By default, a node will start without contacting any other seeds if it can find its own IP address as the first element of the - seeds: ... section in the cassandra.yaml configuration file. That's what you can also find in your logs:
...Unable to gossip with any seeds but continuing since node is in its own seed list...
I suspect that in your case node1 and node2 are starting without contacting each other, since they identify themselves as seed nodes.
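As a sketch of what that configuration probably looks like (the addresses are placeholders, and this is a guess at the relevant fragment of your cassandra.yaml):

```yaml
# node1's cassandra.yaml (hypothetical addresses)
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # node1's own address is listed first, so node1 starts without
      # gossiping to node2 -- producing the warning seen in the logs
      - seeds: "<node1_ip>,<node2_ip>"
```

node2 would have the mirror image, with its own address first, so neither node ever waits for the other.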
Try using just node1 as the seed node in both instances' configurations and restart your cluster.

In case node1 is down and node2 is up, you have to change the - seeds: ... section in node1's configuration to point just to node2's IP address and then boot node1.
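Under that assumption, a minimal fix would be an identical seed list on both nodes (again, the address is a placeholder):

```yaml
# Same fragment in cassandra.yaml on BOTH node1 and node2:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # only node1 acts as seed; node2 must contact it on startup
      - seeds: "<node1_ip>"
```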
If your nodes can't find each other because of a firewall misconfiguration, it's usually a good approach to verify whether a specific port is accessible from another location. E.g. you can use nc to check whether a certain port is open:
nc -vz node1 7000
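Since nc may not be installed everywhere, the same reachability check can be sketched in Python. The host name node1 is a placeholder, and the port list reflects Cassandra's defaults (7000 gossip, 7001 TLS gossip, 7199 JMX, 9042 CQL):

```python
import socket

# Cassandra's default ports: gossip, TLS gossip, JMX, CQL native protocol
CASSANDRA_PORTS = [7000, 7001, 7199, 9042]

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Replace "node1" with the address of the peer you are checking.
    for port in CASSANDRA_PORTS:
        state = "open" if port_open("node1", port) else "closed/filtered"
        print(f"port {port}: {state}")
```

Run this from each node against the other; any port reported closed/filtered points at a security-group or firewall rule to fix.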
References and links

See the list of ports Cassandra uses at the following link:
http://docs.datastax.com/zh-CN/cassandra/3.0/cassandra/configuration/secureFireWall.html
See also the detailed documentation on running multiple nodes, with plenty of sample commands: http://docs.datastax.com/en/cassandra/2.1/cassandra/initialize/initializeMultipleDS.html