ElasticSearch: Unassigned Shards, how to fix?
I have an ES cluster with 4 nodes:
number_of_replicas: 1

search01 - master: false, data: false
search02 - master: true, data: true
search03 - master: false, data: true
search04 - master: false, data: true
I had to restart search03, and when it came back, it rejoined the cluster with no problem, but left 7 unassigned shards lying about.
{
  "cluster_name" : "tweedle",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 15,
  "active_shards" : 23,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 7
}
Now my cluster is in yellow state. What is the best way to resolve this issue?
- Delete (cancel) the shards?
- Move the shards to another node?
- Allocate the shards to the node?
- Update 'number_of_replicas' to 2?
- Something else entirely?
Interestingly, when a new index was added, that node started working on it and played nice with the rest of the cluster; it just left the unassigned shards lying about.
Follow-on question: am I doing something wrong to cause this in the first place? I don't have much confidence in a cluster that behaves this way when a node is restarted.
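As a first diagnostic step (not from the original question, and assuming ES 1.0+ where the `_cat` API exists), you can list exactly which shards are unassigned with `_cat/shards`. The sample output below is hypothetical, made up to match the cluster described above:

```shell
# In practice you would capture live output from the cluster with:
#   curl -s 'localhost:9200/_cat/shards' > /tmp/shards.txt
# Hypothetical sample output, matching the cluster in this question:
cat <<'EOF' > /tmp/shards.txt
tweedle 0 p STARTED    3014 1.2mb 10.0.0.12 search02
tweedle 0 r UNASSIGNED
tweedle 1 p STARTED    2875 1.1mb 10.0.0.14 search04
tweedle 1 r UNASSIGNED
EOF
# Columns are: index shard prirep state [docs store ip node].
# Print index, shard number, and primary/replica flag for unassigned shards:
awk '$4 == "UNASSIGNED" {print $1, $2, $3}' /tmp/shards.txt
```

This tells you which indices and shard numbers are stuck, which is useful input for any of the remediation options listed above.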
NOTE: If you're running a single node cluster for some reason, you might simply need to do the following:
curl -XPUT 'localhost:9200/_settings' -d '
{
"index" : {
"number_of_replicas" : 0
}
}'
OK, I've solved this with some help from ES support. Issue the following command to the API on all nodes (or the nodes you believe to be the cause of the problem):
curl -XPUT 'localhost:9200/<index>/_settings' \
-d '{"index.routing.allocation.disable_allocation": false}'
where <index> is the index you believe to be the culprit. If you have no idea, just run this on all nodes:
curl -XPUT 'localhost:9200/_settings' \
-d '{"index.routing.allocation.disable_allocation": false}'
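If re-enabling allocation still leaves stragglers behind, another option (not part of the original answer; the index, shard, and node names below are examples taken from this question) is to force-assign an unassigned replica with the `_cluster/reroute` allocate command:

```shell
# Build the reroute body separately so it can be sanity-checked before sending.
# "tweedle", shard 0, and node "search03" are example values from this question;
# allow_primary:false guards against accidentally creating an empty primary.
BODY='{
  "commands": [{
    "allocate": {
      "index": "tweedle",
      "shard": 0,
      "node": "search03",
      "allow_primary": false
    }
  }]
}'
# Validate the JSON locally before touching the cluster:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "body ok"
# Then send it (requires a live cluster):
#   curl -XPOST 'localhost:9200/_cluster/reroute' -d "$BODY"
```

Repeat per unassigned shard; the allocation-disable fix above is still the better first step, since it addresses the cause rather than the symptom.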
I also added this setting (index.routing.allocation.disable_allocation: false) to my yml config, and since then any restarts of the server/service have been problem free. The shards re-allocated immediately.
FWIW, to answer a frequently asked question: set MAX_HEAP_SIZE to 30G, unless your machine has less than 60G of RAM, in which case set it to half the available memory.
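That rule of thumb can be scripted. A minimal sketch, assuming Linux (it reads total RAM from /proc/meminfo) and reusing the MAX_HEAP_SIZE variable name from the answer above:

```shell
# Heap size rule: 30G, or half of RAM if the machine has under 60G total.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)   # total RAM in kB
half_gb=$(( total_kb / 1024 / 1024 / 2 ))               # half of RAM, in whole GB
if [ "$half_gb" -gt 30 ]; then
  heap="30g"
else
  heap="${half_gb}g"
fi
echo "MAX_HEAP_SIZE=$heap"
```

The 30G cap matters because JVM compressed object pointers stop working for heaps much beyond that size, which is the usual reason this limit is recommended.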