Too many open files warning from elasticsearch


Problem description

I keep getting the warning messages below, and I am not sure what should be done. I saw some related posts asking to increase the number of file descriptors.

How can I do that?

Even if I increase it now, will I encounter the same issue when new indices are added? (I am presently working with around 400 indices, each with 6 shards and 1 replica, and the number of indices tends to grow.)

[03:58:24,165][WARN ][cluster.action.shard     ] [node1] received shard failed for [index9][2], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index9][2] failed recovery]; nested: EngineCreationFailureException[[index9][2] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index9/2/index/segments_1 (Too many open files)]; ]] 
[03:58:24,166][WARN ][cluster.action.shard     ] [node1] received shard failed for [index15][0], node[node_hash2], [P], s[INITIALIZING], reason [Failed to create shard, message [IndexShardCreationException[[index15][0] failed to create shard]; nested: IOException[directory '/data/elasticsearch/whatever/nodes/0/indices/index15/0/index' exists and is a directory, but cannot be listed: list() returned null]; ]] 
[03:58:24,195][WARN ][cluster.action.shard     ] [node1] received shard failed for [index16][3], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index16][3] failed recovery]; nested: EngineCreationFailureException[[index16][3] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index16/3/index/segments_1 (Too many open files)]; ]] 
[03:58:24,196][WARN ][cluster.action.shard     ] [node1] received shard failed for [index17][0], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index17][0] failed recovery]; nested: EngineCreationFailureException[[index17][0] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index17/0/index/segments_1 (Too many open files)]; ]] 
[03:58:24,198][WARN ][cluster.action.shard     ] [node1] received shard failed for [index21][4], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index21][4] failed recovery]; nested: EngineCreationFailureException[[index21][4] failed to create engine]; nested: LockReleaseFailedException[Cannot forcefully unlock a NativeFSLock which is held by another indexer component: /data/elasticsearch/whatever/nodes/0/indices/index21/4/index/write.lock]; ]] 

Output of the nodes API:

curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'

{ 
  "ok" : true, 
  "cluster_name" : "whatever", 
  "nodes" : { 
    "node_hash1" : { 
      "name" : "node1", 
      "transport_address" : "transportip1", 
      "hostname" : "myhostip1", 
      "version" : "0.20.4", 
      "http_address" : "httpip1", 
      "attributes" : { 
        "data" : "false", 
        "master" : "true" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 8, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2133, 
          "total_cores" : 8, 
          "total_sockets" : 8, 
          "cores_per_socket" : 16, 
          "cache_size" : "4kb", 
          "cache_size_in_bytes" : 4096 
        }, 
        "mem" : { 
          "total" : "7gb", 
          "total_in_bytes" : 7516336128 
        }, 
        "swap" : { 
          "total" : "30gb", 
          "total_in_bytes" : 32218378240 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 26188, 
        "max_file_descriptors" : 16384 
      } 
    }, 
    "node_hash2" : { 
      "name" : "node2", 
      "transport_address" : "transportip2", 
      "hostname" : "myhostip2", 
      "version" : "0.20.4", 
      "attributes" : { 
        "master" : "false" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 4, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2400, 
          "total_cores" : 4, 
          "total_sockets" : 4, 
          "cores_per_socket" : 32, 
          "cache_size" : "20kb", 
          "cache_size_in_bytes" : 20480 
        }, 
        "mem" : { 
          "total" : "34.1gb", 
          "total_in_bytes" : 36700303360 
        }, 
        "swap" : { 
          "total" : "0b", 
          "total_in_bytes" : 0 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 24883, 
        "max_file_descriptors" : 16384 
      } 
    }, 
    "node_hash3" : { 
      "name" : "node3", 
      "transport_address" : "transportip3", 
      "hostname" : "myhostip3", 
      "version" : "0.20.4", 
      "attributes" : { 
        "master" : "false" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 4, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2666, 
          "total_cores" : 4, 
          "total_sockets" : 4, 
          "cores_per_socket" : 16, 
          "cache_size" : "8kb", 
          "cache_size_in_bytes" : 8192 
        }, 
        "mem" : { 
          "total" : "34.1gb", 
          "total_in_bytes" : 36700303360 
        }, 
        "swap" : { 
          "total" : "0b", 
          "total_in_bytes" : 0 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 25328, 
        "max_file_descriptors" : 16384 
      } 
    } 
  } 
}

Answer

How to increase the maximum number of allowed open files depends slightly on your Linux distribution. Here are some instructions for Ubuntu and CentOS:

http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/
http://pro.benjaminste.in/post/318453669/increase-the-number-of-file-descriptors-on-centos-and
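
In broad strokes, those instructions boil down to something like the following sketch. The "elasticsearch" user name and the 128k value are assumptions here; adjust them to your setup:

# Check the limit the elasticsearch process currently runs with
# (assumes the process runs as a user named "elasticsearch"):
sudo -u elasticsearch sh -c 'ulimit -n'

# Persistently raise the soft and hard open-file limits for that user.
# 131072 (128k) matches the recommendation below:
echo 'elasticsearch soft nofile 131072' | sudo tee -a /etc/security/limits.conf
echo 'elasticsearch hard nofile 131072' | sudo tee -a /etc/security/limits.conf

# The new limits only apply to new sessions, so restart elasticsearch
# from a fresh login for them to take effect.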

The elasticsearch documentation recommends setting the maximum file limit to 32k or 64k. Since you are at 16k and are already hitting that limit, I'd probably set it higher, to something like 128k. See: http://www.elasticsearch.org/guide/reference/setup/installation/
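
Note that the per-process limit is also capped by the kernel-wide maximum, so it is worth checking both. A quick way to do that with standard Linux interfaces (26188 is node1's process id from the _nodes output above; substitute your own):

# Kernel-wide ceiling on open file handles:
cat /proc/sys/fs/file-max

# Effective limits of the running elasticsearch process:
cat /proc/26188/limits | grep 'open files'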

After upping the number of open files and restarting elasticsearch, you will want to verify that it worked by re-running the curl command you mentioned:

curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'
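
For example, to pull out just the relevant field (assuming, as above, that elasticsearch listens on localhost:9200):

curl -s -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true' | grep max_file_descriptors

Every node should now report the new value instead of 16384.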

As you add more indices (along with more documents), you will also see the number of files elasticsearch keeps track of increase. If you notice performance degradation with all of the indices and documents, you can try adding a new node to your cluster: http://www.elasticsearch.org/guide/reference/setup/configuration/ - since you already have a sharded, replicated configuration, this should be a relatively painless process.
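
If you do add a node, the main thing it needs is the same cluster name, so it can discover and join the existing cluster. A minimal sketch, assuming a tar.gz install and an invented node name (the cluster name "whatever" comes from your _nodes output above):

# Append the cluster and node names to the new node's config file
# (packaged installs usually keep this under /etc/elasticsearch instead):
echo 'cluster.name: whatever' >> config/elasticsearch.yml
echo 'node.name: node4' >> config/elasticsearch.yml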

