Too many open files warning from elasticsearch


Problem description

I am continuously getting the warning messages below, and I am not sure what should be done. I have seen some related posts suggesting an increase in the number of file descriptors.

How do I do that?

Even if I increase it now, will I run into the same issue as I add new indices? (I am presently working with around 400 indices, 6 shards and 1 replica, and the number of indices tends to keep growing.)

[03:58:24,165][WARN ][cluster.action.shard     ] [node1] received shard failed for [index9][2], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index9][2] failed recovery]; nested: EngineCreationFailureException[[index9][2] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index9/2/index/segments_1 (Too many open files)]; ]] 
[03:58:24,166][WARN ][cluster.action.shard     ] [node1] received shard failed for [index15][0], node[node_hash2], [P], s[INITIALIZING], reason [Failed to create shard, message [IndexShardCreationException[[index15][0] failed to create shard]; nested: IOException[directory '/data/elasticsearch/whatever/nodes/0/indices/index15/0/index' exists and is a directory, but cannot be listed: list() returned null]; ]] 
[03:58:24,195][WARN ][cluster.action.shard     ] [node1] received shard failed for [index16][3], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index16][3] failed recovery]; nested: EngineCreationFailureException[[index16][3] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index16/3/index/segments_1 (Too many open files)]; ]] 
[03:58:24,196][WARN ][cluster.action.shard     ] [node1] received shard failed for [index17][0], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index17][0] failed recovery]; nested: EngineCreationFailureException[[index17][0] failed to open reader on writer]; nested: FileNotFoundException[/data/elasticsearch/whatever/nodes/0/indices/index17/0/index/segments_1 (Too many open files)]; ]] 
[03:58:24,198][WARN ][cluster.action.shard     ] [node1] received shard failed for [index21][4], node[node_hash3], [P], s[INITIALIZING], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[index21][4] failed recovery]; nested: EngineCreationFailureException[[index21][4] failed to create engine]; nested: LockReleaseFailedException[Cannot forcefully unlock a NativeFSLock which is held by another indexer component: /data/elasticsearch/whatever/nodes/0/indices/index21/4/index/write.lock]; ]] 

Output of the nodes API:

curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'

{ 
  "ok" : true, 
  "cluster_name" : "whatever", 
  "nodes" : { 
    "node_hash1" : { 
      "name" : "node1", 
      "transport_address" : "transportip1", 
      "hostname" : "myhostip1", 
      "version" : "0.20.4", 
      "http_address" : "httpip1", 
      "attributes" : { 
        "data" : "false", 
        "master" : "true" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 8, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2133, 
          "total_cores" : 8, 
          "total_sockets" : 8, 
          "cores_per_socket" : 16, 
          "cache_size" : "4kb", 
          "cache_size_in_bytes" : 4096 
        }, 
        "mem" : { 
          "total" : "7gb", 
          "total_in_bytes" : 7516336128 
        }, 
        "swap" : { 
          "total" : "30gb", 
          "total_in_bytes" : 32218378240 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 26188, 
        "max_file_descriptors" : 16384 
      } 
    }, 
    "node_hash2" : { 
      "name" : "node2", 
      "transport_address" : "transportip2", 
      "hostname" : "myhostip2", 
      "version" : "0.20.4", 
      "attributes" : { 
        "master" : "false" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 4, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2400, 
          "total_cores" : 4, 
          "total_sockets" : 4, 
          "cores_per_socket" : 32, 
          "cache_size" : "20kb", 
          "cache_size_in_bytes" : 20480 
        }, 
        "mem" : { 
          "total" : "34.1gb", 
          "total_in_bytes" : 36700303360 
        }, 
        "swap" : { 
          "total" : "0b", 
          "total_in_bytes" : 0 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 24883, 
        "max_file_descriptors" : 16384 
      } 
    }, 
    "node_hash3" : { 
      "name" : "node3", 
      "transport_address" : "transportip3", 
      "hostname" : "myhostip3", 
      "version" : "0.20.4", 
      "attributes" : { 
        "master" : "false" 
      }, 
      "os" : { 
        "refresh_interval" : 1000, 
        "available_processors" : 4, 
        "cpu" : { 
          "vendor" : "Intel", 
          "model" : "Xeon", 
          "mhz" : 2666, 
          "total_cores" : 4, 
          "total_sockets" : 4, 
          "cores_per_socket" : 16, 
          "cache_size" : "8kb", 
          "cache_size_in_bytes" : 8192 
        }, 
        "mem" : { 
          "total" : "34.1gb", 
          "total_in_bytes" : 36700303360 
        }, 
        "swap" : { 
          "total" : "0b", 
          "total_in_bytes" : 0 
        } 
      }, 
      "process" : { 
        "refresh_interval" : 1000, 
        "id" : 25328, 
        "max_file_descriptors" : 16384 
      } 
    } 
  } 
}
Answer

How you increase the maximum number of allowed open files depends slightly on your Linux distribution. Here are some instructions for Ubuntu and CentOS:

http://posidev.com/blog/2009/06/04/set-ulimit-parameters-on-ubuntu/
http://pro.benjaminste.in/post/318453669/increase-the-number-of-file-descriptors-on-centos-and
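
For example, on a system that uses pam_limits (both Ubuntu and CentOS do), a minimal sketch looks like the following; the elasticsearch user name is an assumption, so substitute whichever account runs the daemon:

# Raise the open-file limit for the (assumed) elasticsearch user by
# appending to /etc/security/limits.conf:
sudo tee -a /etc/security/limits.conf <<'EOF'
elasticsearch soft nofile 131072
elasticsearch hard nofile 131072
EOF

# Open a fresh session for that user and confirm the new limit took effect:
sudo su -s /bin/sh elasticsearch -c 'ulimit -n'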

The elasticsearch documentation recommends setting the maximum file limit to 32k or 64k. Since you are already hitting the limit at 16k, I would probably set it higher, to something like 128k. With roughly 400 indices × 6 shards × 2 copies ≈ 4800 shards spread over your two data nodes, each shard holding several segment files, 16k descriptors per node get exhausted quickly. See: http://www.elasticsearch.org/guide/reference/setup/installation/
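
If elasticsearch was installed from the deb or rpm packages, the init script usually reads a MAX_OPEN_FILES variable; this is an assumption about your particular install, so check your own init script for the exact variable and file:

# Assumption: the init script sources /etc/default/elasticsearch
# (Debian/Ubuntu) or /etc/sysconfig/elasticsearch (CentOS).
echo 'MAX_OPEN_FILES=131072' | sudo tee -a /etc/default/elasticsearch
sudo service elasticsearch restart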

After raising the number of open files and restarting elasticsearch, you will want to verify that it worked by re-running the curl command you mentioned:

curl -XGET 'http://localhost:9200/_nodes?os=true&process=true&pretty=true'
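
If you only care about the relevant field, you can pipe the same request through grep:

# Each node should now report the raised limit (131072 is the value
# assumed in the examples above):
curl -s 'http://localhost:9200/_nodes?os=true&process=true&pretty=true' | grep max_file_descriptors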

As you add more indices (along with more documents), you will also see the number of files elasticsearch keeps track of increase. If you notice performance degradation across all of the indices and documents, you can try adding a new node to your cluster: http://www.elasticsearch.org/guide/reference/setup/configuration/ - since you already have a sharded, replicated configuration, this should be a relatively painless process.
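
As a minimal sketch, the new node's elasticsearch.yml mostly just needs to join the existing cluster: cluster.name must match the value from your output above ("whatever"), while node.name here is a hypothetical label and the master/data flags mirror your existing data nodes:

# elasticsearch.yml on the new node
cluster.name: whatever
node.name: "node4"
node.master: false
node.data: true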
