Logstash sprintf formatting for elasticsearch output plugin not working


Question


I am having trouble using sprintf to reference event fields in the elasticsearch output plugin, and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:

{
          "beat" => {
        "hostname" => "ca86fed16953",
            "name" => "ca86fed16953",
         "version" => "6.5.1"
    },
    "@timestamp" => 2018-12-02T05:13:21.879Z,
          "host" => {
        "name" => "ca86fed16953"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ],
        "fields" => {
        "env" => "DEV"
    },
        "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
      "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
        "bgp_id" => "42313900",
       "message" => "{<some message here>}",
        "offset" => 1440990627,
         "input" => {
        "type" => "log"
    },
        "docker" => {
        "container" => {
            "id" => "logstash_DEV.log"
        }
    }
}
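A sprintf reference such as `%{[fields][env]}` walks the nested event structure one bracketed key at a time. A minimal Python sketch of that lookup (an illustration only, not Logstash's actual Ruby implementation) against the event above:

```python
# Sketch of how a Logstash-style field reference such as "[fields][env]"
# resolves against a nested event. This mirrors the behavior for
# illustration; it is not Logstash's actual implementation.
import re

def resolve_field(event: dict, reference: str):
    """Follow each [key] segment of the reference down into the event."""
    value = event
    for key in re.findall(r"\[([^\]]+)\]", reference):
        value = value[key]
    return value

# Relevant slice of the event shown above.
event = {
    "fields": {"env": "DEV"},
    "docker": {"container": {"id": "logstash_DEV.log"}},
}

print(resolve_field(event, "[fields][env]"))            # DEV
print(resolve_field(event, "[docker][container][id]"))  # logstash_DEV.log
```

Since `[fields][env]` resolves to `DEV` here, the reference itself is well-formed; the problem lies elsewhere, as the answer below shows.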


I am trying to index the files based on Filebeat's environment. Here is my config file:

input {
  http { }
  beats {
    port => 5044
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/pipeline/patterns"]
    break_on_match => false
    match => { "message" => ["%{RUBY_LOGGER}"] }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
  }
  stdout { codec => rubydebug }
}
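For the event shown earlier, the index setting should expand to `DEV-2018.12.02`: the field reference resolves first, then the `%{+yyyy.MM.dd}` date pattern is filled from `@timestamp`. A sketch of that expansion (assuming the Joda-style `yyyy.MM.dd` pattern corresponds to strftime's `%Y.%m.%d`; this is an illustration, not Logstash code):

```python
# Sketch of expanding the index pattern "%{[fields][env]}-%{+yyyy.MM.dd}"
# for the event above. Assumption: the Joda-style date pattern yyyy.MM.dd
# maps to strftime's %Y.%m.%d.
from datetime import datetime, timezone

def expand_index(event: dict, timestamp: datetime) -> str:
    env = event["fields"]["env"]           # %{[fields][env]}
    date = timestamp.strftime("%Y.%m.%d")  # %{+yyyy.MM.dd}
    return f"{env}-{date}"

# @timestamp from the event: 2018-12-02T05:13:21.879Z
ts = datetime(2018, 12, 2, 5, 13, 21, tzinfo=timezone.utc)
print(expand_index({"fields": {"env": "DEV"}}, ts))  # DEV-2018.12.02
```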


I would think the referenced event fields would have already been populated by the time the event reaches the elasticsearch output plugin. However, on the Kibana side, it does not register the formatted index. Instead, it looks like this:

What am I doing wrong?

Answer


In Elasticsearch Output plugin docs:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-manage_template


Should you require support for other index names, or would like to change the mappings in the template in general, a custom template can be specified by setting template to the path of a template file.


Setting manage_template to false disables this feature. If you require more control over template creation (e.g. creating indices dynamically based on field names), you should set manage_template to false and use the REST API to apply your templates manually.


By default, the elasticsearch output expects you to supply a custom template when using an index name other than logstash-%{+YYYY.MM.dd}. To disable template management, we need to include the manage_template => false setting.


With this new information, the working config should be:

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
    manage_template => false
  }
  stdout { codec => rubydebug }
}
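If you later do want custom mappings for these indices, you can apply a template manually via the REST API, as the quoted docs suggest. A sketch (the template name dev-template and the DEV-* pattern are assumptions for this setup, not values from the question):

```shell
# Sketch: manually apply an index template over the REST API, as the docs
# recommend when manage_template is false. Template name (dev-template)
# and index pattern (DEV-*) are illustrative assumptions.
curl -X PUT "elasticsearch:9200/_template/dev-template" \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["DEV-*"],
    "settings": { "number_of_shards": 1 }
  }'
```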

