Logstash sprintf formatting for elasticsearch output plugin not working


Question

I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:

{
          "beat" => {
        "hostname" => "ca86fed16953",
            "name" => "ca86fed16953",
         "version" => "6.5.1"
    },
    "@timestamp" => 2018-12-02T05:13:21.879Z,
          "host" => {
        "name" => "ca86fed16953"
    },
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ],
        "fields" => {
        "env" => "DEV"
    },
        "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
      "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
        "bgp_id" => "42313900",
       "message" => "{<some message here>}",
        "offset" => 1440990627,
         "input" => {
        "type" => "log"
    },
        "docker" => {
        "container" => {
            "id" => "logstash_DEV.log"
        }
    }
}
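The index option in the config below uses Logstash's sprintf field-reference syntax, which substitutes values from an event like the one above. As a rough illustration of how a reference such as %{[fields][env]} resolves against a nested event, here is a simplified Python sketch (this is not Logstash's actual implementation, and date patterns like %{+yyyy.MM.dd} are deliberately left unresolved):

```python
import re

def resolve_sprintf(template, event):
    """Resolve Logstash-style %{[a][b]} field references against a nested
    event dict. Date-math patterns like %{+yyyy.MM.dd} are not handled
    here and pass through unchanged."""
    def lookup(match):
        ref = match.group(1)
        # Split "[fields][env]" into path segments; a bare name like
        # "message" is treated as a single top-level key.
        keys = re.findall(r"\[([^\]]+)\]", ref) or [ref]
        value = event
        for key in keys:
            value = value[key]
        return str(value)
    # Match %{[nested][ref]} or %{bare_name}, but not %{+date.patterns}.
    return re.sub(r"%\{(\[[^}]+\]|[^}+][^}]*)\}", lookup, template)

event = {"fields": {"env": "DEV"}}
print(resolve_sprintf("%{[fields][env]}-index", event))  # → DEV-index
```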

I am trying to index the files based on Filebeat's environment. Here is my config file:

input {
  http { }
  beats {
    port => 5044
  }
}

filter {
  grok {
    patterns_dir => ["/usr/share/logstash/pipeline/patterns"]
    break_on_match => false
    match => { "message" => ["%{RUBY_LOGGER}"] }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
  }
  stdout { codec => rubydebug }
}

I would think the referenced event fields would already be populated by the time the event reaches the elasticsearch output plugin. However, on the Kibana side, the formatted index name is not registered.

What am I doing wrong?

Answer

From the Elasticsearch output plugin docs:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-manage_template

Should you require support for other index names, or would like to change the mappings in the template in general, a custom template can be specified by setting template to the path of a template file.

Setting manage_template to false disables this feature. If you require more control over template creation, (e.g. creating indices dynamically based on field names) you should set manage_template to false and use the REST API to apply your templates manually.
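The "apply your templates manually" step amounts to a single PUT against the template REST API. A minimal sketch follows, assuming a DEV-* index pattern to match the index names produced by the config in the question; the template body and the put_template helper are hypothetical, not part of the original answer, and the network call is only defined, not executed, since it needs a running Elasticsearch:

```python
import json
import urllib.request

# Hypothetical template matching the DEV-* style index names used above;
# adjust index_patterns, settings, and mappings to your data.
template_body = {
    "index_patterns": ["DEV-*"],
    "settings": {"number_of_shards": 1},
}

def put_template(es_host, name, body):
    """PUT the template to a running Elasticsearch via the legacy
    _template API (call this yourself against your cluster)."""
    req = urllib.request.Request(
        f"{es_host}/_template/{name}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)

# e.g. put_template("http://elasticsearch:9200", "dev-logs", template_body)
print(json.dumps(template_body))
```

With the template applied by hand like this, manage_template => false in the Logstash output stops Logstash from trying to manage a template for you.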

By default, the elasticsearch output requires you to specify a custom template if you use an index name other than logstash-%{+YYYY.MM.dd}. To disable this behavior, we need to include the manage_template => false option.

So with this new set of info, the working config should be:

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[fields][env]}-%{+yyyy.MM.dd}"
    manage_template => false
  }
  stdout { codec => rubydebug }
}
