Logstash - how do I split an array using the split filter without a target?

Question

I'm trying to split a JSON array into multiple events. Here's a sample input:

{"results" : [{"id": "a1", "name": "hello"}, {"id": "a2", "name": "logstash"}]}

Here's my filter and output config:

filter {
  split {
    field => "results"
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

This produces 2 events, one for each of the JSON objects in the array, and it's close to what I'm looking for:

{
       "results" => {
          "id" => "a1",
        "name" => "hello"
    },
      "@version" => "1",
    "@timestamp" => "2015-05-30T18:33:21.527Z",
          "host" => "laptop"
}
{
       "results" => {
          "id" => "a2",
        "name" => "logstash"
    },
      "@version" => "1",
    "@timestamp" => "2015-05-30T18:33:21.527Z",
          "host" => "laptop"
}

The problem is the nested "results" part ("results" being the default value for the target parameter). Is there a way to use the split filter without producing the nested JSON, and get something like this:

{
            "id" => "a1",
          "name" => "hello",
      "@version" => "1",
    "@timestamp" => "2015-05-30T18:33:21.527Z",
          "host" => "laptop"
}
{
            "id" => "a2",
          "name" => "logstash",
      "@version" => "1",
    "@timestamp" => "2015-05-30T18:33:21.527Z",
          "host" => "laptop"
}

The purpose is to feed this to the ElasticSearch output with each event being a document with document_id => "id". Any good solutions are welcome!
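
For reference, a minimal sketch of what that output could look like once the events are flat (the hosts value and index name below are assumptions, and the hosts option is the name used by recent versions of the elasticsearch output plugin; document_id => "%{id}" pulls the id from each event):

output {
  elasticsearch {
    hosts       => ["localhost:9200"]   # assumed endpoint, adjust as needed
    index       => "results"            # hypothetical index name
    document_id => "%{id}"              # use each event's id field as the document id
  }
}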

Answer

If you know what all of the fields will be (as it appears you do), you can simply rename the fields:

    mutate {
            rename => [
                    "[results][id]", "id",
                    "[results][name]", "name"
            ]
            remove_field => "results"
    }
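
Putting the two filters together, a sketch of the complete filter block (using the field names from the sample input) would be:

filter {
  # one event per element of the "results" array
  split {
    field => "results"
  }
  # lift the sub-fields to the top level, then drop the now-empty "results" hash
  mutate {
    rename => [
      "[results][id]", "id",
      "[results][name]", "name"
    ]
    remove_field => "results"
  }
}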

If you didn't know what all of the fields were, you could write a ruby code filter that did an event['results'].each... and created new fields from the sub-fields of results.
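
As a rough sketch of that approach, run after the split filter (written against the newer event.get/event.set API introduced in Logstash 5; on the releases current when this was asked, the event['results'] indexing shown above is the equivalent):

filter {
  ruby {
    # copy every sub-field of "results" to the top level, then drop "results"
    code => "
      results = event.get('results')
      if results.is_a?(Hash)
        results.each { |key, value| event.set(key, value) }
        event.remove('results')
      end
    "
  }
}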
