Logstash elapsed filter


Question

I am trying to use the elapsed.rb filter in the ELK stack and can't seem to figure it out. I am not very familiar with grok, and I believe that is where my issue lies. Can anyone help?

Sample log file:

{
    "application_name": "Application.exe",
    "machine_name": "Machine1",
    "user_name": "testuser",
    "entry_date": "2015-03-12T18:12:23.5187552Z",
    "chef_environment_name": "chefenvironment1",
    "chef_logging_cookbook_version": "0.1.9",
    "logging_level": "INFO",
    "performance": {
        "process_name": "account_search",
        "process_id": "Machine1|1|635617555435187552",
        "event_type": "enter"
    },
    "thread_name": "1",
    "logger_name": "TestLogger",
    "@version": "1",
    "@timestamp": "2015-03-12T18:18:48.918Z",
    "type": "rabbit",
    "log_from": "rabbit"
}

{
    "application_name": "Application.exe",
    "machine_name": "Machine1",
    "user_name": "testuser",
    "entry_date": "2015-03-12T18:12:23.7527462Z",
    "chef_environment_name": "chefenvironment1",
    "chef_logging_cookbook_version": "0.1.9",
    "logging_level": "INFO",
    "performance": {
        "process_name": "account_search",
        "process_id": "Machine1|1|635617555435187552",
        "event_type": "exit"
    },
    "thread_name": "1",
    "logger_name": "TestLogger",
    "@version": "1",
    "@timestamp": "2015-03-12T18:18:48.920Z",
    "type": "rabbit",
    "log_from": "rabbit"
}

Sample .conf file:

input {
  rabbitmq {
    host => "SERVERNAME"
    add_field => ["log_from", "rabbit"]
    type => "rabbit"
    user => "testuser"
    password => "testuser"
    durable => "true"
    exchange => "Logging"
    queue => "testqueue"
    codec => "json"
    exclusive => "false"
    passive => "true"
  }
}


filter {

   grok {
     match => ["message", "%{TIMESTAMP_ISO8601} START id: (?<process_id>.*)"]
     add_tag => [ "taskStarted" ]
   }

   grok {
     match => ["message", "%{TIMESTAMP_ISO8601} END id: (?<process_id>.*)"]
     add_tag => [ "taskTerminated"]
   }

   elapsed {
    start_tag => "taskStarted"
    end_tag => "taskTerminated"
    unique_id_field => "process_id"
    timeout => 10000
    new_event_on_match => false
  }
}

output {
  file {
    codec => json { charset => "UTF-8" }
    path => "test.log"
  }
}

Answer

You would not need to use a grok filter, because your input is already in JSON format. You'd need to do something like this:

if [performance][event_type] == "enter" {
  mutate { add_tag => ["taskStarted"] }
} else if [performance][event_type] == "exit" {
  mutate { add_tag => ["taskTerminated"] }
}
elapsed {
  start_tag => "taskStarted"
  end_tag => "taskTerminated"
  unique_id_field => "performance.process_id"
  timeout => 10000
  new_event_on_match => false
}

I'm not positive about that unique_id_field -- I think it should work, but if it doesn't, you could change it to just process_id and copy the nested value into a top-level field with add_field => { "process_id" => "%{[performance][process_id]}" }.
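That fallback could be sketched as the following filter block. This is illustrative only: the top-level process_id field created by mutate is an assumption introduced here to work around the nested-field reference, not something confirmed by the original answer.

filter {
  # Assumption: copy the nested value into a flat top-level field,
  # since elapsed's unique_id_field may not resolve nested paths.
  mutate {
    add_field => { "process_id" => "%{[performance][process_id]}" }
  }

  if [performance][event_type] == "enter" {
    mutate { add_tag => ["taskStarted"] }
  } else if [performance][event_type] == "exit" {
    mutate { add_tag => ["taskTerminated"] }
  }

  elapsed {
    start_tag => "taskStarted"
    end_tag => "taskTerminated"
    unique_id_field => "process_id"
    timeout => 10000
    new_event_on_match => false
  }
}

With this arrangement, elapsed pairs the "enter" and "exit" events by the flattened process_id value and emits the elapsed time on the matching event.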

