logstash, syslog and grok


Question


I am working on an ELK-stack configuration. logstash-forwarder is used as a log shipper, each type of log is tagged with a type-tag:

{
  "network": {
    "servers": [ "___:___" ],
    "ssl ca": "___",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/secure"
      ],
      "fields": { 
        "type": "syslog"
      }
    }
  ]
}

That part works fine... Now, I want logstash to split the message string into its parts; luckily, that is already implemented in the default grok patterns, so the logstash.conf remains simple so far:

input {
    lumberjack {
        port => 6782
        ssl_certificate => "___"
        ssl_key => "___"
    }
}
filter {
    if [type] == "syslog" {
        grok {
            match => [ "message", "%{SYSLOGLINE}" ]
        }
    }
}
output {
    elasticsearch {
        cluster => "___"
        template => "___"
        template_overwrite => true
        node_name => "logstash-___"
        bind_host => "___"
    }
}

The issue I have here is that the document received by elasticsearch still holds the whole line (including timestamp etc.) in the message field. Also, @timestamp still shows the date when logstash received the message, which makes searching awkward, since kibana queries @timestamp to filter by date... Any idea what I'm doing wrong?

Thanks, Daniel

Solution

The reason your "message" field contains the original log line (including timestamps, etc.) is that the grok filter by default won't allow existing fields to be overwritten. In other words, even though the SYSLOGLINE pattern,

SYSLOGLINE %{SYSLOGBASE2} %{GREEDYDATA:message}

captures the message into a "message" field, it won't overwrite the current field value. The solution is to set the grok filter's "overwrite" parameter:

grok {
    match => [ "message", "%{SYSLOGLINE}" ]
    overwrite => [ "message" ]
}
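
With overwrite in place, a /var/log/secure line along these lines (a made-up example, not from the original question):

Jun 10 04:17:01 myhost sshd[1234]: Failed password for root from 10.0.0.1 port 22 ssh2

should come out with the SYSLOGBASE2 fields (timestamp, logsource, program, pid in the stock pattern set) split off and the message field reduced to the free-text part, roughly:

{
  "timestamp": "Jun 10 04:17:01",
  "logsource": "myhost",
  "program": "sshd",
  "pid": "1234",
  "message": "Failed password for root from 10.0.0.1 port 22 ssh2"
}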

To populate the "@timestamp" field, use the date filter. This will probably work for you:

date {
    match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
}
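
Both timestamp patterns are needed because syslog pads single-digit days with an extra space ("MMM  d" matches e.g. "Jun  3", while "MMM dd" matches "Jun 13"). Putting it together, the whole filter section might end up like the sketch below; the remove_field setting is optional and, on a date filter, drops the now-redundant raw timestamp field only once parsing has succeeded:

filter {
    if [type] == "syslog" {
        grok {
            match => [ "message", "%{SYSLOGLINE}" ]
            overwrite => [ "message" ]
        }
        date {
            match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
            remove_field => [ "timestamp" ]
        }
    }
}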
