Logstash - add fields to lines (events) containing a word


Problem description



I am super new to Logstash and have searched all the documentation. I tried a few things, but none of them worked. I have a log with lines like this:

[2014-06-03 17:00:27,696][INFO ][node                     ] [Savage Steel] initialized
[2014-06-03 17:00:27,697][INFO ][node                     ] [Savage Steel] starting ...
[2014-06-03 17:00:27,824][INFO ][transport                ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.35.142.60:9300]}
[2014-06-03 17:00:30,981][INFO ][cluster.service          ] [Savage Steel] new_master [Savage Steel][Sb9jmVPZTgGsK1Yyj_xG-A][20EX17512][inet[/10.35.142.60:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-03 17:00:31,030][INFO ][discovery                ] [Savage Steel] elasticsearch/Sb9jmVPZTgGsK1Yyj_xG-A
[2014-06-03 17:00:31,062][INFO ][gateway                  ] [Savage Steel] recovered [0] indices into cluster_state
[2014-06-03 17:00:31,098][INFO ][http                     ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.35.142.60:9200]}

In case you're wondering, they're ElasticSearch logs. I want to capture lines which have the term "bound_address" in them, and add a field called "test field".

My logstash configuration file is as follows:

input {
    file {
        codec => multiline {
          pattern => "^\s"
          what => "previous"
        }
        path => ["C:\Users\spanguluri\Downloads\elasticsearch\logs\elasticsearch.log"]
        start_position => "beginning"
    }
}

filter {
    grok {
        match => [ "message", "%{YEAR:annual}" ]
        add_field => { "foo_field" => "hello world, from %{host}" }
    }

    if ([message] =~ /bound_address/) {
        add_field => { "test_field" => "test field" }
    }
}

output {
    elasticsearch {
        protocol => "http"
        host => "localhost"
        port => "9200"
        index => "logstash-%{+YYYY.MM.dd}"
    }
}

When Logstash is started, it keeps complaining: expected one of #, { at line 18, column 12 (byte 378) after filter.

Can someone please look into this? Thanks!

Solution

There is no filter named add_field; it is an option that belongs inside filters such as mutate, so it cannot appear on its own at the top level of the filter block.

You can change this:

if ([message] =~ /bound_address/) {
    add_field => { "test_field" => "test field" }
}

To something more like this, using the mutate filter:

if ([message] =~ /bound_address/) {
    mutate {
        add_field => { "test_field" => "test field" }
    }
}
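
Putting that fix back into the original configuration, the whole filter section would read as follows (this is just the question's own grok and foo_field settings unchanged, with the conditional now wrapping a mutate filter):

```
filter {
    grok {
        # Extract the year from the start of each line, as in the question
        match => [ "message", "%{YEAR:annual}" ]
        add_field => { "foo_field" => "hello world, from %{host}" }
    }

    # Only events whose message contains "bound_address" get the extra field
    if ([message] =~ /bound_address/) {
        mutate {
            add_field => { "test_field" => "test field" }
        }
    }
}
```

The key point is that a conditional in the filter block selects which events a filter runs on, but the body of the conditional must still contain a named filter (here mutate); bare options like add_field are only valid inside one.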
