Logstash - add fields to lines (events) containing a word
Question
I am super new to logstash, and searched all documentation. There are some things I tried, but none of them worked. I have a log with lines like this:
[2014-06-03 17:00:27,696][INFO ][node ] [Savage Steel] initialized
[2014-06-03 17:00:27,697][INFO ][node ] [Savage Steel] starting ...
[2014-06-03 17:00:27,824][INFO ][transport ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.35.142.60:9300]}
[2014-06-03 17:00:30,981][INFO ][cluster.service ] [Savage Steel] new_master [Savage Steel][Sb9jmVPZTgGsK1Yyj_xG-A][20EX17512][inet[/10.35.142.60:9300]], reason: zen-disco-join (elected_as_master)
[2014-06-03 17:00:31,030][INFO ][discovery ] [Savage Steel] elasticsearch/Sb9jmVPZTgGsK1Yyj_xG-A
[2014-06-03 17:00:31,062][INFO ][gateway ] [Savage Steel] recovered [0] indices into cluster_state
[2014-06-03 17:00:31,098][INFO ][http ] [Savage Steel] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.35.142.60:9200]}
In case you're wondering, they're ElasticSearch logs. I want to capture lines which have the term "bound_address" in them, and add a field called "test field".
My logstash configuration file is as follows:
input {
file {
codec => multiline {
pattern => "^\s"
what => "previous"
}
path => ["C:\Users\spanguluri\Downloads\elasticsearch\logs\elasticsearch.log"]
start_position => "beginning"
}
}
filter {
grok {
match => [ "message", "%{YEAR:annual}" ]
add_field => { "foo_field" => "hello world, from %{host}" }
}
if ([message] =~ /bound_address/) {
add_field => { "test_field" => "test field" }
}
}
output {
elasticsearch {
protocol => "http"
host => "localhost"
port => "9200"
index => "logstash-%{+YYYY.MM.dd}"
}
}
When logstash is started, it keeps complaining: expected one of #, { at line 18, column 12 (byte 378) after filter
Can someone please look into this? Thanks!
There is no filter named add_field.
You can change this:
if ([message] =~ /bound_address/) {
add_field => { "test_field" => "test field" }
}
To something more like this, using the mutate filter:
if ([message] =~ /bound_address/) {
mutate {
add_field => { "test_field" => "test field" }
}
}
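Putting it together, the corrected filter section from the question would look like this sketch (the grok block is unchanged; only the bare add_field inside the conditional is wrapped in mutate):

```conf
filter {
  grok {
    match => [ "message", "%{YEAR:annual}" ]
    # add_field is fine here: it is a common option that every
    # filter plugin (including grok) accepts.
    add_field => { "foo_field" => "hello world, from %{host}" }
  }
  if ([message] =~ /bound_address/) {
    # Inside a conditional, add_field must live inside a plugin
    # block such as mutate; it cannot stand on its own.
    mutate {
      add_field => { "test_field" => "test field" }
    }
  }
}
```

This is also why the original grok block parsed without complaint while the conditional did not: add_field is only valid as an option of a filter plugin, never as a top-level statement in filter or inside an if block.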