Customize logs from filebeat in Logstash's beats.config


Problem description

I am using ELK with Filebeat. I am sending logs from Filebeat to Logstash, from there to Elasticsearch, and visualizing them in Kibana. Below is the JSON document as it appears in Kibana's log view:

{
  "_index": "filebeat-6.4.2-2018.10.30",
  "_type": "doc",
  "_source": {
    "@timestamp": "2018-10-30T09:15:31.697Z",
    "fields": {
      "server": "server1"
    },
    "prospector": {
      "type": "log"
    },
    "host": {
      "name": "kushmathapa"
    },
    "message": "{ \"datetime\": \"2018-10-23T18:04:00.811660Z\", \"level\": \"ERROR\", \"message\": \"No response from remote. Handshake timed out or transport failure detector triggered.\" }",
    "source": "C:\\logs\\batch-portal\\error.json",
    "input": {
      "type": "log"
    },
    "beat": {
      "name": "kushmathapa",
      "hostname": "kushmathapa",
      "version": "6.4.2"
    },
    "offset": 0,
    "tags": [
      "lighthouse1",
      "controller",
      "trt"
    ]
  },
  "fields": {
    "@timestamp": [
      "2018-10-30T09:15:31.697Z"
    ]
  }
}

I want it to be displayed as:

{
  "_index": "filebeat-6.4.2-2018.10.30",
  "_type": "doc",
  "_source": {
    "@timestamp": "2018-10-30T09:15:31.697Z",
    "fields": {
      "server": "server1"
    },
    "prospector": {
      "type": "log"
    },
    "host": {
      "name": "kushmathapa"
    },
    "datetime": "2018-10-23T18:04:00.811660Z",
    "log_level": "ERROR",
    "message": "No response from remote. Handshake timed out or transport failure detector triggered.",
    "source": "C:\\logs\\batch-portal\\error.json",
    "input": {
      "type": "log"
    },
    "beat": {
      "name": "kushmathapa",
      "hostname": "kushmathapa",
      "version": "6.4.2"
    },
    "offset": 0,
    "tags": [
      "lighthouse1",
      "controller",
      "trt"
    ]
  },
  "fields": {
    "@timestamp": [
      "2018-10-30T09:15:31.697Z"
    ]
  }
}

My beats.config currently looks like this:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug { metadata => true }
  }
}

I have applied filters, but I seem to be missing something.

Recommended answer

You can go with a config file that looks something like the one below. In the grok filter, add the format of the log you want to ingest into Elasticsearch (refer to the example config):

input {
  beats {
    port => 5044
    id => "my_plugin_id"
    tags => ["logs"]
    type => "abc"
  }
}

filter {
  if [type] == "abc" {
    # Strip carriage returns before pattern matching
    mutate {
      gsub => [ "message", "\r", "" ]
    }

    # Extract the timestamp and log level, then overwrite "message"
    # with just the remaining text
    grok {
      break_on_match => true
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:timestamp}%{SPACE}%{LOGLEVEL:log_level}%{SPACE}%{GREEDYDATA:message}"
        ]
      }
      overwrite => [ "message" ]
    }

    # The captured timestamp is ISO8601, so use the built-in pattern
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

output {
  if [type] == "abc" {
    elasticsearch {
      hosts => ["ip of elasticsearch:port_number of elasticsearch"]
      index => "logfiles"
    }
  } else {
    elasticsearch {
      hosts => ["ip of elasticsearch:port_number of elasticsearch"]
      index => "task_log"
    }
  }
  stdout {
    codec => rubydebug { metadata => true }
  }
}
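Since the `message` field in the question is itself a JSON string (`{ "datetime": ..., "level": ..., "message": ... }`), a `json` filter is an alternative worth sketching: it parses the embedded document into top-level fields directly, without grok patterns. This is a minimal sketch under that assumption; the `rename` of `level` to `log_level` is only there to match the field name the question asks for:

```
filter {
  # Parse the JSON string held in "message" into top-level fields
  # (datetime, level, message); with no "target" set, the parsed
  # fields land at the root of the event
  json {
    source => "message"
  }

  # Rename "level" to the desired "log_level" field name
  mutate {
    rename => { "level" => "log_level" }
  }

  # Use the embedded ISO8601 datetime as the event timestamp
  date {
    match => [ "datetime", "ISO8601" ]
  }
}
```

If the parsed object also contains a `message` key, the `json` filter replaces the original `message` field with it, which is exactly the desired output shown above.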
