Logstash - Send output from log files to ELK


Problem Description


I have an index in Elasticsearch that has a field named locationCoordinates. It's being sent to Elasticsearch from Logstash.

The data in this field looks like this...

-38.122, 145.025
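As an aside, that "lat,lon" ordering is exactly what Elasticsearch expects for the string form of a geo_point (the array form is reversed: [lon, lat]). A minimal Python sketch of how such a value decomposes; the helper name here is our own, not part of any library:

```python
def parse_geo_string(value: str) -> dict:
    """Split an Elasticsearch-style "lat,lon" string into its parts.

    The string form puts latitude first; note that the array form
    of geo_point ([lon, lat]) uses the opposite order.
    """
    lat_text, lon_text = value.split(",")
    return {"lat": float(lat_text), "lon": float(lon_text)}

point = parse_geo_string("-38.122, 145.025")
# point == {"lat": -38.122, "lon": 145.025}
```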

When this field appears in Elasticsearch, it does not come up as a geo_point.

I know that it works if I apply the mapping below.

{
  "mappings": {
    "logs": {
      "properties": {
        "http_request.locationCoordinates": {
          "type": "geo_point"
        }
      }
    }
  }
}
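For reference, one way to apply that mapping by hand is to create the index with it before any documents arrive, since a field's type cannot be changed once documents using it have been indexed. A sketch with a made-up index name; adjust the host and date suffix to your setup:

```shell
# Hypothetical daily index name; the mapping must exist before indexing.
curl -XPUT 'http://elasticsearch:9200/logstash-2016.01.01' \
  -H 'Content-Type: application/json' -d '{
  "mappings": {
    "logs": {
      "properties": {
        "http_request.locationCoordinates": { "type": "geo_point" }
      }
    }
  }
}'
```

This only covers a single index, which is why a template-based approach is preferable for rolling logstash-* indices.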

But what I would like to know is how I can change my logstash.conf file so that it does this at startup.

At the moment my logstash.conf looks a bit like this...

input {

    # Default GELF input
    gelf {
        port => 12201
        type => gelf
    }

    # Default TCP input
    tcp {
        port => 5000
        type => syslog
    }

    # Default UDP input
    udp {
        port => 5001
        type => prod
        codec => json
    }
    file {
       path =>  [ "/tmp/app-logs/*.log" ]
       codec =>   json {
          charset => "UTF-8"
       }
       start_position => "beginning"
       sincedb_path => "/dev/null"
   }
}

filter {
   json {
      source => "message"
   }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
    }
}

And I end up with this in Kibana (without the little Geo sign).

Solution

You simply need to modify your elasticsearch output to configure an index template in which you can add your additional mapping.

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        template_overwrite => true
        template => "/path/to/template.json"
    }
}

And then in the file at /path/to/template.json you can add your additional geo_point mapping

{
  "template": "logstash-*",
  "mappings": {
    "logs": {
      "properties": {
        "http_request.locationCoordinates": {
          "type": "geo_point"
        }
      }
    }
  }
}

If you want to keep the official logstash template, you can download it and add your specific geo_point mapping to it.
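That last step can be sketched in Python: load the downloaded official template, graft the extra geo_point property into it, and save the result as your template.json. The helper function and the stand-in template below are illustrative, not part of Logstash:

```python
import json

def add_geo_point(template: dict, field: str, doc_type: str = "logs") -> dict:
    """Add a geo_point mapping for `field` under the given mapping type."""
    props = (template.setdefault("mappings", {})
                     .setdefault(doc_type, {})
                     .setdefault("properties", {}))
    props[field] = {"type": "geo_point"}
    return template

# Stand-in for the downloaded official logstash template.
official = {"template": "logstash-*", "mappings": {}}
merged = add_geo_point(official, "http_request.locationCoordinates")
print(json.dumps(merged, indent=2))
```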
