With Logstash, how do you combine lines based on matching same timestamp on every line?

Question

MarkLogic log files denote multiline messages by printing every line of the message with the same timestamp, down to the millisecond. You can see that in this snippet:

2014-05-09 18:47:33.844 Info: Dev_Xdbc: [/file-store/get-file.xqy] url=/2.16.840.1.113883.3.21/d3c787ef-2244-48e4-a8a9-82ae5e7ad47a-296 tenant=perform001
2014-05-09 18:47:34.101 Notice: Dev_Xdbc: XDMP-MODNOTFOUND: (err:XQST0059) Module /2.6.0/utils/no-fn.xqy not found
2014-05-09 18:47:34.101 Notice: Dev_Xdbc:  [1.0-ml]

So here the two lines starting with 2014-05-09 18:47:34.101 are part of the same log message.
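
The shared prefix is an ISO-8601-style timestamp, so it can be matched with the stock TIMESTAMP_ISO8601 grok pattern that ships with Logstash (the logtime capture name here is just an example):

^%{TIMESTAMP_ISO8601:logtime}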

I'm not sure how to tell Logstash that lines starting with the same string are to be grouped together. The multiline codec/filter does not seem to support combining lines in this way. Is there a way to do this?

Answer

This seemed like an interesting problem, so I wrote a plugin for it, based loosely on the multiline plugin, but there are some complications: a bug in the core Logstash code also needs to be fixed for the plugin to work (see the lib/event.rb change below).

Extract the following into lib/filters/related.rb

# encoding: utf-8
require "logstash/filters/base"
require "logstash/namespace"
require "set"
#
# This filter will collapse multiline messages from a single source into one Logstash event.
# if they are related based on a pattern defined.  Two events are related if the pattern
# match returns the same thing for consecutive events.
# 
# The config looks like this:
#
#     filter {
#       related {
#         type => "type"
#         pattern => "^%{TIMESTAMP:time}"
#         capture => "TIMESTAMP:time"
#       }
#     }
# 
# The `pattern` is the pattern that is used to match the lines
# The `capture` is the named capture that has to match between the lines
#
class LogStash::Filters::Related < LogStash::Filters::Base

  config_name "related"
  milestone 1

  # The regular expression to match.
  config :pattern, :validate => :string, :required => true
  config :capture, :validate => :string, :required => true

  # The stream identity is how the multiline filter determines which stream an
  # event belongs to. This is generally used for differentiating, say, events
  # coming from multiple files in the same file input, or multiple connections
  # coming from a tcp input.
  #
  # The default value here is usually what you want, but there are some cases
  # where you want to change it. One such example is if you are using a tcp
  # input with only one client connecting at any time. If that client
  # reconnects (due to error or client restart), then logstash will identify
  # the new connection as a new stream and break any multiline goodness that
  # may have occurred between the old and new connection. To solve this use
  # case, you can use "%{@source_host}.%{@type}" instead.
  config :stream_identity , :validate => :string, :default => "%{host}.%{path}.%{type}"

  # Logstash ships by default with a bunch of patterns, so you don't
  # necessarily need to define this yourself unless you are adding additional
  # patterns.
  #
  # Pattern files are plain text with format:
  #
  #     NAME PATTERN
  #
  # For example:
  #
  #     NUMBER \d+
  config :patterns_dir, :validate => :array, :default => []

  # Detect if we are running from a jarfile, pick the right path.
  @@patterns_path = Set.new
  if __FILE__ =~ /file:\/.*\.jar!.*/
    @@patterns_path += ["#{File.dirname(__FILE__)}/../../patterns/*"]
  else
    @@patterns_path += ["#{File.dirname(__FILE__)}/../../../patterns/*"]
  end

  public
  def initialize(config = {})
    super

    @threadsafe = false

    # This filter needs to keep state.
    @types = Hash.new { |h,k| h[k] = [] }
    @pending_unmatched = Hash.new
    @pending = Hash.new
    @previous = Hash.new
  end # def initialize

  public
  def register
    require "grok-pure" # rubygem 'jls-grok'

    @grok = Grok.new

    @patterns_dir = @@patterns_path.to_a + @patterns_dir
    @patterns_dir.each do |path|
      # Can't read relative paths from jars, try to normalize away '../'
      while path =~ /file:\/.*\.jar!.*\/\.\.\//
        # replace /foo/bar/../baz => /foo/baz
        path = path.gsub(/[^\/]+\/\.\.\//, "")
      end

      if File.directory?(path)
        path = File.join(path, "*")
      end

      Dir.glob(path).each do |file|
        @logger.info("Grok loading patterns from file", :path => file)
        @grok.add_patterns_from_file(file)
      end
    end

    @grok.compile(@pattern)

    @logger.debug("Registered multiline plugin", :type => @type, :config => @config)
  end # def register

  public
  def filter(event)
    key = event.sprintf(@stream_identity)

    # If there's anything pending unmatched, we need to push it out
    # and then push the current event back onto the filter queue (yield).
    if @pending_unmatched[key]
      @logger.info("Related", :unmatched => key)
      clone = event.clone
      event.overwrite(@pending_unmatched[key])
      @pending_unmatched.delete(key)
      yield clone
      return
    end
    return unless filter?(event)

    if event["message"].is_a?(Array)
      match = @grok.match(event["message"].first)
    else
      match = @grok.match(event["message"])
    end
    pending = @pending[key]

    @logger.debug("Related", :pattern => @pattern, :message => event["message"],
                  :match => match, :capture => @capture)

    if !match
      if pending
        # The new line doesn't match the pattern at all: flush the pending
        # event and hold the unmatched event until the next time through.
        @pending_unmatched[key] = event.clone
        event.overwrite(pending)
        @pending.delete(key)
        collapse_event!(event)
        filter_matched(event)
      end
      return
    end

    # From here on, the event matched the pattern.
    if pending
      if match.captures[@capture] == @previous[key]
        # Same capture value as the previous line: merge into the pending event.
        pending.append(event)
        pending.tag "related"
        event.cancel
      else
        # Capture value changed: flush the pending event and start a new group.
        @pending[key] = event.clone
        @previous[key] = match.captures[@capture]
        event.overwrite(pending)
        collapse_event!(event)
        filter_matched(event)
      end
    else
      # First line of a potential group: hold it until we see what follows.
      @pending[key] = event
      @previous[key] = match.captures[@capture]
      event.cancel
    end
  end # def filter

  # Flush any pending messages. This is generally used for unit testing only.
  #
  # Note: flush is disabled now; it is preferable to use the multiline codec.
  public
  def __flush
    events = []
    @pending.each do |key, value|
      value.uncancel
      events << value
    end
    @pending.clear
    return events
  end # def __flush

  def collapse_event!(event)
    event["message"] = event["message"].join("\n") if event["message"].is_a?(Array)
    event["@timestamp"] = event["@timestamp"].first if event["@timestamp"].is_a?(Array)
    event
  end
end # class LogStash::Filters::Related

And then modify lib/event.rb to add the line indicated by the comment below:

public
def overwrite(event)
  @data = event.to_hash
  #convert timestamp if it is a String
  if @data[TIMESTAMP].is_a?(String)
    @data[TIMESTAMP] = LogStash::Time.parse_iso8601(@data[TIMESTAMP])
  end
  # Add the line below. It is needed because when data["message"] is
  # overwritten with an array, the accessors must be regenerated; otherwise
  # they keep pointing at the message from before the overwrite.
  @accessors = LogStash::Util::Accessors.new(@data)
end
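
With both pieces in place, the filter can be wired into a pipeline like any other filter. Here is a minimal sketch (the input path is illustrative and the logtime capture name is my own choice; note that, per the plugin's convention, capture names the grok capture as PATTERN:name):

input {
  file {
    # Illustrative path; point this at your actual MarkLogic log file.
    path => "/var/opt/MarkLogic/Logs/ErrorLog.txt"
    type => "marklogic"
  }
}

filter {
  related {
    pattern => "^%{TIMESTAMP_ISO8601:logtime}"
    capture => "TIMESTAMP_ISO8601:logtime"
  }
}

output {
  stdout { codec => rubydebug }
}

With this, consecutive lines whose timestamps match are collapsed into a single event whose message lines are joined with newlines, and the merged event is tagged "related".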
