Buffer records in GNU awk

Problem description

I'm trying to make a script that will live watch some logs. I have a log of requests with each request formatted something like this,

---
id=273482
Result=Error
---
id=342345
Result=Success
---

And a second log where each line has the id of the request. I need to live watch the request log and cross-reference it with the second log.
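
The second log itself isn't shown in the question; purely for illustration, assume each of its lines mentions a request id somewhere, e.g.:

273482 <details of an event for this request>
342345 <details of another event>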

 tail -f requestLog | awk 'BEGIN { RS = "---" } /Error/' | grep --line-buffered id | sed -u ...

I tail -f the requestLog, use awk to split records on the "---", and then grep out the id lines. Then later I pass all that to sed -u to extract the id and xargs to go grep the second log for lines that were related to the bad requests.
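
Written out in full, the pipeline might look like the sketch below. The sed expression, the xargs step, and the second log's name (secondLog) are guesses for illustration, since the question elides them:

tail -f requestLog \
  | awk 'BEGIN { RS = "---" } /Error/' \
  | grep --line-buffered '^id=' \
  | sed -u 's/^id=//' \
  | xargs -I {} grep -F {} secondLog   # one grep per id, as each id arrives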

The problem is the results are coming out really delayed because something (I think awk) is buffering wrong. How can I make awk read the input nonstop and flush the output every time it sees a bad record? I'm using GNU awk, by the way.

Recommended answer

GNU awk has a fflush() you can use to flush buffers:

.. | awk 'BEGIN { RS = "---" } /Error/ { print; fflush(); }' | ..

With this you've line-buffered all the stages in the pipeline.
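
To see the difference (a hypothetical test, assuming a bash-like shell and GNU tools), feed awk one record per second: with the fflush() each match appears immediately, while without it nothing shows up until awk's stdout buffer fills.

while sleep 1; do printf -- '---\nid=%s\nResult=Error\n' "$RANDOM"; done \
  | awk 'BEGIN { RS = "---" } /Error/ { print; fflush() }' \
  | cat   # the trailing cat makes awk write to a pipe, as in the real pipeline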

If in the future you have any other programs in the pipeline that don't support something like awk's fflush(), grep --line-buffered, or sed -u, GNU coreutils has a more general stdbuf you can use with any program:

.. | stdbuf -o 0 any_other_command | ..
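
Here -o 0 disables output buffering entirely; stdbuf -oL selects line buffering instead, which is usually enough. For example, a hypothetical stand-in for the sed -u stage:

.. | stdbuf -oL sed -n 's/^id=//p' | ..

Note that stdbuf works by preloading a small library that adjusts stdio buffering, so it can't affect programs that explicitly set their own buffering, or statically linked binaries.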
