Processing log files using bash tools


Question


I have a .csv file with approx 10 columns that is logging data. I want to use something like this:

How to get the first line of a file in a bash script?


Where it is grabbing the first line of each file and then processing the logs. However, once the line is processed, I want to mark it as processed (can be as simple as adding a new column on the end with a *** or something). So basically, I want to grab the first line not processed, process it, and move to the next unprocessed line, etc.
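This "find the first unprocessed line, process it, mark it" step can be sketched with grep and sed. This is a minimal, hypothetical sketch: the file name `data.csv`, the sample data, and the choice of `,***` as the end-of-line marker are assumptions, and `sed -i` as used here is the GNU form (BSD sed needs `-i ''`).

```shell
#!/bin/sh
# Sketch: process the first unmarked line of a CSV log, then mark it.
# Assumes processed lines end in a literal ",***" and that real data
# fields never contain that suffix themselves.

# Tiny sample log standing in for the real CSV (assumption)
printf 'a,1\nb,2\nc,3\n' > data.csv

# Line number of the first line that does NOT end in ,***
n=$(grep -n -v ',\*\*\*$' data.csv | head -n 1 | cut -d: -f1)

if [ -n "$n" ]; then
    line=$(sed -n "${n}p" data.csv)
    echo "processing: $line"            # replace with real processing
    sed -i "${n}s/\$/,***/" data.csv    # append the marker (GNU sed -i)
fi
```

Run in a loop, each pass picks up the next unmarked line, so the file itself records how far processing has gotten.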

I need to do this using sed, awk, grep, and/or other standard tools. The bash script will sit and run in the background in an infinite while loop. Essentially, I am trying to read and process this log file in real time, but need the log for history.

Edit:

Also, I need this to mark the lines that have been read in the file. That way if the server stops, I can know right where to pick up processing. So tail will work if I can figure out a way to do that.
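An alternative to marking lines in the log itself is to keep a small state file counting how many lines have been processed, then skip that many with `tail -n +K` on restart. A minimal sketch, assuming the file names (`app.log`, `processed.count`) and sample data, which are made up for illustration:

```shell
#!/bin/sh
# Sketch: remember how many lines were processed so a restart can resume.
# Assumes the state file survives the restart.

printf 'one\ntwo\nthree\n' > app.log   # sample data (assumption)
echo 1 > processed.count               # pretend one line was already done

done_lines=$(cat processed.count 2>/dev/null || echo 0)

# Resume from the first unprocessed line
tail -n +"$((done_lines + 1))" app.log | while IFS= read -r line; do
    echo "processing: $line"           # replace with real processing
    done_lines=$((done_lines + 1))
    echo "$done_lines" > processed.count
done
```

Writing the count after every line keeps the crash window to at most one line; the trade-off is one extra write per record.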


Thanks!

Answer

Rather than sitting in an infinite loop you could do this:

tail -n +1 -f your_log_file | some_processing_pipeline


This will start reading your logfile at line 1, then continuously wait for new lines to appear and pass them to some_processing_pipeline.
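Consuming that pipeline is typically a `while read` loop. The sketch below uses a finite input so it terminates; in production you would add `-f` so tail blocks waiting for new lines. The file names and sample data are assumptions:

```shell
#!/bin/sh
# Sketch: reading tail output line by line.
# Without -f so the example runs to completion; add -f for live logs.

printf 'GET /a\nGET /b\n' > access.log   # sample log (assumption)

tail -n +1 access.log | while IFS= read -r line; do
    echo "got: $line"                    # replace with real processing
done > out.txt
```

`IFS= read -r` preserves leading whitespace and backslashes in each log line, which matters for arbitrary CSV data.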

