Parsing Apache Error Logs For Unique Errors


Problem Description


I have some unruly Apache error logs that I would like to parse through and get unique errors.

[Fri Sep 21 06:54:24 2012] [error] [client xxx.xxx.xxx.xxx] PHP Fatal error: <error message>, referer: <URL>


I think I just want to chop the lines at the "PHP Fatal" section, discarding the first half and running the second half through uniq. My goal is to identify all the errors, but there are too many lines to look through each one manually, due to the many duplicate errors.
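A minimal sketch of that chop-and-deduplicate idea, assuming the log file is named error.log (the filename is an assumption):

grep -o 'PHP Fatal error.*$' error.log | sort | uniq

grep -o prints only the part of each line that matches the pattern, which effectively discards everything before "PHP Fatal error"; sort and uniq then collapse the duplicates.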


What is the best way to accomplish this?

Answer

Try:

grep -o '\[error\].*$' file | sort | uniq


This will show only the text which matches the regex (rather than the whole of a line which contains the match).


Then sort puts similar entries next to each other, so that uniq can ensure there are no duplicates.
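A tiny self-contained demonstration of the pipeline on made-up input (the sample messages are hypothetical):

printf '[error] disk full\n[error] timeout\n[error] disk full\n' | grep -o '\[error\].*$' | sort | uniq

This prints the two distinct errors:

[error] disk full
[error] timeout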


If you want to remove the client bit before sorting/uniq'ing, use:

grep -o '\[error\].*$' file | sed 's/\[client.*\?\]//' | sort | uniq
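Since the goal is to identify errors that recur many times, a natural extension (not part of the original answer) is to replace uniq with uniq -c, which prefixes each unique line with its count, and then sort numerically to rank the noisiest errors first:

grep -o '\[error\].*$' file | sed 's/\[client [^]]*\] //' | sort | uniq -c | sort -rn

The sed expression here uses [^]]* instead of the non-greedy .*\? above, since non-greedy quantifiers are not supported in sed's basic regular expressions; both are meant to strip the bracketed client address.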
