Make log reading efficient


Question

I have a Perl script that is used to monitor databases, and I'm trying to rewrite it as a PowerShell script.

In the Perl script there is a function that reads through the errorlog, filters out what matters, and returns it. It also saves the current position of the log file, so that the next time it has to read the log it can start where it left off instead of reading the whole log again. This is done using Perl's tell function.

I have an idea to use the Get-Content cmdlet: start reading at the last position, process each line until the end of the file, and then save the position.

Do you know any tricks so that I can get the position in the log file after reading, and make the next read start at that particular location?

Or is there a better and/or easier way to achieve this?

Gísli

EDIT: This has to be done through the script and not with some other tool.

EDIT: So I'm getting somewhere with the .NET API, but it's not quite working for me. I found helpful links here (http://stackoverflow.com/questions/4192072/how-to-process-a-file-in-powershell-line-by-line-as-a-stream) and here (http://stackoverflow.com/questions/1262965/c-how-do-i-read-a-specified-line-in-a-text-file).

Here is what I have so far:

function check_logs {
    param($logs, $logpos)
    $count = 1
    $path = $logs.file
    $br = 0
    $reader = New-Object System.IO.StreamReader("$path")
    $reader.DiscardBufferedData()
    $reader.BaseStream.Seek(5270, [System.IO.SeekOrigin]::Begin)
    for (;;) {
        $line = $reader.ReadLine()
        if ($line -ne $null) { $br = $br + [System.Text.Encoding]::UTF8.GetByteCount($line) }
        if ($line -eq $null -and $count -eq 0) { break }
        if ($line -eq $null) { $count = 0 }
        elseif ($line.Contains('Error:')) {
            $l = $line.split(',')
            Write-Host "$line  $br"
        }
    }
}

I haven't found a way to use the seek function correctly. Can someone point me in the right direction?

If I run this it outputs 5270, but if I run it without the line where I try to seek in the base stream, I get:

2011-08-12 08:49:36.51 Logon       Error: 18456, Severity: 14, State: 38.  5029
2011-08-12 08:49:37.30 Logon       Error: 18456, Severity: 14, State: 38.  5270
2011-08-12 16:11:46.58 spid18s     Error: 1474, Severity: 16, State: 1.  7342
2011-08-12 16:11:46.68 spid18s     Error: 17054, Severity: 16, State: 1.  7634
2011-08-12 16:11:46.69 spid29s     Error: 1474, Severity: 16, State: 1.  7894

The first part of each line is the line read from the log, and the number at the end represents the number of bytes read up to that point. So as you can see, I'm now trying to use the seek function to skip past the first error line, but as I said earlier, the output is just 5270 when I use seek.

What am I missing?

Gísli

Answer

You would probably be able to do this with some .NET objects, etc...
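For the .NET route, a minimal sketch (assuming a plain UTF-8 errorlog; the paths are hypothetical, not from the question): the key details are to Seek on the BaseStream and only then call DiscardBufferedData, and to read all the way to end-of-file so the stream position is a valid resume point.

```powershell
# Sketch only: $logPath and $posFile are hypothetical names.
$logPath = 'C:\logs\ERRORLOG'
$posFile = 'C:\logs\ERRORLOG.pos'

# Restore the saved byte offset, if any
$pos = 0
if (Test-Path $posFile) { $pos = [long](Get-Content $posFile) }

$reader = New-Object System.IO.StreamReader($logPath)
[void]$reader.BaseStream.Seek($pos, [System.IO.SeekOrigin]::Begin)
$reader.DiscardBufferedData()   # must come AFTER the Seek, or buffered data is reused

while (($line = $reader.ReadLine()) -ne $null) {
    if ($line.Contains('Error:')) { Write-Host $line }
}

# Because we read to EOF, BaseStream.Position now equals the file length,
# which is exactly where the next run should resume.
$reader.BaseStream.Position | Set-Content $posFile
$reader.Close()
```

Note that BaseStream.Position is only trustworthy here because the loop runs to end-of-file; mid-file, the StreamReader's internal buffer makes it run ahead of the last line actually consumed.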

If it's a log file in a more standard format, though, I wouldn't look much past LogParser. It was awesome before its time and is still awesome!

You can use it from the command line or via COM from PowerShell. It has the ability to mark where it was in a file and pick up from there (it stores that information in an .lpc checkpoint file).
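As a rough illustration of the command-line route (the paths and query are hypothetical; this assumes LogParser 2.2's TEXTLINE input format and its -iCheckPoint parameter, which persists the file position in the .lpc file between runs):

```powershell
# Hypothetical paths; LogParser.exe must be installed and on the PATH.
# -iCheckPoint makes LogParser resume from where the previous run stopped.
& LogParser.exe -i:TEXTLINE -iCheckPoint:C:\logs\errorlog.lpc `
    "SELECT Text FROM 'C:\logs\ERRORLOG' WHERE Text LIKE '%Error:%'"
```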

Maybe someone will come up with a good way of doing this, but if not, you could also look at switching to writing the error information to the event log. You can store the last event ID or the last time you searched the event log and check from there each time.
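If the errors did go to the Windows event log, the check could be as simple as the following sketch (the log name, source, and checkpoint path are assumptions, not from the question):

```powershell
# Sketch: assumes errors are written by source 'MSSQLSERVER' to the Application log,
# and that the time of the previous check was saved with Export-Clixml.
$checkpoint = 'C:\logs\lastcheck.xml'
$lastCheck = if (Test-Path $checkpoint) { Import-Clixml $checkpoint } else { (Get-Date).AddDays(-1) }

# Only events newer than the previous check are returned
Get-EventLog -LogName Application -Source MSSQLSERVER -EntryType Error -After $lastCheck

# Record this run's time for the next check
Get-Date | Export-Clixml $checkpoint
```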

Hopefully there is something better...

EDIT:

If the file is tab-delimited, you can use the Import-Csv cmdlet and store the last record number (it'd be either Count or Count-1, depending on whether the header is included in the count). With that last number you can jump to the last point in the file:

# use Import-CliXml to get the $last_count
$last_count = Import-CliXml $path_to_last_count_xml
$file = Import-Csv filename -Delimiter "`t"
for ($i = $last_count; $i -lt $file.Count; $i++) {
    $line = $file[$i]
    # do something with $line
    ...
}
$file.Count | Export-CliXml $path_to_last_count_xml

# use this to clear the memory
Remove-Variable file
[GC]::Collect()

Or you could query the database directly using sp_readerrorlog, using the last timestamp in the same way as the last count above.
