How do you have shared log files under Windows?


Question

I have several different processes and I would like them to all log to the same file. These processes are running on a Windows 7 system. Some are Python scripts and others are cmd batch files.

Under Unix you'd just have everybody open the file in append mode and write away. As long as each process wrote less than PIPE_BUF bytes in a single message, each write call would be guaranteed to not interleave with any other.
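
As an illustration of that Unix-side approach in Python, here is a minimal sketch; the shared.log path and the log helper name are assumptions for the example, not something from the question.

import os

LOG_PATH = "shared.log"  # assumed log path, for illustration only

def log(message: str) -> None:
    # O_APPEND makes the kernel position every write at end-of-file, so a
    # short message written with a single write() call does not interleave
    # with writes from other processes.
    data = (message.rstrip("\n") + "\n").encode()
    fd = os.open(LOG_PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)  # one write() per message
    finally:
        os.close(fd)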

Is there a way to make this happen under Windows? The naive Unix-like approach fails because Windows doesn't like more than one process having a file open for writing at a time by default.

Answer

It is possible to have multiple batch processes safely write to a single log file. I know nothing about Python, but I imagine the concepts in this answer could be integrated with Python.

Windows allows at most one process to have a specific file open for write access at any point in time. This can be used to implement a file based lock mechanism that guarantees events are serialized across multiple processes. See http://stackoverflow.com/a/9048097/1012053 and http://www.dostips.com/forum/viewtopic.php?p=12454 for some examples.
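
The linked examples build the lock out of cmd redirection. As a rough Python sketch of the same file based lock idea, assuming a hypothetical lock-file name myLog.lock, exclusive creation of a lock file can serialize a critical section across processes (this is an analogous variant, not the exact mechanism from the links):

import os
import time

LOCK_PATH = "myLog.lock"  # hypothetical lock-file name

def with_file_lock(action):
    # Acquire: O_EXCL makes creation fail while another process holds the lock.
    while True:
        try:
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            time.sleep(0.05)  # lock held elsewhere; retry shortly
    try:
        action()  # critical section: runs in at most one process at a time
    finally:
        os.close(fd)
        os.remove(LOCK_PATH)  # release the lock

One drawback of a separate lock file is that a crashed holder leaves it behind, which is one reason the approach below uses the log file itself as the lock.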

Since all you are trying to do is write to a log, you can use the log file itself as the lock. The log operation is encapsulated in a subroutine that tries to open the log file in append mode. If the open fails, the routine loops back and tries again. Once the open is successful the log is written and then closed, and the routine returns to the caller. The routine executes whatever command is passed to it, and anything written to stdout within the routine is redirected to the log.
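
The batch script below implements this retry loop with cmd redirection. If the Python processes from the question need the same behavior, note that Python's built-in open() on Windows is generally permissive about file sharing, so a plain append-mode open will not necessarily fail while another process has the log open. One possible workaround, sketched here under that assumption (the helper name and retry interval are made up for the example), is to serialize writers with a byte-range lock from msvcrt.locking:

import msvcrt
import os
import time

def append_log(path: str, message: str) -> None:
    # Append mode alone does not exclude other writers on Windows, so take a
    # byte-range lock on the first byte of the file to serialize access.
    with open(path, "a") as f:
        while True:
            try:
                f.seek(0)
                msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1)  # non-blocking lock
                break
            except OSError:
                time.sleep(0.05)  # another process holds the lock; retry
        try:
            f.seek(0, os.SEEK_END)
            f.write(message + "\n")
            f.flush()  # push the data out while the lock is still held
        finally:
            f.seek(0)
            msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)  # release the lock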

Here is a test batch script that creates 5 child processes that each write to the log file 20 times. The writes are safely interleaved.

@echo off
setlocal
if "%~1" neq "" goto :test

:: Initialize
set log="myLog.log"
2>nul del %log%
2>nul del "test*.marker"
set procCount=5
set testCount=10

:: Launch %procCount% processes that write to the same log
for /l %%n in (1 1 %procCount%) do start "" /b "%~f0" %%n

:wait for child processes to finish
2>nul dir /b "test*.marker" | find /c "test" | >nul findstr /x "%procCount%" || goto :wait

:: Verify log results
for /l %%n in (1 1 %procCount%) do (
  <nul set /p "=Proc %%n log count = "
  find /c "Proc %%n: " <%log%
)

:: Cleanup
del "test*.marker"
exit /b

==============================================================================
:: code below is the process that writes to the log file

:test
set instance=%1
for /l %%n in (1 1 %testCount%) do (
  call :log echo Proc %instance% says hello!
  call :log dir "%~f0"
)
echo done >"test%1.marker"
exit

:log command args...
2>nul (
  >>%log% (
    echo ***********************************************************
    echo Proc %instance%: %date% %time%
    %*
    (call ) %= This odd syntax guarantees the inner block ends with success  =%
            %= We only want to loop back and try again if redirection failed =%
  )
) || goto :log
exit /b

Here is the output that demonstrates that all 20 writes were successful for each process

Proc 1 log count = 20
Proc 2 log count = 20
Proc 3 log count = 20
Proc 4 log count = 20
Proc 5 log count = 20

You can open the resulting "myLog.log" file to see how the writes have been safely interleaved. But the output is too large to post here.

It is easy to demonstrate that simultaneous writes from multiple processes can fail by modifying the :log routine so that it does not retry upon failure.

:log command args...
>>%log% (
  echo ***********************************************************
  echo Proc %instance%: %date% %time%
  %*
)
exit /b

Here are some sample results after "breaking" the :log routine

The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
Proc 1 log count = 12
Proc 2 log count = 16
Proc 3 log count = 13
Proc 4 log count = 18
Proc 5 log count = 14
