How do you have shared log files under Windows?


Question


I have several different processes and I would like them to all log to the same file. These processes are running on a Windows 7 system. Some are python scripts and others are cmd batch files.

Under Unix you'd just have everybody open the file in append mode and write away. As long as each process wrote less than PIPE_BUF bytes in a single message, each write call would be guaranteed to not interleave with any other.

Is there a way to make this happen under Windows? The naive Unix-like approach fails because Windows doesn't like more than one process having a file open for writing at a time by default.
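For reference, the Unix approach described above can be sketched in Python; the helper name is mine, but `O_APPEND` is what makes each small write land atomically at the end of the file on POSIX systems:

```python
import os

def unix_log(path, message):
    """Append one line to a shared log. With O_APPEND, each write()
    is positioned at the current end of file atomically (POSIX), so
    small writes from several processes do not interleave."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, (message + "\n").encode())
    finally:
        os.close(fd)

unix_log("myLog.log", "Proc 1 says hello!")
```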

Solution

It is possible to have multiple batch processes safely write to a single log file. I know nothing about Python, but I imagine the concepts in this answer could be integrated with Python.

Windows allows at most one process to have a specific file open for write access at any point in time. This can be used to implement a file based lock mechanism that guarantees events are serialized across multiple processes. See https://stackoverflow.com/a/9048097/1012053 and http://www.dostips.com/forum/viewtopic.php?p=12454 for some examples.
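The same file-based lock idea can be sketched in Python. This is a minimal, portable illustration rather than the linked examples: the lock-file name and helper are hypothetical, and `O_CREAT | O_EXCL` fails atomically if the file already exists, on Windows and Unix alike:

```python
import os
import time

LOCKFILE = "myLog.lock"  # hypothetical lock-file name

def with_file_lock(action, timeout=10.0):
    """Serialize `action` across processes: whichever process creates
    the lock file first wins; everyone else spins until it is removed."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # O_EXCL makes creation atomic: it fails if the file exists
            fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break  # lock acquired
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire lock")
            time.sleep(0.01)  # another process holds the lock; retry
    try:
        return action()
    finally:
        os.close(fd)
        os.remove(LOCKFILE)  # release the lock

result = with_file_lock(lambda: "critical section ran")
```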

Since all you are trying to do is write to a log, you can use the log file itself as the lock. The log operation is encapsulated in a subroutine that tries to open the log file in append mode. If the open fails, the routine loops back and tries again. Once the open is successful the log is written and then closed, and the routine returns to the caller. The routine executes whatever command is passed to it, and anything written to stdout within the routine is redirected to the log.
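A Python version of the same retry-until-open-succeeds loop might look like this (a sketch, not part of the original answer). Note that a plain Python append-mode open only raises `PermissionError` when some other process holds the file without write sharing, as a cmd `>>` redirection does:

```python
import time

def log(logfile, message, retries=1000):
    """Append to the shared log, retrying while another process holds
    the file open without write sharing (e.g. a cmd `>>` redirection).
    Mirrors the batch :log routine: open, write, close, or loop back."""
    for _ in range(retries):
        try:
            with open(logfile, "a") as f:  # open + write + close in one shot
                f.write(message + "\n")
            return
        except PermissionError:  # file busy -> loop back and try again
            time.sleep(0.001)
    raise OSError("giving up: log file stayed locked")

log("myLog.log", "Proc 1: hello from Python")
```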

Here is a test batch script that creates 5 child processes that each write to the log file 20 times. The writes are safely interleaved.

@echo off
setlocal
if "%~1" neq "" goto :test

:: Initialize
set log="myLog.log"
2>nul del %log%
2>nul del "test*.marker"
set procCount=5
set testCount=10

:: Launch %procCount% processes that write to the same log
for /l %%n in (1 1 %procCount%) do start "" /b "%~f0" %%n

:wait for child processes to finish
2>nul dir /b "test*.marker" | find /c "test" | >nul findstr /x "%procCount%" || goto :wait

:: Verify log results
for /l %%n in (1 1 %procCount%) do (
  <nul set /p "=Proc %%n log count = "
  find /c "Proc %%n: " <%log%
)

:: Cleanup
del "test*.marker"
exit /b

==============================================================================
:: code below is the process that writes to the log file

:test
set instance=%1
for /l %%n in (1 1 %testCount%) do (
  call :log echo Proc %instance% says hello!
  call :log dir "%~f0"
)
echo done >"test%1.marker"
exit

:log command args...
2>nul (
  >>%log% (
    echo ***********************************************************
    echo Proc %instance%: %date% %time%
    %*
    (call ) %= This odd syntax guarantees the inner block ends with success  =%
            %= We only want to loop back and try again if redirection failed =%
  )
) || goto :log
exit /b

Here is the output, demonstrating that all 20 writes were successful for each process:

Proc 1 log count = 20
Proc 2 log count = 20
Proc 3 log count = 20
Proc 4 log count = 20
Proc 5 log count = 20

You can open the resulting "myLog.log" file to see how the writes have been safely interleaved. But the output is too large to post here.

It is easy to demonstrate that simultaneous writes from multiple processes can fail by modifying the :log routine so that it does not retry upon failure.

:log command args...
>>%log% (
  echo ***********************************************************
  echo Proc %instance%: %date% %time%
  %*
)
exit /b

Here are some sample results after "breaking" the :log routine:

The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
The process cannot access the file because it is being used by another process.
Proc 1 log count = 12
Proc 2 log count = 16
Proc 3 log count = 13
Proc 4 log count = 18
Proc 5 log count = 14
