unix: can I write to the same file in parallel without missing entries?

Problem Description

I wrote a script that executes commands in parallel, and I let them all write entries to the same log file. It does not matter if the order is wrong or the entries are interleaved, but I noticed that some entries are missing. I should probably lock the file before writing; however, is it true that if multiple processes try to write to a file simultaneously, entries can go missing?

Recommended Answer

Yes, if different processes independently open and write to the same file, the writes can overlap and data can go missing. This happens because each process gets its own file pointer, which advances only on its own writes.
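To make the failure mode concrete, here is a minimal C sketch (the file name log.txt and the entry strings are purely illustrative): parent and child each call open() themselves, so each gets an independent file description whose offset starts at 0, and one entry clobbers the other.

```c
/* Sketch: two processes independently open() the same file, so each
 * has its own file offset starting at 0 and the writes overlap. */
#include <fcntl.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    if (fork() == 0) {                       /* child: its own open() */
        int fd = open("log.txt", O_WRONLY | O_CREAT, 0644);
        const char *msg = "entry from child\n";
        write(fd, msg, strlen(msg));
        close(fd);
        _exit(0);
    }
    int fd = open("log.txt", O_WRONLY | O_CREAT, 0644);  /* parent: separate open() */
    const char *msg = "entry from parent\n";
    write(fd, msg, strlen(msg));
    close(fd);
    wait(NULL);
    return 0;  /* log.txt now holds at most one intact entry */
}
```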

Instead of locking, a better option might be to open the log file once in an ancestor of all the worker processes, have the descriptor inherited across fork(), and let every worker use it for logging. That way there is a single shared file pointer, which advances whenever any of the processes writes a new entry.
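Here is a minimal sketch of that approach, again with an illustrative log.txt and an assumed four workers: the descriptor is opened once before fork(), so all the workers share a single file description and a single offset that the kernel advances on each write().

```c
/* Sketch: open the log once in the parent, then fork; all workers
 * share one file description, hence one shared file offset. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd = open("log.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {                   /* worker process */
            char line[64];
            int n = snprintf(line, sizeof line, "entry from worker %d\n", i);
            write(fd, line, n);              /* one write() per log entry */
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                   /* reap all workers */
        ;
    close(fd);
    return 0;                                /* all four entries survive intact */
}
```

Emitting each entry with a single write() call keeps entries whole when they interleave. Opening the file with O_APPEND in every process is another common way to avoid the clobbering, since each write then positions itself at the current end of the file.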
