Sharing data across processes on Linux


Problem description


In my application, a process forks off a child, say child1, and this child process writes a huge binary file to disk and exits. The parent process then forks off another child process, child2, which reads this huge file back in for further processing.


The file dumping and re-loading makes my application slow, and I'm thinking about ways to avoid disk I/O completely. Possible approaches I have identified are a RAM disk or tmpfs. Can I somehow set up a RAM disk or tmpfs from within my application? Or is there any other way to avoid disk I/O completely and pass the data between processes reliably?

Answer


If the two child processes do not run at the same time, pipes or sockets won't work for you: their buffers would be far too small for the 'huge binary file', and the first process would block waiting for something to read the data.


In that case you need some kind of shared memory instead. You can use the SysV IPC shared-memory API, the POSIX shared-memory API (which on recent Linux is internally backed by tmpfs), or use files on a tmpfs file system directly (usually mounted on /dev/shm, sometimes on /tmp).

