Faster forking of large processes on Linux?


Question

What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combination from a large process?

My problem is that the forking process is ~500 MByte big, and a simple benchmarking test achieves only about 50 forks/s from that process (c.f. ~1600 forks/s from a minimally sized process), which is too slow for the intended application.

Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork?

I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).

Accepted Answer

Outcome: I was going to go down the early-spawned-helper-subprocess route suggested by other answers here, but then I came across this approach of using huge page support to improve fork performance.

Having tried it myself, using libhugetlbfs to simply make all my app's mallocs allocate huge pages, I'm now getting around 2400 forks/s regardless of the process size (over the range I'm interested in, anyway). Amazing.

