Faster forking of large processes on Linux?


Question



What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process?

My problem is that the forking process is ~500 MB in size, and a simple benchmarking test achieves only about 50 forks/s from that process (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application.

Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork?

I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).

Solution

Outcome: I was going to go down the early-spawned helper subprocess route as suggested by other answers here, but then I came across this suggestion of using huge page support to improve fork performance.

Having tried it myself using libhugetlbfs to simply make all my app's mallocs allocate huge pages, I'm now getting around 2400 forks/s regardless of the process size (over the range I'm interested in anyway). Amazing.
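The usual way to apply libhugetlbfs without modifying the application is via its documented environment variables; a sketch follows (the huge-page count and app name are placeholders, and reserving huge pages requires root, so treat this as a config fragment):

```shell
# Reserve a pool of huge pages for the system to hand out
# (512 x 2 MB = 1 GB here; the count is a placeholder).
sysctl vm.nr_hugepages=512

# Run the (hypothetical) app with malloc's heap backed by huge pages.
HUGETLB_MORECORE=yes LD_PRELOAD=libhugetlbfs.so ./myapp
```

Huge pages help here because fork's cost is dominated by copying page tables: with 2 MB pages instead of 4 KB pages there are ~500x fewer page-table entries to duplicate, which is consistent with the fork rate becoming roughly independent of process size.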

