Prevent R from using virtual memory on unix/linux?


Problem description

Is there a way to prevent R from ever using any virtual memory on a unix machine? Whenever it happens, it is because I screwed up and I then want to abort the computation.

I am working with big datasets on a powerful computer shared with several other people. Sometimes I set off commands that require more RAM than is available, which causes R to start swapping and eventually freeze the whole machine. Normally I can solve this by setting a ulimit in my ~/.bashrc

ulimit -m 33554432 -v 33554432  # 32 GB RAM of the total 64 GB

which causes R to throw an error and abort when trying to allocate more memory than is available. However, if I make a mistake of this sort when parallelizing (typically using the snow package), the ulimit has no effect and the machine crashes anyway. I guess that is because snow launches the workers as separate processes that are not run in bash. If I instead try to set the ulimit in my ~/.Rprofile I just get an error:

> system("ulimit -m 33554432 -v 33554432")
ulimit: 1: too many arguments

Could someone help me figure out a way to accomplish this?

Why can I not set a ulimit of 0 virtual memory in bash?

$ ulimit -m 33554432 -v 0

If I do, it quickly shuts down.

Recommended answer

When you run system("ulimit"), it executes in a child process. The parent does not inherit the ulimit from the child. (This is analogous to doing system("cd dir") or system("export ENV_VAR=foo").)
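
You can see the same behaviour directly in a shell (this assumes no address-space limit is currently set, so ulimit -v reports unlimited):

$ ulimit -v                       # limit in the parent shell
unlimited
$ bash -c 'ulimit -v 33554432'    # set the limit in a child shell, which then exits
$ ulimit -v                       # the parent shell is unaffected
unlimited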

Setting it in the shell from which you launch the environment is the correct way. The limit is not working in the parallel case most likely because it is a per-process limit, not a global system limit.
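
A minimal sketch of that approach, reusing the 32 GB values from the question (launch-R.sh is a hypothetical wrapper name):

#!/bin/bash
# launch-R.sh - set the limits in this shell, then replace it with R,
# so R and any process it forks or spawns locally inherit them.
ulimit -m 33554432 -v 33554432    # 32 GB of the 64 GB total
exec R --no-save "$@"

Because the limit is per process, each snow worker that inherits it can still use up to 32 GB on its own, and workers started via ssh are subject to whatever limits are in place on the remote side, so a wrapper like this does not by itself cap the combined memory of a parallel job.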

On Linux you can configure strict(er) overcommit accounting, which tries to prevent the kernel from handing out an mmap request that cannot be backed by physical memory.

This is done by tuning the sysctl parameters vm.overcommit_memory and vm.overcommit_ratio. (Google about these.)

This can be an effective way to prevent thrashing situations. But the tradeoff is that you lose the benefit that overcommit provides when things are well-behaved (cramming more/larger processes into memory).

