new[] doesn't decrease available memory until populated


Question

This is C++ on 64-bit CentOS, using G++ 4.1.2.

We're writing a test application to load up the memory usage on a system by n gigabytes. The idea is that the overall system load gets monitored through SNMP etc., so this is just a way of exercising the monitoring.

What we've seen however is that simply doing:

char* p = new char[1000000000];

doesn't affect the memory used as shown in either top or free -m

The memory allocation only seems to become "real" once the memory is written to:

memset(p, 'a', 1000000000);   // shows an increase in mem usage of 1 GB

But we have to write to all of the memory; simply writing to the first element does not show an increase in the used memory:

p[0] = 'a';    //does not show an increase of 1GB.
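
Residency appears to be tracked at page granularity. The following is a minimal sketch, not from the original post, assuming the page size reported by sysconf(_SC_PAGESIZE) (typically 4 KiB on x86-64): touching just one byte in every page should be enough to make the full gigabyte show up in top and free, without filling every byte.

#include <unistd.h>    // sysconf, pause
#include <cstddef>

int main() {
    const std::size_t size = 1000000000;
    char* p = new char[size];

    // Assumption for illustration: the kernel backs the allocation with
    // physical memory one page at a time, on first write to that page.
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));

    for (std::size_t i = 0; i < size; i += page)
        p[i] = 'a';    // one write per page faults it in and makes it "real"

    pause();           // keep the process alive so top / free -m can be checked
    delete[] p;
}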

Is this normal? Has the memory actually been allocated fully? I'm not sure whether the tools we are using (top and free -m) are displaying incorrect values, or whether something clever is going on in the compiler, the runtime, and/or the kernel.

This behavior is seen even in a debug build with optimizations turned off.

It was my understanding that new[] allocated the memory immediately. Does the C++ runtime delay the actual allocation until the memory is accessed? In that case, can an out-of-memory error be deferred until well after the allocation, when the memory is finally accessed?

As it is, this isn't a problem for us, but it would be nice to know why it happens this way!

Cheers!

I don't want to hear about how we should be using vectors, or that this isn't OO / C++ / the current way of doing things, etc. I just want to know why this is happening, rather than suggestions for alternative approaches.

Answer

Look up overcommit. By default, Linux doesn't reserve physical memory until it is accessed. And if you end up needing more memory than is available, you don't get an error; instead a more or less random process is killed. You can control this behavior with /proc/sys/vm/*.
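
For reference, here is a minimal sketch (assuming the standard /proc/sys/vm paths are present on the target system) for checking the current global policy from C++; the value can be changed as root, e.g. with echo 2 > /proc/sys/vm/overcommit_memory.

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // /proc/sys/vm/overcommit_memory holds the global overcommit policy:
    //   0 = heuristic overcommit (the default), 1 = always overcommit,
    //   2 = don't overcommit (commit limit derived from overcommit_ratio and swap)
    std::ifstream in("/proc/sys/vm/overcommit_memory");
    std::string mode;
    if (std::getline(in, mode))
        std::cout << "vm.overcommit_memory = " << mode << '\n';
    else
        std::cout << "could not read /proc/sys/vm/overcommit_memory\n";
    return 0;
}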

IMO, overcommit should be a per-process setting, not a global one, and the default should be no overcommit.
