Why does allocating large chunks of memory fail when reallocing small chunks doesn't

Problem Description

This code results in x pointing to a chunk of memory 100GB in size.

#include <stdlib.h>
#include <stdio.h>

int main() {
    auto x = malloc(1);
    for (int i = 1; i < 1024; ++i) x = realloc(x, i*1024ULL*1024*100); // grow in 100 MiB steps, up to ~100 GiB
    while (true); // Give us time to check top
}

While this code fails to allocate.

#include <stdlib.h>
#include <stdio.h>

int main() {
    auto x = malloc(1024ULL*1024*100*1024);
    printf("%llu\n", x);
    while (true); // Give us time to check top
}

Recommended Answer

My guess is that your system has less memory than the 100 GiB you are trying to allocate. While Linux does overcommit memory, it still bails out of requests that are far beyond what it can fulfill. That is why the second example fails.

The many small increments of the first example, on the other hand, are each far below that threshold. Each of them succeeds because you haven't touched any of the previously allocated memory yet, so the kernel has no indication that it won't be able to back those 100 additional MiB.
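
To make the kernel's perspective concrete, here is a minimal sketch (my own variation on the first program, assuming a Linux system with default overcommit settings): the realloc calls only reserve address space, and it is writing to the pages that forces the kernel to physically back them. With a memset added, the program should be killed by the OOM killer well before reaching 100 GiB on any machine with less RAM plus swap than that:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main() {
    auto x = malloc(1);
    for (int i = 1; i < 1024; ++i) {
        size_t size = i * 1024ULL * 1024 * 100;
        x = realloc(x, size);
        if (!x) { puts("realloc failed"); return 1; }
        // Writing to every page forces the kernel to commit physical memory,
        // unlike the untouched allocations in the original example.
        memset(x, 0, size);
        printf("touched %d * 100 MiB\n", i);
    }
}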

I believe that the threshold at which a memory request from a process fails is relative to the available RAM, and that it can be adjusted (though I don't remember exactly how).
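
For reference, the knob being alluded to here is most likely the Linux vm.overcommit_memory sysctl (documented in proc(5)): 0, the default, applies a heuristic that rejects requests wildly larger than memory plus swap; 1 allows any request; and 2 enforces strict accounting based on vm.overcommit_ratio or vm.overcommit_kbytes. As a quick check (the exact effect is configuration-dependent):

sysctl vm.overcommit_memory             # 0 = heuristic, 1 = always allow, 2 = strict
sudo sysctl -w vm.overcommit_memory=1   # with this set, the second example should succeed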
