Is there a memory overhead associated with heap memory allocations (eg markers in the heap)?


Question




Thinking in particular of C++ on Windows using a recent Visual Studio C++ compiler, I am wondering about the heap implementation:

Assuming that I'm using the release compiler, and I'm not concerned with memory fragmentation / packing issues, is there a memory overhead associated with allocating memory on the heap? If so, roughly how many bytes per allocation might this be? Would it be larger in 64-bit code than 32-bit?

I don't really know a lot about modern heap implementations, but am wondering whether there are markers written into the heap with each allocation, or whether some kind of table is maintained (like a file allocation table).

On a related point (because I'm primarily thinking about standard-library features like 'map'), does the Microsoft standard-library implementation ever use its own allocator (for things like tree nodes) in order to optimise heap usage?

Solution

Yes, absolutely.

Every block of memory allocated will have a constant overhead of a "header", as well as a small variable part (typically at the end). Exactly how much that is depends on the exact C runtime library used. In the past, I've experimentally found it to be around 32-64 bytes per allocation. The variable part is to cope with alignment - each block of memory will be aligned to some nice even 2^n base-address - typically 8 or 16 bytes.

I'm not familiar with how the internal design of std::map or similar works, but I very much doubt they have special optimisations there.

You can quite easily test the overhead by:

#include <cstddef>
#include <iostream>

int main() {
    char *a, *b;
    a = new char;
    b = new char;

    std::ptrdiff_t diff = a - b;

    // Cast to void* so cout prints the addresses rather than
    // treating char* as a C string.
    std::cout << "a=" << (void*)a << " b=" << (void*)b
              << " diff=" << diff << "\n";
    return 0;
}

[Note to the pedants, which is probably most of the regulars here: the above a-b expression invokes undefined behaviour, since subtracting the address of one allocation from the address of another is undefined behaviour. This rule exists to cope with machines that don't have linear memory addresses, e.g. segmented memory, or ones where different types of data are stored in different locations based on their type. The above should definitely work on any x86-based OS that doesn't use a segmented memory model with multiple data segments for the heap - which means it works for Windows and Linux in 32- and 64-bit mode for sure.]

You may want to run it with varying types - just bear in mind that the diff is in units of the type, so if you make it int *a, *b, the result will be in "four-byte units". You can get the difference in bytes with reinterpret_cast<char*>(a) - reinterpret_cast<char*>(b);

[diff may be negative, and if you run this in a loop (without deleting a and b), you may find sudden jumps where one large section of memory is exhausted and the runtime library allocates another large block]

