What the heque is going on with the memory overhead of std::deque?
Question
我正在使用一个使用 std :: queue
的外部排序算法,必须小心地限制其内存使用。我注意到,在合并阶段(使用几个 std :: queue
的固定长度),我的内存使用增加到约2.5X我所期望的。由于 std :: queue
默认使用 std :: deque
作为其底层容器,我对 std :: deque
以确定其内存开销。以下是运行在VC ++ 9上的结果,在发布模式下,使用64位进程:
I am working on an external sorting algorithm that uses std::queue and must carefully constrain its memory usage. I have noticed that during the merge phase (which uses several std::queues of fixed length), my memory usage increases to about 2.5X what I expected. Since std::queue by default uses std::deque as its underlying container, I ran some tests on std::deque to determine its memory overhead. Here are the results, running on VC++ 9, in release mode, with a 64-bit process:
When adding 100,000,000 chars to a std::deque, the memory usage grows to 252,216K. Note that 100M chars (1 byte each) should occupy 97,656K, so this is an overhead of 154,560K.
I repeated the test with doubles (8 bytes) and saw memory grow to 1,976,676K, while 100M doubles should occupy 781,250K, for an overhead of 1,195,426K!
Now I understand that std::deque is normally implemented as a linked list of "chunks." If this is true, then why is the overhead proportional to the element size (since the pointer size should surely be fixed at 8 bytes)? And why is it so danged huge?
Can anybody shed some light on why std::deque uses so much danged memory? I'm thinking I should switch my std::queue underlying containers to std::vector, as there is no overhead (given that an appropriate size is reserved). I'm thinking the benefits of std::deque are largely negated by the fact that it has such a huge overhead (resulting in cache misses, page faults, etc.), and that the cost of copying std::vector elements may be less, given that the overall memory usage is so much lower. Is this just a bad implementation of std::deque by Microsoft?
Recommended answer
Look at the code for _DEQUESIZ (number of elements per block):
#define _DEQUESIZ (sizeof (_Ty) <= 1 ? 16 \
: sizeof (_Ty) <= 2 ? 8 \
: sizeof (_Ty) <= 4 ? 4 \
: sizeof (_Ty) <= 8 ? 2 : 1) /* elements per block (a power of 2) */
It gets smaller as the element gets larger. Only for elements larger than 8 bytes do you get the expected behavior (the percentage overhead decreasing as the element size increases).