Thrust: sort_by_key slow due to memory allocation
Question
I am doing a sort_by_key with key-value int arrays of size 80 million. The device is a GTX 560 Ti with 2GB VRAM. When the available (free) memory before the sort_by_key is 1200MB, it finishes sorting in 200ms. But, when the available memory drops to 600MB, the sort_by_key for the same key-value arrays takes 1.5-3s!
I ran the program under Compute Visual Profiler. I found that the GPU timestamp jumps by 1.5-3s between the last kernel before sort_by_key and the first kernel call inside sort_by_key (which is a RakingReduction).
I suspect there is a memory allocation being done inside sort_by_key, before it calls its first internal kernel. The memory that sort_by_key needs is available (even when available memory is 600MB), since the sort_by_key works, even though it is slower. I see that the computer freezes for 1s when this happens. I also see a bump in the CPU Physical Memory graph if I keep Process Explorer open.
Is there anything I can do to make this sort_by_key run just as fast when less memory is available? Also, what is happening between the device and host that is causing the memory bump and the temporary freezing?
Answer
thrust::sort_by_key indeed allocates O(N) temporary space -- radix sort is not an in-place sort when the problem is larger than what a single multiprocessor can handle. Therefore you need at least 80M * 2 * sizeof(int) = 640MB for the input data, plus space for the temporaries, which must be at least 320MB for this sort. I'm not sure exactly why the sort doesn't just fail when you don't have enough memory -- perhaps 600 MB is a low estimate, or perhaps thrust is falling back to CPU execution (I doubt it does that).
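If the cost you're seeing is cudaMalloc/cudaFree being paid inside every sort call, one workaround on Thrust versions that let you pass an allocator through the execution policy (the par(alloc) form) is to cache and reuse the temporary storage across calls, along the lines of Thrust's custom_temporary_allocation example. A minimal sketch, illustrative rather than drop-in:

```cuda
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/system/cuda/execution_policy.h>
#include <thrust/system/cuda/memory.h>
#include <map>
#include <cstddef>

// Caching allocator: hands previously freed blocks back to Thrust instead of
// doing a fresh device allocation on every sort_by_key invocation.
struct cached_allocator
{
  typedef char value_type;

  std::multimap<std::ptrdiff_t, char*> free_blocks;  // size  -> cached block
  std::map<char*, std::ptrdiff_t>      allocated;    // block -> size

  char* allocate(std::ptrdiff_t num_bytes)
  {
    char* result = 0;
    std::multimap<std::ptrdiff_t, char*>::iterator hit = free_blocks.find(num_bytes);
    if (hit != free_blocks.end()) {       // fast path: reuse a cached block
      result = hit->second;
      free_blocks.erase(hit);
    } else {                              // slow path: real device allocation
      result = thrust::cuda::malloc<char>(num_bytes).get();
    }
    allocated.insert(std::make_pair(result, num_bytes));
    return result;
  }

  void deallocate(char* ptr, size_t)
  {
    // Cache the block for the next call instead of freeing it.
    std::map<char*, std::ptrdiff_t>::iterator it = allocated.find(ptr);
    free_blocks.insert(std::make_pair(it->second, ptr));
    allocated.erase(it);
  }
};

int main()
{
  const int N = 80 * 1000 * 1000;
  thrust::device_vector<int> keys(N), values(N);

  cached_allocator alloc;
  // The first call pays for the allocation; subsequent calls reuse the block.
  thrust::sort_by_key(thrust::cuda::par(alloc),
                      keys.begin(), keys.end(), values.begin());
  thrust::sort_by_key(thrust::cuda::par(alloc),
                      keys.begin(), keys.end(), values.begin());
  return 0;
}
```

This only helps with allocation overhead, of course; it does not reduce the total memory the sort needs.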
Another idea about the performance drop is that when you need almost all of the available memory, there might be a bit of fragmentation in the available memory that the driver/runtime has to deal with in order to allocate such large arrays, causing extra overhead.
BTW, how are you measuring available memory?
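In case it helps: the CUDA runtime reports free and total device memory directly via cudaMemGetInfo, which is usually more reliable than inferring free memory from your own bookkeeping. A quick sketch:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
  size_t free_bytes = 0, total_bytes = 0;
  // Reports device memory as seen by the driver for the current context.
  cudaMemGetInfo(&free_bytes, &total_bytes);
  printf("free: %zu MB, total: %zu MB\n",
         free_bytes >> 20, total_bytes >> 20);
  return 0;
}
```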