How to interpret Poolallocator messages in tensorflow?


Question

While training a tensorflow seq2seq model I see the following messages:


W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 27282 get requests, put_count=9311 evicted_count=1000 eviction_rate=0.1074 and unsatisfied allocation rate=0.699032
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 100 to 110
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 13715 get requests, put_count=14458 evicted_count=10000 eviction_rate=0.691659 and unsatisfied allocation rate=0.675684
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 110 to 121
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6965 get requests, put_count=6813 evicted_count=5000 eviction_rate=0.733891 and unsatisfied allocation rate=0.741421
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 133 to 146
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=9058 evicted_count=9000 eviction_rate=0.993597 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 46 get requests, put_count=9062 evicted_count=9000 eviction_rate=0.993158 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 4 get requests, put_count=1029 evicted_count=1000 eviction_rate=0.971817 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 2 get requests, put_count=1030 evicted_count=1000 eviction_rate=0.970874 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=6074 evicted_count=6000 eviction_rate=0.987817 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 12 get requests, put_count=6045 evicted_count=6000 eviction_rate=0.992556 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 2 get requests, put_count=1042 evicted_count=1000 eviction_rate=0.959693 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 44 get requests, put_count=6093 evicted_count=6000 eviction_rate=0.984737 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 4 get requests, put_count=1069 evicted_count=1000 eviction_rate=0.935454 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 17722 get requests, put_count=9036 evicted_count=1000 eviction_rate=0.110668 and unsatisfied allocation rate=0.550615
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 792 to 871
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6 get requests, put_count=1093 evicted_count=1000 eviction_rate=0.914913 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 6 get requests, put_count=1101 evicted_count=1000 eviction_rate=0.908265 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 3224 get requests, put_count=4684 evicted_count=2000 eviction_rate=0.426985 and unsatisfied allocation rate=0.200062
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 1158 to 1273
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 17794 get requests, put_count=17842 evicted_count=9000 eviction_rate=0.504428 and unsatisfied allocation rate=0.510228
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:239] Raising pool_size_limit_ from 1400 to 1540
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 31 get requests, put_count=1185 evicted_count=1000 eviction_rate=0.843882 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 40 get requests, put_count=8209 evicted_count=8000 eviction_rate=0.97454 and unsatisfied allocation rate=0
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 0 get requests, put_count=2272 evicted_count=2000 eviction_rate=0.880282 and unsatisfied allocation rate=-nan
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 0 get requests, put_count=2362 evicted_count=2000 eviction_rate=0.84674 and unsatisfied allocation rate=-nan
W tensorflow/core/common_runtime/gpu/pool_allocator.cc:227] PoolAllocator: After 38 get requests, put_count=5436 evicted_count=5000 eviction_rate=0.919794 and unsatisfied allocation rate=0

What does it mean? Does it mean I am having some resource allocation issues? I am running on a Titan X (3500+ CUDA, 12 GB GPU).

Answer

TensorFlow has multiple memory allocators, for memory that will be used in different ways. Their behavior has some adaptive aspects.

In your particular case, since you're using a GPU, there is a PoolAllocator for CPU memory that is pre-registered with the GPU for fast DMA. A tensor that is expected to be transferred from CPU to GPU, for example, will be allocated from this pool.
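
To make the CPU-to-GPU path concrete, here is a minimal, hypothetical sketch using the TF 1.x-style Python API (the shapes, names, and device string below are made up for illustration; user code never allocates from the pool directly). A numpy batch passed through feed_dict lives in ordinary host memory and must be copied to the GPU before the matmul can run; that internal staging is the kind of allocation the DMA-registered PoolAllocator serves.

import numpy as np
import tensorflow as tf

# Hypothetical TF 1.x-style example of a workload that produces CPU->GPU copies.
x = tf.placeholder(tf.float32, shape=[None, 1024], name='x')   # fed from host memory
with tf.device('/gpu:0'):
    w = tf.Variable(tf.random_normal([1024, 256]))
    y = tf.matmul(x, w)        # runs on the GPU, so x's data must be copied over

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(32, 1024).astype(np.float32)        # ordinary CPU memory
    out = sess.run(y, feed_dict={x: batch})                     # host -> device copy happens here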

The PoolAllocators attempt to amortize the cost of calling a more expensive underlying allocator by keeping around a pool of allocated then freed chunks that are eligible for immediate reuse. Their default behavior is to grow slowly until the eviction rate drops below some constant. (The eviction rate is the proportion of free calls where we return an unused chunk from the pool to the underlying pool in order not to exceed the size limit.) In the log messages above, you see "Raising pool_size_limit_" lines that show the pool size growing. Assuming that your program actually has a steady state behavior with a maximum size collection of chunks it needs, the pool will grow to accommodate it, and then grow no more. It behaves this way rather than simply retaining all chunks ever allocated so that sizes needed only rarely, or only during program startup, are less likely to be retained in the pool.
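
The numbers in the log lines are consistent with this: in the first line above, eviction_rate = evicted_count / put_count = 1000 / 9311 ≈ 0.1074, and each "Raising pool_size_limit_" line raises the limit by roughly 10% (100 → 110, 110 → 121, 1400 → 1540, and so on). Below is a rough Python sketch of the mechanism, an illustrative simplification rather than TensorFlow's actual pool_allocator.cc; the growth cadence, the 10% factor, and the way the rates are computed are assumptions based on the log lines above.

import collections

class SimplePoolAllocator:
    # Illustrative sketch (not TensorFlow's real pool_allocator.cc): freed
    # chunks are kept for reuse, chunks beyond pool_size_limit are evicted
    # back to the underlying allocator, and the limit grows while the
    # eviction rate stays high.
    def __init__(self, underlying_alloc, underlying_free,
                 pool_size_limit=100, eviction_threshold=0.1):
        self.underlying_alloc = underlying_alloc    # the expensive allocator being amortized
        self.underlying_free = underlying_free
        self.pool = collections.defaultdict(collections.deque)   # chunk size -> free chunks
        self.pool_size = 0
        self.pool_size_limit = pool_size_limit
        self.eviction_threshold = eviction_threshold
        self.get_count = self.put_count = 0
        self.evicted_count = self.unsatisfied_count = 0

    def get(self, size):
        # Hand out a pooled chunk of the right size if one is available.
        self.get_count += 1
        if self.pool[size]:
            self.pool_size -= 1
            return self.pool[size].popleft()
        # Pool miss: an "unsatisfied allocation" that falls through to the
        # expensive underlying allocator.
        self.unsatisfied_count += 1
        return self.underlying_alloc(size)

    def put(self, size, chunk):
        # Return a freed chunk to the pool; evict if the pool is over its limit.
        self.put_count += 1
        self.pool[size].append(chunk)
        self.pool_size += 1
        while self.pool_size > self.pool_size_limit:
            victim_size = next(s for s, chunks in self.pool.items() if chunks)
            self.underlying_free(self.pool[victim_size].popleft())
            self.pool_size -= 1
            self.evicted_count += 1     # contributes to eviction_rate
        self._maybe_grow()

    def _maybe_grow(self):
        # Check occasionally; while evictions are still frequent, raise the
        # limit by ~10%, mirroring the "Raising pool_size_limit_" log lines.
        if self.put_count % 1000 != 0:
            return
        eviction_rate = self.evicted_count / self.put_count
        unsatisfied_rate = self.unsatisfied_count / max(self.get_count, 1)
        if eviction_rate > self.eviction_threshold:
            self.pool_size_limit = int(self.pool_size_limit * 1.1) + 1
            print('PoolAllocator: after %d get requests, eviction_rate=%.4f, '
                  'unsatisfied allocation rate=%.4f; raising pool_size_limit to %d'
                  % (self.get_count, eviction_rate, unsatisfied_rate, self.pool_size_limit))

# Usage: amortize an "expensive" allocator (bytearray stands in for a
# pinned-memory allocator here).
pool = SimplePoolAllocator(underlying_alloc=bytearray, underlying_free=lambda chunk: None)
buf = pool.get(4096)      # first request misses the pool and hits bytearray()
pool.put(4096, buf)       # freed chunk becomes eligible for immediate reuse
buf = pool.get(4096)      # satisfied from the pool, no underlying allocation

In steady state, once the limit covers the working set of chunk sizes, both rates fall and the limit stops being raised, which is the behaviour described above.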

These messages should only be a cause for concern if you run out of memory. In such a case the log messages may help diagnose the problem. Note also that peak execution speed may only be attained after the memory pools have grown to the proper size.
