How to deal with UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape


Question

I am getting the following warning in TensorFlow: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.

The reason I get this warning is the following code:

import tensorflow as tf

# Flatten batch elements to rank-2 tensor where 1st max_length rows
# belong to first batch element and so forth
all_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)
# Indices to last element of each sequence.
# Index to first element is the sequence order number times max sequence length.
# Index to last element is the index to first element plus sequence length.
row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)
# Gather rows with indices to last elements of sequences
# http://stackoverflow.com/questions/35892412/tensorflow-dense-gradient-explanation
# This is due to gather returning IndexedSlices which is later converted
# into a Tensor for gradient calculation.
last_timesteps = tf.gather(all_timesteps, row_inds)  # (batch_size, n_dim)

tf.gather is causing the issue. I have been ignoring it until now because my architectures were not really big. However, now I have bigger architectures and a lot of data, and I am running into out-of-memory issues when training with batch sizes larger than 10. I believe that dealing with this warning would allow me to fit my models on the GPU.

Please note that I am using TensorFlow 1.3.
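
To show where the warning actually comes from: the gradient of tf.gather with respect to its params is an IndexedSlices, and backpropagating it through tf.reshape forces a conversion to a dense tensor; because the first dimension of all_timesteps is only known at run time, TensorFlow cannot bound the size of that dense tensor and emits the warning. Below is a minimal sketch (made-up shapes, TF 1.x graph mode) that reproduces it; the placeholder and the hard-coded gather indices are assumptions for illustration, not part of my actual model:

import tensorflow as tf

# Hypothetical shapes: batch size unknown at graph-build time, max_length = 5, n_dim = 8.
raw_output = tf.placeholder(tf.float32, [None, 5, 8])
all_timesteps = tf.reshape(raw_output, [-1, 8])    # static shape (?, 8)
last_timesteps = tf.gather(all_timesteps, [4, 9])  # gradient of gather is an IndexedSlices
loss = tf.reduce_sum(last_timesteps)
# The reshape gradient needs a dense tensor, so the IndexedSlices is densified here;
# its dense shape is not statically known, which is what triggers the UserWarning.
grads = tf.gradients(loss, [raw_output])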

Answer

I managed to solve the issue by using tf.dynamic_partition instead of tf.gather. I replaced the above code like this:

# Flatten batch elements to rank-2 tensor where 1st max_length rows belong to first batch element and so forth
all_timesteps = tf.reshape(raw_output, [-1, n_dim])  # (batch_size*max_length, n_dim)
# Indices to last element of each sequence.
# Index to first element is the sequence order number times max sequence length.
# Index to last element is the index to first element plus sequence length.
row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)
# Creating a vector of 0s and 1s that will specify what timesteps to choose.
partitions = tf.reduce_sum(tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0)
# Selecting the elements we want to choose.
last_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2)
last_timesteps = last_timesteps[1]  # (batch_size, n_dim)
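
For completeness, here is the same replacement as a self-contained snippet with a session run; the placeholder names, the concrete batch_size/max_length/n_dim values, and the fed sequence lengths are all assumptions for illustration:

import numpy as np
import tensorflow as tf

batch_size, max_length, n_dim = 3, 5, 8
raw_output = tf.placeholder(tf.float32, [None, max_length, n_dim])  # e.g. RNN outputs
seq_len = tf.placeholder(tf.int32, [None])                          # true length of each sequence

all_timesteps = tf.reshape(raw_output, [-1, n_dim])
row_inds = tf.range(0, batch_size) * max_length + (seq_len - 1)
# 0/1 vector marking the rows that hold the last timestep of each sequence.
partitions = tf.reduce_sum(
    tf.one_hot(row_inds, tf.shape(all_timesteps)[0], dtype='int32'), 0)
last_timesteps = tf.dynamic_partition(all_timesteps, partitions, 2)[1]

with tf.Session() as sess:
    out = sess.run(last_timesteps, feed_dict={
        raw_output: np.random.rand(batch_size, max_length, n_dim).astype(np.float32),
        seq_len: [3, 5, 2]})
    print(out.shape)  # (3, 8)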

