How to determine maximum batch size for a seq2seq tensorflow RNN training model
Question
Currently, I am using the default batch size of 64 for my seq2seq TensorFlow model. What is the maximum batch size, layer size, etc. I can use with a single Titan X GPU (12 GB of RAM) and a Haswell-E Xeon machine with 128 GB of RAM? The input data is converted to embeddings. Below are some of the relevant parameters I am using; it seems the cell input size is 1024:
encoder_inputs: a list of 2D Tensors [batch_size x cell.input_size].
decoder_inputs: a list of 2D Tensors [batch_size x cell.input_size].
tf.app.flags.DEFINE_integer("size", 1024, "Size of each model layer.")
So, based on my hardware, what is the maximum batch size, number of layers, and input size I can use? Currently the GPU shows that 99% of its memory is occupied.
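Before experimenting, a rough back-of-envelope estimate can show how memory scales: activation memory grows roughly linearly with batch size, sequence length, hidden size, and depth. The sketch below is illustrative only; the defaults (`num_layers=2`, and the `factor` standing in for LSTM gates plus activations stored for backprop) are assumptions, not values from the question.

```python
# Rough, back-of-envelope estimate of activation memory for one training
# step of an LSTM seq2seq model. All constants are illustrative assumptions.
def activation_bytes(batch_size, seq_len, hidden_size,
                     num_layers=2, bytes_per_float=4, factor=8):
    """Estimate bytes of activation memory per training step.

    `factor` loosely accounts for the LSTM gates and the activations kept
    around for backprop; the true multiplier depends on the implementation.
    """
    return (batch_size * seq_len * hidden_size
            * num_layers * bytes_per_float * factor)

# Example: batch 64, 50-step sequences, hidden size 1024 (as in the question).
gb = activation_bytes(64, 50, 1024) / 1024**3
print(f"~{gb:.2f} GB of activations")
```

Doubling the batch size roughly doubles this activation cost, while the parameter memory (weights, gradients, optimizer state) stays fixed, which is why batch size is usually the first knob to turn when memory runs out.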
Answer
By default, TensorFlow allocates all available GPU memory. However, there is a way to change this. In my model, I do this:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
Then you can use this config when you start your session:
with tf.Session(config=config) as sess:
Now the model will only use as much memory as it needs, and you can then try different batch sizes and see when it runs out of memory.
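The trial-and-error search described above can be sketched as a generic doubling-then-bisect helper. The `try_batch` callable here is hypothetical: in practice it would build the model, run one training step at the given batch size, and return `False` on `tf.errors.ResourceExhaustedError` (the exception TensorFlow raises on GPU out-of-memory).

```python
def find_max_batch_size(try_batch, start=1, limit=4096):
    """Find the largest batch size for which try_batch(bs) succeeds.

    Doubles the batch size until try_batch fails (e.g. hits an
    out-of-memory error), then binary-searches between the last
    successful size and the first failing one.
    """
    bs, last_ok = start, 0
    # Phase 1: double until failure or the search limit.
    while bs <= limit and try_batch(bs):
        last_ok = bs
        bs *= 2
    # Phase 2: bisect between the last success and the first failure.
    lo, hi = last_ok, min(bs, limit + 1)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if try_batch(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Note that each probe rebuilds the graph, so a fresh session (or process) per attempt is the safest way to avoid fragmented GPU memory skewing the result.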