Tensorflow's memory cost gradually increasing in very simple "for loop"


Problem description

I have a very strange problem with TensorFlow. I have simplified my question to the following version:

I ask this because I need to run a series of trainings; I simply put them in a for loop and use different parameters for each iteration.

To simplify the question, I just wrote a simple matrix multiplication in TensorFlow and put this "matrix multiplication training" in a for loop (of course you could put other, more complicated functions in the loop; the conclusion is the same).

I set 100,000 iterations, which means I will run 100,000 training examples, and I print the time consumed by each loop. I can observe that the time consumption is the same for every iteration, so that is not the problem. But the memory cost increases very quickly, and finally I get an out-of-memory error (what I expect is that memory use should stay the same across iterations):

import tensorflow as tf
import numpy as np
import datetime

for i in range(100000):   # I must put the following code in this for loop
    starttime = datetime.datetime.now()
    # A brand-new graph and session are built on every iteration and never
    # released, which is what makes the memory footprint keep growing.
    graph = tf.Graph()
    with graph.as_default():
        with tf.device("/cpu:0"):
            a = np.arange(100).reshape(1, -1)
            b = np.arange(10000).reshape(100, 100)
            A = tf.placeholder(tf.float32, [1, 100])
            B = tf.placeholder(tf.float32, [100, 100])

            sess = tf.InteractiveSession()

            RESULT = tf.matmul(A, B)
            RESULT_o = sess.run(RESULT, feed_dict={A: a, B: b})
    endtime = datetime.datetime.now()
    print(endtime - starttime)

I know the reason is that in each iteration the program creates new operations, which increases memory use. I want to know: is there any way to release this memory after each iteration? (This memory problem is the same in the GPU case.)
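For reference, if the graph truly must be rebuilt per iteration, one direct way to do this in TF 1.x is to reset the default graph and let the session close after every run. A minimal sketch, using the same arrays as above:

import tensorflow as tf
import numpy as np

a = np.arange(100).reshape(1, -1)
b = np.arange(10000).reshape(100, 100)

for i in range(100000):
    tf.reset_default_graph()      # discard all ops created in the previous iteration
    A = tf.placeholder(tf.float32, [1, 100])
    B = tf.placeholder(tf.float32, [100, 100])
    result = tf.matmul(A, B)
    # The context manager closes the session, releasing its resources.
    with tf.Session() as sess:
        sess.run(result, feed_dict={A: a, B: b})

This keeps memory flat but pays the graph-construction and session-startup cost on every iteration, which is why the structure in the answer below is preferable.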

Recommended answer

Your code should be structured like this:

import tensorflow as tf
import numpy as np

A = tf.placeholder(tf.float32, [1,100])
B = tf.placeholder(tf.float32, [100,100])
result = tf.matmul(A,B)

init_op = tf.global_variables_initializer()

# Later, when launching the model
with tf.Session() as sess:
    # Run the init operation. 
    # This will make sure that memory is only allocated for the variable once.
    sess.run(init_op)

    for i in range(100000):
        a = np.arange(100).reshape(1,-1)
        b = np.arange(10000).reshape(100,100)
        sess.run(result, feed_dict={A: a, B: b})
        if i % 1000 == 0:
            print(i, "processed")

Here, memory is allocated once for the first iteration, and the same graph and memory keep being reused in all successive iterations.
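If the loop needs different parameters in each iteration, as the question describes, the same single-graph pattern still works: express the varying value as one more placeholder and feed it each time. The sketch below assumes TF 1.x, and the `scale` placeholder is purely illustrative. Calling `finalize()` on the graph additionally makes it read-only, so any op accidentally created inside the loop raises an error instead of silently growing memory:

import tensorflow as tf
import numpy as np

A = tf.placeholder(tf.float32, [1, 100])
B = tf.placeholder(tf.float32, [100, 100])
scale = tf.placeholder(tf.float32, [])   # per-iteration "parameter" (illustrative)
result = scale * tf.matmul(A, B)

# Freeze the graph: from here on, adding any new op raises an error,
# which catches the accidental graph growth described in the question.
tf.get_default_graph().finalize()

with tf.Session() as sess:
    a = np.arange(100).reshape(1, -1)
    b = np.arange(10000).reshape(100, 100)
    for i in range(100000):
        sess.run(result, feed_dict={A: a, B: b, scale: float(i)})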

