Processing time gets longer and longer after each iteration (TensorFlow)


Problem description

I am training a CNN with TensorFlow for a medical imaging application.

As I don't have a lot of data, I am trying to apply random modifications to my training batch during the training loop to artificially increase my training dataset. I made the following function in a different script and call it on my training batch:

import tensorflow as tf

def randomly_modify_training_batch(images_train_batch, batch_size):

    for i in range(batch_size):
        image = images_train_batch[i]
        image_tensor = tf.convert_to_tensor(image)

        distorted_image = tf.image.random_flip_left_right(image_tensor)
        distorted_image = tf.image.random_flip_up_down(distorted_image)
        distorted_image = tf.image.random_brightness(distorted_image, max_delta=60)
        distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)

        with tf.Session():
            images_train_batch[i] = distorted_image.eval()  # .eval() converts the image back from a Tensor to an ndarray

    return images_train_batch

The code works well for applying modifications to my images.

The problem is:

After each iteration of my training loop (feedforward + backpropagation), applying this same function to my next training batch steadily takes 5 seconds longer than the last time.

It starts at around 1 second of processing and reaches over a minute after a bit more than 10 iterations.

What causes this slowdown? How can I prevent it?

(I suspect something with distorted_image.eval(), but I'm not quite sure. Am I opening a new session each time? Isn't TensorFlow supposed to close the session automatically, since I use it in a "with tf.Session()" block?)

Recommended answer

You call that code in each iteration, so each iteration adds these operations to the graph. You don't want to do that. You want to build the graph at the start and, in the training loop, only execute it. Also, why do you need to convert back to an ndarray afterwards, instead of putting things into your TF graph once and just using tensors all the way through?
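A minimal sketch of that suggestion, assuming TF 1.x-style graph execution (written against the tf.compat.v1 API so it also runs under TF 2): the distortion ops and the session are created once, and the loop only feeds data through the existing graph. The placeholder name image_ph and the float image format are illustrative, not from the original post.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Build the distortion graph ONCE, before the training loop.
image_ph = tf.compat.v1.placeholder(tf.float32, shape=[None, None, 3])
distorted = tf.image.random_flip_left_right(image_ph)
distorted = tf.image.random_flip_up_down(distorted)
distorted = tf.image.random_brightness(distorted, max_delta=60)
distorted = tf.image.random_contrast(distorted, lower=0.2, upper=1.8)

# One session, reused for every batch.
sess = tf.compat.v1.Session()

def randomly_modify_training_batch(images_train_batch, batch_size):
    # Per iteration only sess.run() happens: no new ops, no new sessions,
    # so the graph stays the same size and processing time stays constant.
    for i in range(batch_size):
        images_train_batch[i] = sess.run(
            distorted, feed_dict={image_ph: images_train_batch[i]})
    return images_train_batch
```

The random ops draw fresh values on every sess.run(), so each batch still gets different flips, brightness, and contrast even though the graph is fixed.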
