Memory Estimation for Convolution Neural Network in Tensorflow


Problem Description


I am working on an image classification problem using TensorFlow and a convolutional neural network. My model has the following layers:

  • Input image size: 2456x2058
  • 3 convolution layers {Con1: shape (10,10,1,32); Con2: shape (5,5,32,64); Con3: shape (5,5,64,64)}
  • 3 max-pooling layers, 2x2 each
  • 1 fully connected layer


I have tried using the NVIDIA-SMI tool, but it only shows me the GPU memory consumption while the model is running.
I would like to know if there is any method to estimate the memory before running the model on the GPU, so that I can design models with the available memory in mind.
I have tried using this method for estimation, but my calculated memory and the observed memory utilisation are nowhere near each other.
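A rough estimate can be made by summing the sizes of the feature maps each layer produces. The sketch below is a back-of-the-envelope calculation for the layer shapes listed in the question; it assumes stride-1 'SAME' convolutions, 2x2 max pooling, float32 activations, and batch size 1 (none of which are stated in the question). Real TensorFlow usage will be higher, since training also stores gradients, optimizer state, and cuDNN workspace buffers, which may be one reason a simple formula and nvidia-smi disagree.

```python
# Back-of-the-envelope activation-memory estimate for the layers above.
# Assumptions (not stated in the question): 'SAME' padding, stride 1,
# 2x2 max pooling, float32 (4-byte) activations, batch size 1.

BYTES = 4  # float32

def conv_activation(h, w, out_channels):
    """Memory (bytes) of one layer's output feature map."""
    return h * w * out_channels * BYTES

h, w = 2058, 2456                      # input height, width
total = h * w * 1 * BYTES              # input image, 1 channel

for out_ch in (32, 64, 64):            # Con1, Con2, Con3
    total += conv_activation(h, w, out_ch)  # conv output ('SAME' keeps h, w)
    h, w = h // 2, w // 2                   # 2x2 max pool halves each dim
    total += conv_activation(h, w, out_ch)  # pooled output

print(f"Forward activations: ~{total / 1e9:.2f} GB")
```

For this architecture the forward activations alone come to roughly 1.3 GB, dominated by the first conv layer's 32 full-resolution feature maps; parameters are comparatively tiny here.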

Recommended Answer


As far as I understand, when you open a session with tensorflow-gpu, it allocates all of the memory in the GPUs that are available. So when you look at the nvidia-smi output, you will always see the same amount of used memory, even if the model actually uses only a part of it. There are options when opening a session to force TensorFlow to allocate only a part of the available memory (see How to prevent tensorflow from allocating the totality of a GPU memory?, for instance).
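For example, with the TF 1.x session API (current at the time of this answer), those options can be set when creating the session. This is a minimal sketch; the 0.4 fraction is an arbitrary illustrative value, and it requires a GPU build of TensorFlow 1.x to have any effect:

```python
import tensorflow as tf

# Cap TensorFlow at ~40% of each GPU's memory instead of letting it
# grab everything up front (TF 1.x session API):
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4

# Alternatively, grow the allocation on demand as the model needs it:
# config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
```

With either option set, nvidia-smi reflects something closer to the model's actual footprint rather than the default whole-GPU reservation.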
