Is there a workaround for running out of memory on GPU with tensorflow?


Problem description

I am currently building a 3D convolutional network for video classification. The main problem is that I run out of memory too easily. Even if I set my batch_size to 1, there is still not enough memory to train my CNN the way I want.

I am using a GTX 970 with 4 GB of VRAM (about 3.2 GB free for TensorFlow to use). I was expecting it to still train my network, perhaps by using my RAM as a backup or doing the calculations in parts. But so far I have only been able to run it by making the CNN simpler, which directly hurts performance.

I think I could run it on the CPU, but it is significantly slower, so that is not a good solution either.

Is there a better solution than buying a better GPU?

Thanks in advance.

Solution

Using gradient checkpointing will help with the memory limits: intermediate activations are discarded during the forward pass and recomputed during backpropagation, trading extra compute for a lower peak memory footprint.
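A minimal sketch of what that can look like in TensorFlow 2.x with `tf.recompute_grad`, which recomputes a wrapped function's activations during backprop instead of storing them. The 3D conv block, input shape, and layer sizes below are illustrative placeholders, not the asker's actual network.

```python
import tensorflow as tf

# A block of layers whose intermediate activations we do not want to keep
# in GPU memory during the forward pass (placeholder architecture).
block = tf.keras.Sequential([
    tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu"),
])
# Build the block up front so its variables exist before the wrapped call.
block.build(input_shape=(None, 16, 64, 64, 3))

# tf.recompute_grad tells the gradient tape to drop this block's activations
# and recompute them during backprop, trading compute for peak memory.
checkpointed_block = tf.recompute_grad(lambda x: block(x))

x = tf.random.normal([1, 16, 64, 64, 3])  # batch of 1 video clip (assumed shape)
with tf.GradientTape() as tape:
    y = checkpointed_block(x)
    loss = tf.reduce_mean(y)  # dummy loss for illustration
grads = tape.gradient(loss, block.trainable_variables)
```

Wrapping only the largest blocks of the network this way usually gives most of the memory savings while keeping the extra recomputation cost modest.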
