Tensorflow - Avoid Tensor Size Limit


Question

I'm working on an implementation of the FCN-32 net described in the Long, Shelhamer paper, but have run into a roadblock when upsampling. In order to upsample to the original size, other implementations use a conv2d_transpose layer with a bilinear filter and a 64x64 kernel. This works fine until you start using lots of classes.

For any number of classes > ~375, the filters variable in the transpose layer exceeds 2 GB (64 x 64 x (>375) x (>375)), so Tensorflow complains and dies, saying

ValueError: Cannot create a tensor proto whose content is larger than 2GB.
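(To see where that limit comes from: with float32 weights, a 64 x 64 x 376 x 376 filter works out to 64 x 64 x 376 x 376 x 4 bytes ≈ 2.3 GB, just over the 2 GB ceiling protobuf places on a single serialized message, which the graph hits when the filter's initial value is baked into a tensor proto.)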

Is there any way to avoid this size limit? My first thought was some kind of generative tensor, but I can't find any documentation on how to create one, if such a construct exists or is even possible.

Answer

You can split the output classes into multiple operations and concatenate them at the end.

Backprop will work just fine through the concat operation. It should be as simple as creating two conv2d_transpose operations, each with half the classes, concatenating the results appropriately, and continuing on to the loss function from there.

Creating more than 2 conv2d_transpose operations as necessary will work just as well.
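For concreteness, here is a minimal sketch of the split-and-concat idea, assuming a TF 1.x-style API and the standard FCN bilinear-filter initialization. The helper names (bilinear_filter, split_upsample) and the even split across num_splits chunks are illustrative assumptions, not part of the original answer:

```python
import numpy as np
import tensorflow as tf

def bilinear_filter(kernel_size, num_channels):
    """Per-class bilinear upsampling weights of shape
    [kernel_size, kernel_size, num_channels, num_channels]."""
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    kernel_2d = ((1 - abs(og[0] - center) / factor) *
                 (1 - abs(og[1] - center) / factor))
    weights = np.zeros((kernel_size, kernel_size, num_channels, num_channels),
                       dtype=np.float32)
    for c in range(num_channels):
        # Diagonal filter: each class is upsampled from its own channel
        # only, which is why splitting the channels into chunks is exact.
        weights[:, :, c, c] = kernel_2d
    return weights

def split_upsample(scores, num_classes, num_splits, kernel_size=64, stride=32):
    """Upsample [N, H, W, num_classes] scores with one conv2d_transpose
    per chunk of classes, then concat, so no single filter variable
    has to hold all 64*64*C*C weights at once."""
    chunk = num_classes // num_splits  # assumes num_classes % num_splits == 0
    in_shape = tf.shape(scores)
    pieces = []
    for i in range(num_splits):
        part = scores[:, :, :, i * chunk:(i + 1) * chunk]
        # Each per-chunk initializer stays well under the 2 GB proto limit.
        filt = tf.Variable(bilinear_filter(kernel_size, chunk),
                           name='upsample_filter_%d' % i)
        out_shape = tf.stack([in_shape[0],
                              in_shape[1] * stride,
                              in_shape[2] * stride,
                              chunk])
        pieces.append(tf.nn.conv2d_transpose(
            part, filt, out_shape,
            strides=[1, stride, stride, 1], padding='SAME'))
    # Gradients flow through tf.concat, so training is unaffected.
    return tf.concat(pieces, axis=3)
```

With ~400 classes and num_splits=2, each per-chunk filter initializer is 64 x 64 x 200 x 200 x 4 bytes ≈ 655 MB, comfortably under the limit; increase num_splits if a chunk is still too large.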

After thinking about this I'm confident it will work. If there's an issue let me know and I'll update the answer.
