Tensorflow for image segmentation: Changing minibatch size stops learning


Problem description

I have created a net for image segmentation, in particular brain tumors. The Jupyter notebook with the code is here.

When I train the CNN with a minibatch size of 1, I get a fairly good result:

But when I change the size to something larger (2 or more), the results are terrible:

TensorBoard shows the difference in the loss. Clearly the net with batch size 2 is not minimizing the loss (blue).

Any ideas on why this could be the case?

Recommended answer

I found the problem. I checked my graph with TensorBoard and noticed that in CONV1/S1 I was not connecting the output of the ReLU to the next layer (CONV1/S2); instead, I was connecting the output of conv2d directly.

I changed that line in the code and everything is working as expected.
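The wiring mistake can be sketched in miniature. This is not the notebook's code: plain NumPy matrix multiplies stand in for the conv2d layers and all names are hypothetical; the point is only that feeding the pre-activation tensor forward instead of the ReLU output silently changes what the next layer sees, while the graph still builds and runs without error.

```python
import numpy as np

def relu(x):
    """Elementwise ReLU activation."""
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))    # a small batch of 4 inputs
w1 = rng.standard_normal((3, 5))   # "layer 1" weights (stand-in for conv2d)
w2 = rng.standard_normal((5, 2))   # "layer 2" weights

z1 = x @ w1     # pre-activation: what conv2d returns
a1 = relu(z1)   # the activation that should feed the next layer

out_buggy = z1 @ w2   # bug: ReLU skipped, pre-activation wired into layer 2
out_fixed = a1 @ w2   # fix: ReLU output wired into layer 2
```

Both versions run and produce tensors of the same shape, which is why the bug only showed up indirectly (here, as a loss curve that stopped decreasing); inspecting the graph in TensorBoard is what made the missing edge visible.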
