U-net low contrast test images, predict output is grey box


Problem description

I am running the unet from https://github.com/zhixuhao/unet, but when I run it the predicted images are all grey. I also get a "low contrast image" warning for my test data. Has anyone had or resolved this problem?

I am training with 50 ultrasound images, which gives around 2000-3000 images after augmentation, training for 5 epochs with 300 steps per epoch and a batch size of 2.

Many thanks in advance,
Helena

Answer

Once you have made sure that your data pipeline is correct, there are a few things to consider; I hope one of the points below helps:

1. Choose the right loss function. Binary cross-entropy might lead your network in the direction of optimizing for all labels equally; if the classes in your images are unbalanced, it can pull the network towards predicting only white, grey, or black images. Try using the dice coefficient loss instead.
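
A minimal sketch of a dice coefficient and the corresponding loss for Keras (the framework used by the linked repository); the names dice_coef and dice_coef_loss are illustrative and not part of the original code:

from keras import backend as K

def dice_coef(y_true, y_pred, smooth=1.0):
    # flatten the ground-truth and predicted masks and measure their overlap
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    # maximizing the dice coefficient is the same as minimizing this loss
    return 1.0 - dice_coef(y_true, y_pred)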

2. Change the line in testGenerator. One thing that seems to be an issue in the testGenerator method in data.py is the following line:

img = img / 255

Change it to:

img /= 255.
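
For context, a simplified test generator with that fix applied could look roughly like this; it is a paraphrased sketch, not the exact code from data.py in the repository:

import os
import numpy as np
from skimage import io, transform

def test_generator(test_path, num_image=30, target_size=(256, 256)):
    for i in range(num_image):
        img = io.imread(os.path.join(test_path, "%d.png" % i), as_gray=True)
        img = img.astype(np.float64)  # make sure the array is float before dividing in place
        img /= 255.                   # the fix: divide by a float literal
        img = transform.resize(img, target_size)
        yield np.reshape(img, (1,) + target_size + (1,))  # add batch and channel dimensions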

3. Reduce the learning rate. If your learning rate is too high you might converge to a poor optimum, which also tends to produce only grey, black, or white predictions. Try a learning rate around Adam(lr = 3e-5) and train for a sufficient number of epochs; monitor the dice loss rather than accuracy to check convergence.
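
For example, assuming model is the Keras model returned by the repository's unet() function, and dice_coef_loss/dice_coef are the illustrative functions sketched above:

from keras.optimizers import Adam

model.compile(optimizer=Adam(lr=3e-5),
              loss=dice_coef_loss,
              metrics=[dice_coef])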

4. Do not use activation functions for the last set of convolutions. For the last set of convolutions, i.e. 128 -> 64 -> 64 -> 1, no activation function should be used, as the activation can cause the values to vanish.
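
A toy sketch of the tail of the network without activations on those layers; the shapes and arguments here are illustrative and are not copied from model.py:

from keras.layers import Input, Conv2D
from keras.models import Model

inputs = Input((256, 256, 128))            # stand-in for the 128-channel feature map
x = Conv2D(64, 3, padding='same')(inputs)  # no activation
x = Conv2D(64, 3, padding='same')(x)       # no activation
outputs = Conv2D(1, 1)(x)                  # no sigmoid on the single-channel output
tail = Model(inputs, outputs)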

5. Your saving method could have a "bug". Make sure you scale your image to values between 0 and 255 before saving; skimage usually warns you with a "low contrast image" warning otherwise.

import os
from skimage import io, img_as_uint

# img_as_uint rescales the float prediction to an unsigned integer range before saving
io.imsave(os.path.join(save_path, "%d_predict.tif" % i), img_as_uint(img))

6. Your saving format could have a "bug". Make sure you save your images in a suitable format. I have seen saving as .png give only black or grey images, whereas .tif files work like a charm.

7. You might just not be training enough. Often you will give up when the network does not do what you would like it to and abort the training. Chances are, additional training epochs are exactly what it needed.
