Show original image pixels instead of mask in python
Problem Description
I have a deep learning model which returns an array which, when plotted like this,
import numpy as np
import matplotlib.pyplot as plt

res = deeplab_model.predict(np.expand_dims(resized2, 0))
labels = np.argmax(res.squeeze(), -1)  # drop the batch dimension, then take the index of the highest-scoring class per pixel
plt.imshow(labels[:-pad_x])
(the last line above just trims the padding rows before plotting)
looks like this:
The original image is this:
When I do
print(labels[labels>0])
print(labels.shape)
print(len(labels))
I get
[12 12 12 ... 12 12 12]
(512, 512)
512
I want to show the colored pixels of the original image where the mask appears and turn everything else black (or blur it, or some other color I'll choose). How can I do that?
Recommended Answer
I was able to reverse the mask and achieve what I wanted:
import cv2

mask = labels[:-pad_x] == 0                   # True wherever the model predicted background (class 0)
resizedOrig = cv2.resize(frame, (512, 384))   # resize the original frame to the mask's width/height
resizedOrig[mask] = 0                         # black out the background, keeping only the masked object's pixels
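If you would rather blur the background than black it out (one of the options mentioned in the question), the same mask can be used to blend a blurred copy of the frame with the original. This is a minimal sketch, assuming frame, labels, and pad_x are defined as above; the (51, 51) kernel size is an arbitrary choice for a strong blur:

import cv2
import numpy as np

mask = labels[:-pad_x] == 0                           # True on background pixels
resizedOrig = cv2.resize(frame, (512, 384))           # match the mask's height/width
blurred = cv2.GaussianBlur(resizedOrig, (51, 51), 0)  # heavily blurred copy of the frame
# take the blurred pixel where the mask says background, the original pixel elsewhere
output = np.where(mask[..., None], blurred, resizedOrig)

The mask[..., None] adds a channel axis so the 2D boolean mask broadcasts over the 3-channel image.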