Keras wrong image size

Problem description

I want to test the accuracy of my CNN model on the test images. The following code converts the ground-truth images from mha format to png format.

def save_labels(fns):
    '''
    INPUT list 'fns': filepaths to all labels
    '''
    progress.currval = 0
    for label_idx in progress(xrange(len(fns))):
        slices = io.imread(fns[label_idx], plugin='simpleitk')
        for slice_idx in xrange(len(slices)):
            # commented code that tried to rescale the image slices; it did not work:
            # strip = slices[slice_idx].reshape(1200, 240)
            # if np.max(strip) != 0:
            #     strip /= np.max(strip)
            # if np.min(strip) <= -1:
            #     strip /= abs(np.min(strip))
            io.imsave('Labels2/{}_{}L.png'.format(label_idx, slice_idx), slices[slice_idx])

This code produces 240 x 240 png images. However, most of them are low contrast or completely blackened. Moving on, I now pass these images to a function that predicts the class of each labelled image.
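(As an aside, low-contrast or black output is what you would expect if the slices hold small integer class labels, e.g. 0 to 4, written straight to png. A minimal sketch of stretching one label slice for visualization, under that assumption; the helper name is made up for illustration:

    import numpy as np
    from skimage import io

    def save_label_visible(slice_2d, path):
        # stretch class labels (e.g. 0..4) to the full 0..255 range so the png is visible;
        # assumes slice_2d is a 2-D integer label array
        arr = slice_2d.astype('float')
        if np.max(arr) != 0:
            arr /= np.max(arr)                       # scale to [0, 1]
        io.imsave(path, (arr * 255).astype('uint8'))
)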

   def predict_image(self, test_img, show=False):
        '''
        predicts classes of input image
        INPUT   (1) str 'test_img': filepath to image to predict on
                (2) bool 'show': True to show the results of prediction, False to return prediction
        OUTPUT  (1) if show == False: array of predicted pixel classes for the center 208 x 208 pixels
                (2) if show == True: displays segmentation results
        '''
        imgs = io.imread(test_img, plugin='simpleitk').astype('float').reshape(5,240,240)
        plist = []

        # create patches from an entire slice
        for img in imgs[:-1]:
            if np.max(img) != 0:
                img /= np.max(img)
            p = extract_patches_2d(img, (33,33))
            plist.append(p)
        patches = np.array(zip(np.array(plist[0]), np.array(plist[1]), np.array(plist[2]), np.array(plist[3])))

        # predict classes of each pixel based on model
        full_pred = keras.utils.np_utils.probas_to_classes(self.model_comp.predict(patches))
        fp1 = full_pred.reshape(208,208)
        if show:
            io.imshow(fp1)
            plt.show()
        else:
            return fp1

I am getting ValueError: cannot reshape array of size 172800 into shape (5,240,240). I changed the 5 to 3 so that 3 x 240 x 240 = 172800, but then a new error follows: ValueError: Error when checking : expected convolution2d_input_1 to have 4 dimensions, but got array with shape (43264, 33, 33).
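A quick way to see where the 172800 comes from is to print the loaded array's shape before reshaping. A minimal check, reusing the names from the code above:

    imgs = io.imread(test_img, plugin='simpleitk').astype('float')
    print imgs.shape, imgs.size   # e.g. (3, 240, 240) and 172800 = 3 * 240 * 240
    # reshape(5, 240, 240) would need 5 * 240 * 240 = 288000 elements, hence the ValueError
    imgs = imgs.reshape(3, 240, 240)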

My model is as follows:

        single = Sequential()
        # channels-first input: n_chan feature maps of 33 x 33 patches
        single.add(Convolution2D(self.n_filters[0], self.k_dims[0], self.k_dims[0], border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg), input_shape=(self.n_chan,33,33)))
        single.add(Activation(self.activation))
        single.add(BatchNormalization(mode=0, axis=1))
        single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
        single.add(Dropout(0.5))
        single.add(Convolution2D(self.n_filters[1], self.k_dims[1], self.k_dims[1], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
        single.add(BatchNormalization(mode=0, axis=1))
        single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
        single.add(Dropout(0.5))
        single.add(Convolution2D(self.n_filters[2], self.k_dims[2], self.k_dims[2], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
        single.add(BatchNormalization(mode=0, axis=1))
        single.add(MaxPooling2D(pool_size=(2,2), strides=(1,1)))
        single.add(Dropout(0.5))
        single.add(Convolution2D(self.n_filters[3], self.k_dims[3], self.k_dims[3], activation=self.activation, border_mode='valid', W_regularizer=l1l2(l1=self.w_reg, l2=self.w_reg)))
        single.add(Dropout(0.25))

        single.add(Flatten())
        single.add(Dense(5))
        single.add(Activation('softmax'))

        sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
        # pass the configured instance; the string 'sgd' would ignore the settings above
        single.compile(loss='categorical_crossentropy', optimizer=sgd)
        print 'Done.'
        return single
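To confirm what the network expects at its input, it can help to inspect the compiled model. In Keras 1.2.2 something like the following works; the method name is hypothetical, standing in for the code above:

    model = self.compile_model()     # hypothetical wrapper around the model code above
    print model.input_shape          # expected: (None, n_chan, 33, 33)
    model.summary()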

I am using Keras 1.2.2. Please refer here and here (is it due to this change in full_predict in the above code?) for my previous posts giving background information, and refer to this for why the specific sizes like 33, 33 are used.

Answer

You should check the shape of the patches array. It should have 4 dimensions: (nrBatches, nrChannels, Width, Height). According to your error message there are only 3 dimensions, so it seems you merged your channel dimension into your batch dimension.
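A minimal sketch of building that 4-D array, assuming plist holds one (n_patches, 33, 33) array per input channel as in the question's code:

    import numpy as np

    # stack the per-channel patch arrays along a new channel axis
    patches = np.stack(plist, axis=1)   # -> (n_patches, n_channels, 33, 33)
    print patches.shape                 # e.g. (43264, 4, 33, 33)
    predictions = self.model_comp.predict(patches)

np.stack inserts the new channel axis right after the batch axis, which matches the (nrBatches, nrChannels, Width, Height) layout described above.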
