ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_6/MaxPool' (op: 'MaxPool') with input shapes: [?,1,1,64]


Problem description


I am getting a Negative dimension size error whenever I keep the height and width of the input image below 362x362. I am surprised, because this error is generally caused by wrong input dimensions, and I did not find any reason why the number of rows and columns should cause it. Below is my code:

from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model

batch_size = 32
num_classes = 7
epochs=50
height = 362
width = 362

train_datagen = ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        'train',
        target_size=(height, width),
        batch_size=batch_size,
        class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
        'validation',
        target_size=(height, width),
        batch_size=batch_size,
        class_mode='categorical')

base_model = InceptionV3(weights='imagenet', include_top=False,
                         input_shape=(height, width, 3))

x = base_model.output
x = Conv2D(32, (3, 3), use_bias=True, activation='relu') (x) #line2
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu') (x) #line3
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(batch_size, activation='relu')(x) #line1
x = (Dropout(0.5))(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

for layer in base_model.layers:
    layer.trainable = False

model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit_generator(
        train_generator,
        samples_per_epoch=128,
        nb_epoch=epochs,
        validation_data=validation_generator,
        verbose=2)

for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

for layer in model.layers[:309]:
    layer.trainable = False
for layer in model.layers[309:]:
    layer.trainable = True

from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

model.save('my_model.h5')
model.fit_generator(
        train_generator,
        samples_per_epoch=512,
        nb_epoch=epochs,
        validation_data=validation_generator,
        verbose=2)

Accepted answer


InceptionV3 downsamples the input image very aggressively. For a 362x362 input image, the base_model.output tensor is (?, 9, 9, 2048), which is easy to see if you write

base_model.summary()
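The 9x9 figure can also be reproduced by hand: only the 'valid'-padded and stride-2 stages of InceptionV3 shrink the feature map, so replaying them with plain integer arithmetic gives the same answer. This is a sketch based on the published InceptionV3 architecture, not Keras itself:

```python
def out_size(n, kernel, stride, padding):
    """Spatial output size of a conv/pool layer (TensorFlow rules)."""
    if padding == 'same':
        return -(-n // stride)           # ceil(n / stride)
    return (n - kernel) // stride + 1    # 'valid' padding

# Kernel/stride/padding of the InceptionV3 stem and reduction stages
# that change the spatial size (all other blocks use 'same' padding
# with stride 1 and leave it unchanged).
stages = [
    (3, 2, 'valid'),  # stem conv
    (3, 1, 'valid'),  # stem conv
    (3, 1, 'same'),   # stem conv
    (3, 2, 'valid'),  # stem max-pool
    (1, 1, 'valid'),  # stem conv
    (3, 1, 'valid'),  # stem conv
    (3, 2, 'valid'),  # stem max-pool
    (3, 2, 'valid'),  # reduction block (mixed3)
    (3, 2, 'valid'),  # reduction block (mixed8)
]

n = 362
for kernel, stride, padding in stages:
    n = out_size(n, kernel, stride, padding)
print(n)  # -> 9, matching the (?, 9, 9, 2048) output shape
```

The same loop with n = 299 (the canonical InceptionV3 input size) yields 8, i.e. the familiar 8x8 final feature map.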


After that, your model downsamples the (?, 9, 9, 2048) tensor even further (like in this question):

(?, 9, 9, 2048)  # input
(?, 7, 7, 32)    # after 1st conv-2d
(?, 3, 3, 32)    # after 1st max-pool-2d
(?, 1, 1, 64)    # after 2nd conv-2d
error: can't downsample further!
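The collapse above follows from simple shape arithmetic: a 3x3 convolution with the default 'valid' padding removes 2 from each spatial side, and a 2x2 max-pool with its default stride halves what is left (rounding down). A minimal sketch:

```python
def conv3x3_valid(n):
    """Spatial size after a 3x3 conv, stride 1, 'valid' padding."""
    return n - 2

def maxpool2x2(n):
    """Spatial size after a 2x2 max-pool with the default stride of 2."""
    return (n - 2) // 2 + 1

n = 9                  # InceptionV3 output for a 362x362 input
n = conv3x3_valid(n)   # 1st Conv2D      -> 7
n = maxpool2x2(n)      # 1st MaxPooling  -> 3
n = conv3x3_valid(n)   # 2nd Conv2D      -> 1
print(n)               # 1: the 2nd 2x2 max-pool cannot subtract 2 from 1
```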


You can prevent the conv layers from reducing the tensor size by adding the padding='same' argument; that alone will make the error disappear. Or you can simply reduce the amount of downsampling.
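With padding='same' on the two Conv2D layers, the convolutions no longer shrink the feature map and only the pools halve it, so the head builds even from a 9x9 input. Repeating the shape arithmetic with that one change:

```python
def conv3x3_same(n):
    """'same' padding pads the input so a stride-1 conv keeps the size."""
    return n

def maxpool2x2(n):
    """Default 'valid' 2x2 max-pool with stride 2."""
    return (n - 2) // 2 + 1

n = 9
n = conv3x3_same(n)  # Conv2D(32, (3, 3), padding='same') -> 9
n = maxpool2x2(n)    # MaxPooling2D(2, 2)                 -> 4
n = conv3x3_same(n)  # Conv2D(64, (3, 3), padding='same') -> 4
n = maxpool2x2(n)    # MaxPooling2D(2, 2)                 -> 2
print(n)             # 2: both pools fit, no negative dimension
```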

