ValueError: Input 0 of layer sequential_9 is incompatible with the layer: expected ndim=4, found ndim=0. Full shape received: []
Problem description
This is the code for the input pipeline. It resizes images to (224, 224, 3) as input and (224, 224, 2) as output.
import glob
import tensorflow as tf
from skimage import color  # provides color.rgb2lab, used below

image_path_list = glob.glob('/content/drive/My Drive/datasets/imagenette/*')
data = tf.data.Dataset.list_files(image_path_list)

def tf_rgb2lab(image):
    im_shape = image.shape
    [image,] = tf.py_function(color.rgb2lab, [image], [tf.float32])
    image.set_shape(im_shape)
    return image

def preprocess(path):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize(image, [224, 224])
    image = tf_rgb2lab(image)
    L = image[:, :, 0] / 100.
    ab = image[:, :, 1:] / 128.
    input = tf.stack([L, L, L], axis=2)
    return input, ab

train_ds = data.map(preprocess, tf.data.experimental.AUTOTUNE).batch(64).repeat()
train_ds = data.prefetch(tf.data.experimental.AUTOTUNE)
The following is the code for the model. I don't think anything is wrong with the model, since it works when I call model.predict() on an image. So I'm assuming something is wrong with the input pipeline, but I can't figure out what it is, since it's my first time working with tf.data.
vggmodel = tf.keras.applications.VGG16(include_top=False, weights='imagenet')
model = tf.keras.Sequential()
for i, layer in enumerate(vggmodel.layers):
    model.add(layer)
for layer in model.layers:
    layer.trainable = False
model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.UpSampling2D((2,2)))
model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.UpSampling2D((2,2)))
model.add(tf.keras.layers.Conv2D(64, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.UpSampling2D((2,2)))
model.add(tf.keras.layers.Conv2D(16, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.UpSampling2D((2,2)))
model.add(tf.keras.layers.Conv2D(8, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.Conv2D(2, (3,3), padding='same', activation='tanh'))
model.add(tf.keras.layers.UpSampling2D((2,2)))
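(As a shape sanity check, not part of the original post: the VGG16 convolutional base halves the spatial resolution five times, taking 224 down to 7, and the five UpSampling2D((2, 2)) layers above multiply it back up to 224, so the final output is (224, 224, 2) as intended. A quick arithmetic sketch:)

```python
size = 224
for _ in range(5):   # five max-pooling stages in the VGG16 conv base
    size //= 2
assert size == 7     # the conv base outputs 7x7 feature maps

for _ in range(5):   # five UpSampling2D((2, 2)) layers in the decoder
    size *= 2
assert size == 224   # back to the full 224x224 resolution
```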
Anyway, when I print(train_ds) I get:
<PrefetchDataset shapes: (), types: tf.string>
I tried the following code:
path = next(iter(train_ds))
L,ab = preprocess(path)
L.shape
and I got:
TensorShape([224, 224, 3])
which means it is returning a 3-dimensional tensor. Then why do I get the error when I call:
model.fit(train_ds, epochs=1, steps_per_epoch=steps, callbacks=[model_checkpoint_callback, early_stopping_callback])
Accepted answer
So yeah, it took some time, but I figured it out. It was a pretty silly mistake: the second line below rebuilds train_ds from the original data dataset of raw file-path strings, discarding the map and batch steps, so the model receives scalar string tensors (hence ndim=0 and shape []).
train_ds = data.map(preprocess, tf.data.experimental.AUTOTUNE).batch(64).repeat()
train_ds = data.prefetch(tf.data.experimental.AUTOTUNE)
when it should actually be:
train_ds = data.map(preprocess, tf.data.experimental.AUTOTUNE).batch(64).repeat().prefetch(tf.data.experimental.AUTOTUNE)
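The same pitfall can be reproduced without TensorFlow: tf.data transformations like map, batch, and prefetch return a new dataset rather than mutating the existing one, so calling prefetch on data instead of on the already-transformed train_ds silently throws the earlier steps away. A minimal sketch, with a hypothetical Pipeline class standing in for tf.data.Dataset:

```python
class Pipeline:
    """Hypothetical stand-in for tf.data.Dataset: every method returns a NEW object."""
    def __init__(self, steps=()):
        self.steps = tuple(steps)

    def map(self, fn_name):
        return Pipeline(self.steps + (f"map({fn_name})",))

    def batch(self, n):
        return Pipeline(self.steps + (f"batch({n})",))

    def prefetch(self):
        return Pipeline(self.steps + ("prefetch",))

data = Pipeline()

# Buggy version: the second assignment starts again from `data`,
# so map/batch are lost and only prefetch survives.
train_ds = data.map("preprocess").batch(64)
train_ds = data.prefetch()
assert train_ds.steps == ("prefetch",)

# Fixed version: chain off the previous result instead.
train_ds = data.map("preprocess").batch(64).prefetch()
assert train_ds.steps == ("map(preprocess)", "batch(64)", "prefetch")
```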