Tensorflow ValueError: No gradients provided for any variable

Problem Description

I'm trying to get my TensorFlow model to train on 2 categories of images, but I'm running into a ValueError. Can somebody please help? Here is the relevant code:

# Get image arrays and labels for all image files
images, labels = load_data(sys.argv[1])

# Split data into training and testing sets
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=TEST_SIZE
)

# Get a compiled neural network
model = get_model()
model.summary()

# Fit model on training data
model.fit_generator(x_train, steps_per_epoch=128, epochs=EPOCHS,
                    validation_data=y_train, validation_steps=128)

def load_data(data_dir):
    image_generator = ImageDataGenerator(rescale=1. / 255)
    resized_imgs = image_generator.flow_from_directory(batch_size=128, directory=data_dir,
                              shuffle=True, target_size=dimensions,
       class_mode='binary')

    images, labels = next(resized_imgs)
    plotImages(images[:15])

    return images, labels


def get_model():
    # create a convolutional neural network
    model = tf.keras.models.Sequential([

        # convolutional layer. Learn 32 filters using a 3x3 kernel
        tf.keras.layers.Conv2D(
            32, (3, 3), activation="relu", input_shape=(IMG_WIDTH, IMG_HEIGHT, 3)
        ),

        tf.keras.layers.BatchNormalization(),

        # max-pooling layer, using 2x2 pool size
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),

        # convolutional layer. Learn 32 filters using a 3x3 kernel
        tf.keras.layers.Conv2D(
            32, (3, 3), activation="relu", input_shape=(IMG_WIDTH, IMG_HEIGHT, 3)
        ),

        tf.keras.layers.BatchNormalization(),

        # max-pooling layer, using 2x2 pool size
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),

        # flatten units
        tf.keras.layers.Flatten(),

        # add a hidden layer with dropout
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),

        # add an output layer with NUM_CATEGORIES units
        # (changed activation from softmax to sigmoid, which is the proper activation for binary data)
        tf.keras.layers.Dense(NUM_CATEGORIES, activation="sigmoid")
    ])

    # train neural network
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
        metrics=["accuracy"]
    )

    return model

I end up getting the following error:

ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'batch_normalization/gamma:0', 'batch_normalization/beta:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'batch_normalization_1/gamma:0', 'batch_normalization_1/beta:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0'].

The error is coming from the following line of code, but I'm not sure how to fix it:

model.fit_generator(x_train, steps_per_epoch=128, epochs=EPOCHS,
                    validation_data=y_train, validation_steps=128)

Thanks

Recommended Answer

Figured it out. My logits weren't matching my label shape because of the final output layer in my tf model.

NUM_CATEGORIES = 2

tf.keras.layers.Dense(NUM_CATEGORIES, activation="sigmoid")

I had the units set to 2 instead of 1, so my output shape was (None, 2) instead of (None, 1).
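A minimal sketch of that change, assuming the rest of get_model() from the question stays the same, so only the final layer and the compile call are replaced; since the sigmoid already outputs a probability, from_logits=False is the loss setting that matches it:

# corrected output layer: a single unit, so predictions have shape (None, 1)
# and match the 0/1 labels produced by class_mode='binary'
tf.keras.layers.Dense(1, activation="sigmoid")

# the sigmoid output is already a probability, so the loss should not
# treat it as a raw logit
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
    metrics=["accuracy"]
)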
