ValueError: Please initialize `TimeDistributed` layer with a `Layer` instance


Question

I'm trying to build a model that can be trained on both audio and video samples, but I get this error:
ValueError: Please initialize `TimeDistributed` layer with a `Layer` instance. You passed: Tensor("input_13:0", shape=(None, 5, 648, 384, 3), dtype=float32)

Here are my three model functions:

def build_convnet(shape=(648, 384, 3)):
    momentum = .9 
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Conv2D(64, (2,2), input_shape=shape,padding='same', activation='relu'))
    model.add(tf.keras.layers.Conv2D(64, (2,2), padding='same', activation='relu'))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'))
    model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum))
    model.add(tf.keras.layers.MaxPool2D())
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu'))
    model.add(tf.keras.layers.Conv2D(256, (3,3), padding='same', activation='relu'))
    model.add(tf.keras.layers.BatchNormalization(momentum=momentum))
    model.add(tf.keras.layers.GlobalMaxPool2D())

    print(model.summary())
    return model

def action_model(shape=(5, 648, 384, 3)):
    # Create our convnet with (112, 112, 3) input shape
    convnet = build_convnet(shape[1:])
    # then create our final model
    # model = tf.keras.Sequential()
    # add the convnet with (5, 112, 112, 3) shape
    input_shape = tf.keras.layers.Input(shape)
    TD = tf.keras.layers.TimeDistributed(input_shape)(convnet)
    # here, you can also use tf.keras.layers.GRU or LSTM
    LSTM1 = tf.keras.layers.LSTM(1024)(TD)

    Dense1 = tf.keras.layers.Dense(512, activation='relu')(LSTM1)
    Drop1 = tf.keras.layers.Dropout(.2)(Dense1)
    Dense2 = tf.keras.layers.Dense(128, activation='relu')(Drop1)
    Drop2 = tf.keras.layers.Dropout(.2)(Dense2)
    Dense3 = tf.keras.layers.Dense(64, activation='relu')(Drop2)
    # Dense4 = tf.keras.layers.Dense(2, activation='softmax')(Dense3)

    model = tf.keras.models.Model(inputs=input_shape,outputs=Dense3)

    return model

def audio_and_final_model():
  input_shape = tf.keras.layers.Input(shape=(220941,1))
  Conv1 = tf.keras.layers.Conv1D(16,activation='relu',kernel_size=(10))(input_shape)
  MaxPool1 = tf.keras.layers.MaxPool1D()(Conv1)
  Dropout1 = tf.keras.layers.Dropout(0.2)(MaxPool1)
  Conv2 = tf.keras.layers.Conv1D(32,activation='relu',kernel_size=(10))(Dropout1)
  MaxPool2 = tf.keras.layers.MaxPool1D()(Conv2)
  Dropout2 = tf.keras.layers.Dropout(0.2)(MaxPool2)
  Conv3 = tf.keras.layers.Conv1D(16,activation='relu',kernel_size=(10))(Dropout2)
  MaxPool3 = tf.keras.layers.MaxPool1D()(Conv3)
  Dropout3 = tf.keras.layers.Dropout(0.2)(MaxPool3)
  Flatten = tf.keras.layers.Flatten()(Dropout3)
  Dense1 = tf.keras.layers.Dense(128,activation='relu')(Flatten)
  Dense2 = tf.keras.layers.Dense(64,activation='relu')(Dense1)


  model = tf.keras.models.Model(inputs=input_shape,outputs=Dense2)

  return model

INSHAPEAM = (5, 648, 384, 3)
INSHAPEAFM = (220941,1)
am = action_model()
afm = audio_and_final_model()

combined = tf.keras.layers.Concatenate([am.output,afm.output])
z = tf.keras.layers.Dense(2,activation='softmax')(combined)

model = tf.keras.models.Model(inputs=[INSHAPEAM,INSHAPEAFM],outputs=z)

I tried to search, but I could only find one answer here, and I didn't really understand it, so it would be a great help if someone could help me. Thanks in advance!

Answer

The problem is in the part below. Change this part of the function: pass the convnet model to `TimeDistributed` and call it on the input tensor (in other words, embed the `build_convnet` part in `action_model` using the functional API rather than the Sequential API):

 TD = tf.keras.layers.TimeDistributed(convnet)(input_shape)
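
A minimal sketch of the corrected `action_model`, assuming TensorFlow 2.x and the same shapes as in the question; the layers are kept as the asker wrote them, only the input variable is renamed to `inputs` for clarity and the `TimeDistributed` call is reordered so it wraps the `convnet` model (which is a `Layer` instance) and is then applied to the input tensor:

def action_model(shape=(5, 648, 384, 3)):
    # Per-frame CNN; a Keras Model is itself a Layer instance,
    # so it can be wrapped by TimeDistributed.
    convnet = build_convnet(shape[1:])

    # Functional API: define the video input, then apply the convnet
    # to every frame via TimeDistributed.
    inputs = tf.keras.layers.Input(shape)
    TD = tf.keras.layers.TimeDistributed(convnet)(inputs)

    # Temporal modelling over the per-frame feature vectors.
    LSTM1 = tf.keras.layers.LSTM(1024)(TD)
    Dense1 = tf.keras.layers.Dense(512, activation='relu')(LSTM1)
    Drop1 = tf.keras.layers.Dropout(.2)(Dense1)
    Dense2 = tf.keras.layers.Dense(128, activation='relu')(Drop1)
    Drop2 = tf.keras.layers.Dropout(.2)(Dense2)
    Dense3 = tf.keras.layers.Dense(64, activation='relu')(Drop2)

    return tf.keras.models.Model(inputs=inputs, outputs=Dense3)

With the arguments in this order, `TimeDistributed` receives a `Layer` (the convnet model) rather than a `Tensor`, which is exactly what the error message asks for.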
