Why when using this simple model with multiple outputs does Keras complain about a lack of gradients?


Problem description

So this problem occurs in the context of a larger project, but I've assembled a minimal working example. Consider the following:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_1 = Input((5,))
hidden_a = Dense(2)(input_1)
hidden_b = Dense(2)(input_1)

# m1 has two outputs; only the first is used downstream
m1 = Model(input_1, [hidden_a, hidden_b])

input_2 = Input((2,))
output = Dense(1)(input_2)

m2 = Model(input_2, output)

# Chain the models: feed m1's first output into m2
m3 = Model(input_1, m2(m1(input_1)[0]))

print(m3.summary())

m3.compile(optimizer='adam', loss='mse')

x = np.random.random(size=(10, 5))
y = np.random.random(size=(10, 1))

m3.fit(x, y)

My expectation is that when evaluating this network, the output of hidden_b will simply be discarded and I'll effectively have a simple feed-forward neural network that goes input_1 -> hidden_a -> input_2 -> output. Instead, I get a cryptic error:

Traceback (most recent call last):
  File "test.py", line 37, in <module>
    m3.fit(x,y)
  File "/home/thomas/.local/lib/python3.5/site-packages/keras/engine/training.py", line 1013, in fit
    self._make_train_function()
  File "/home/thomas/.local/lib/python3.5/site-packages/keras/engine/training.py", line 497, in _make_train_function
    loss=self.total_loss)
  File "/home/thomas/.local/lib/python3.5/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/home/thomas/.local/lib/python3.5/site-packages/keras/optimizers.py", line 445, in get_updates
    grads = self.get_gradients(loss, params)
  File "/home/thomas/.local/lib/python3.5/site-packages/keras/optimizers.py", line 80, in get_gradients
    raise ValueError('An operation has `None` for gradient. '
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

Any idea what might be causing this? Thanks!

Update: If passing input_1 to m1 is the problem, then why does this work?

import numpy as np
from keras.layers import Input, Dense, Lambda
from keras.models import Model

input_1 = Input((5,))
hidden_a = Dense(2)(input_1)
hidden_b = Dense(2)(input_1)

def sampling(args):
    hidden_a, hidden_b = args
    return hidden_a + hidden_b

# z depends on both hidden_a and hidden_b
z = Lambda(sampling)([hidden_a, hidden_b])

m1 = Model(input_1, [hidden_a, hidden_b, z])

input_2 = Input((2,))
output = Dense(1)(input_2)

m2 = Model(input_2, output)

# Use m1's third output (z) as m2's input
m3 = Model(input_1, m2(m1(input_1)[2]))

m3.compile(optimizer='adam', loss='mse')

x = np.random.random(size=(10, 5))
y = np.random.random(size=(10, 1))

m3.fit(x, y)

Answer

You're passing an input to model 1 that is already the input of model 1. When you call m1(input_1), the whole of m1 is added to m3 as a single layer, so every weight in m1, including the Dense layer behind hidden_b, becomes a trainable weight of m3. Because the loss never depends on hidden_b's output, those weights receive a None gradient and the optimizer raises the error above. That is also why the updated example works: z depends on both hidden_a and hidden_b, so every weight in m1 contributes to the loss. The fix is to build m3 from m1's existing output tensor instead of calling m1 again:

m3 = Model(input_1, m2(m1.outputs[0]))
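
For completeness, here is a minimal sketch of the full corrected script (same toy shapes and random data as the question); with this one-line change, m3 contains only the layers on the path from input_1 to the output, and it compiles and trains without error:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_1 = Input((5,))
hidden_a = Dense(2)(input_1)
hidden_b = Dense(2)(input_1)

m1 = Model(input_1, [hidden_a, hidden_b])

input_2 = Input((2,))
output = Dense(1)(input_2)

m2 = Model(input_2, output)

# Build m3 from m1's existing output tensor. The Dense layer behind
# hidden_b is not on the path to the output, so Keras excludes it from
# m3's trainable weights and no gradient is left undefined.
m3 = Model(input_1, m2(m1.outputs[0]))

m3.compile(optimizer='adam', loss='mse')

x = np.random.random(size=(10, 5))
y = np.random.random(size=(10, 1))

m3.fit(x, y)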

