Derivative in loss function in Keras

Problem description

I want to make the following loss function in Keras:

Loss = mse + double_derivative(y_pred,x_train)

I am not able to incorporate the derivative term. I have tried K.gradients(K.gradients(y_pred,x_train),x_train) but it does not help.

I am getting the error:

AttributeError: 'NoneType' object has no attribute 'op'

def _loss_tensor(y_true, y_pred,x_train):
    l1 = K.mean(K.square(y_true - y_pred), axis=-1)
    sigma = 0.01
    lamda = 3
    term = K.square(sigma)*K.gradients(K.gradients(y_pred,x_train),x_train)
    l2 = K.mean(lamda*K.square(term),axis=-1)
    return l1+l2

def loss_func(x_train):
    def loss(y_true,y_pred):
        return _loss_tensor(y_true,y_pred,x_train)
    return loss

def create_model_neural(learning_rate, num_layers,
                 num_nodes, activation):

    model_neural = Sequential()

    x_train = model_neural.add(Dense(num_nod, input_dim=num_input, activation=activation))

    for i in range(num_layers-1):
        model_neural.add(Dense(num_nodes,activation=activation,name=name))

    model_neural.add(Dense(1, activation=activation))

    optimizer = SGD(lr=learning_rate)
    model_loss = loss_func(x_train=x_train)

    model_neural.compile(loss=model_loss,optimizer=optimizer)

    return model_neural

Solution

The problem is that x_train is always None, and Keras can't take a derivative with respect to None. This happens because model_neural.add(...) does not return anything.

I assume that x_train is the input that is passed to the network. In that case, x_train should probably be another argument of create_model_neural, or alternatively you can try the model_neural.input tensor.
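
A minimal sketch of that second suggestion follows (it is not from the original answer): build the network first, then pass model_neural.input, the model's symbolic input tensor, into loss_func instead of the return value of model_neural.add(...). It assumes the old Keras Sequential API with a TensorFlow 1.x-style graph backend, where K.gradients works on symbolic tensors; the num_input argument is a placeholder added for illustration.

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras import backend as K

def _loss_tensor(y_true, y_pred, x_input):
    # x_input is the model's symbolic input tensor, not the numpy training data
    sigma = 0.01
    lamda = 3
    l1 = K.mean(K.square(y_true - y_pred), axis=-1)
    # K.gradients returns a list, so take the first element before differentiating again
    dy_dx = K.gradients(y_pred, x_input)[0]
    d2y_dx2 = K.gradients(dy_dx, x_input)[0]
    term = K.square(sigma) * d2y_dx2
    l2 = K.mean(lamda * K.square(term), axis=-1)
    return l1 + l2

def loss_func(x_input):
    def loss(y_true, y_pred):
        return _loss_tensor(y_true, y_pred, x_input)
    return loss

def create_model_neural(learning_rate, num_layers, num_nodes, activation, num_input):
    model_neural = Sequential()
    model_neural.add(Dense(num_nodes, input_dim=num_input, activation=activation))
    for _ in range(num_layers - 1):
        model_neural.add(Dense(num_nodes, activation=activation))
    model_neural.add(Dense(1, activation=activation))

    # model_neural.input exists once the first layer is added, and the derivative
    # of y_pred with respect to it is well defined
    model_loss = loss_func(model_neural.input)
    model_neural.compile(loss=model_loss, optimizer=SGD(lr=learning_rate))
    return model_neural

One caveat with this sketch: because the first derivative dy_dx is not a scalar, the second K.gradients call differentiates its sum, so d2y_dx2 is not an exact element-wise second derivative; it only mirrors the structure of the expression in the question.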
