Use custom loss value while training in tensorflow


Problem description

I would like to train my neural network using a custom loss value of my own. That is, I would like to perform a feed-forward pass for one mini-batch to store the activations in memory, and then perform backpropagation using my own loss value. This is to be done using tensorflow.

Finally, I need to do something like:

sess.run(optimizer, feed_dict={x: training_data, loss: my_custom_loss_value})

Is that possible? I am assuming that the optimizer depends on the loss, which itself depends on the input. Therefore, I want the inputs to be fed into the graph, but I want to use my own value for the loss.
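As a framework-agnostic sketch of what the question is after (pure NumPy, with a hypothetical one-layer network, not the asker's actual model): run the forward pass for one mini-batch, keep the activations, then seed backpropagation with an externally supplied loss gradient instead of one derived from a built-in loss function.

```python
import numpy as np

# Hypothetical tiny network: y = x @ W
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))

def forward(x):
    # Feed-forward pass; cache the activation needed for backprop.
    y = x @ W
    return y, {"x": x}

def backward(cache, custom_loss_grad):
    # Backprop seeded with an externally supplied dL/dy
    # instead of a gradient computed from a loss function.
    dW = cache["x"].T @ custom_loss_grad
    return dW

x = rng.normal(size=(4, 3))          # mini-batch of 4 examples
y, cache = forward(x)
my_grad = np.full_like(y, 0.5)       # our own "loss" gradient
dW = backward(cache, my_grad)
W -= 0.1 * dW                        # plain SGD step
```

The key point is that whatever supplies the loss value, the weight update still needs the stored activations (`cache["x"]` here), which is why the inputs must be fed in any case.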

Recommended answer

I guess that since the optimizer depends on the activations, they will be evaluated; in other words, the input is going to be fed into the network. Here is an example:

import tensorflow as tf

# Two variables whose product serves as the "loss"; tf.Print logs
# their values whenever they are actually evaluated.
a = tf.Variable(tf.constant(8.0))
a = tf.Print(input_=a, data=[a], message="a:")
b = tf.Variable(tf.constant(6.0))
b = tf.Print(input_=b, data=[b], message="b:")

c = a * b

optimizer = tf.train.AdadeltaOptimizer(learning_rate=0.1).minimize(c)
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)

    # Feed an external value for the loss tensor c while running
    # the optimizer in the same call.
    value, _ = sess.run([c, optimizer], feed_dict={c: 1})
    print(value)

Finally, the printed value is 1.0, while the console shows a:[8] b:[6], which means that the inputs got evaluated.
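A possible way to see why a and b still get evaluated even though c is overridden: for c = a * b, the gradients the optimizer needs are dc/da = b and dc/db = a, which depend on the current values of a and b but not on the value fed for c. A small numerical check in plain Python (independent of TensorFlow) illustrates this:

```python
# For c = a * b, the optimizer needs dc/da = b and dc/db = a.
def c(a, b):
    return a * b

a, b = 8.0, 6.0
eps = 1e-6

# Central-difference numerical gradients of c at (a, b)
dc_da = (c(a + eps, b) - c(a - eps, b)) / (2 * eps)   # ≈ b
dc_db = (c(a, b + eps) - c(a, b - eps)) / (2 * eps)   # ≈ a
```

Nothing in these gradients involves the value fed for c, so feeding c only changes the value returned for c itself, not the gradient computation.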

