Weighted Training Examples in Tensorflow


Problem Description

Given a set of training examples for training a neural network, we want to give more or less weight to various examples in training. We apply a weight between 0.0 and 1.0 to each example based on some criteria for the "value" (e.g. validity or confidence) of the example. How can this be implemented in Tensorflow, in particular when using tf.nn.sparse_softmax_cross_entropy_with_logits()?

Recommended Answer

In the most common case, where you call tf.nn.sparse_softmax_cross_entropy_with_logits with logits of shape [batch_size, num_classes] and labels of shape [batch_size], the function returns a tensor of shape [batch_size] containing one loss value per example. You can multiply this tensor by a weight tensor before reducing it to a single loss value:

import tensorflow as tf

# Assumes `logits` and `labels` are defined elsewhere in the graph.
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
loss = tf.reduce_mean(weights * loss_per_example)
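The snippet above uses the TF1 placeholder/graph style. In TensorFlow 2.x with eager execution, the same idea can be sketched end to end with concrete values; the logits, labels, and weights below are made-up toy numbers for illustration only:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x (eager execution)

# Toy batch: 3 examples, 4 classes (values chosen arbitrarily).
logits = tf.constant([[2.0, 0.5, 0.1, 0.1],
                      [0.1, 3.0, 0.2, 0.1],
                      [0.3, 0.2, 0.1, 2.5]])
labels = tf.constant([0, 1, 3])

# Per-example weights in [0.0, 1.0]; a weight of 0.0 removes the
# third example's contribution to the loss entirely.
weights = tf.constant([1.0, 0.5, 0.0])

# One cross-entropy value per example, shape [3].
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

# Weight each example's loss, then reduce to a scalar.
weighted_loss = tf.reduce_mean(weights * loss_per_example)
```

Note that tf.reduce_mean divides by the batch size, not by the sum of the weights; if you want the weights to act as a true weighted average, divide by tf.reduce_sum(weights) instead.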

