Weighted Training Examples in Tensorflow

Problem Description

Given a set of training examples for training a neural network, we want to give more or less weight to various examples in training. We apply a weight between 0.0 and 1.0 to each example based on some criterion for the "value" (e.g. validity or confidence) of the example. How can this be implemented in Tensorflow, in particular when using tf.nn.sparse_softmax_cross_entropy_with_logits()?

Recommended Answer

In the most common case, where you call tf.nn.sparse_softmax_cross_entropy_with_logits with logits of shape [batch_size, num_classes] and labels of shape [batch_size], the function returns a tensor of shape [batch_size] containing one loss value per example. You can multiply this tensor by a weight tensor before reducing it to a single loss value:

# One weight per example in the batch, fed in at training time.
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)
# Shape [batch_size]: one cross-entropy value per example.
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Scale each example's loss by its weight, then reduce to a scalar.
loss = tf.reduce_mean(weights * loss_per_example)
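For context, here is a minimal end-to-end sketch of how these pieces might fit together in a TF 1.x graph. The linear model, the optimizer, and the concrete batch values below are illustrative assumptions, not part of the original answer:

import numpy as np
import tensorflow as tf  # assumes the TensorFlow 1.x API used above

num_features, num_classes = 10, 3  # hypothetical problem sizes

# Inputs: features, integer class labels, and one weight per example.
x = tf.placeholder(tf.float32, shape=[None, num_features])
labels = tf.placeholder(tf.int32, shape=[None])
weights = tf.placeholder(tf.float32, shape=[None], name="loss_weights")

# A single linear layer producing logits of shape [batch_size, num_classes].
logits = tf.layers.dense(x, num_classes)

# Weighted loss, exactly as in the answer above.
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
loss = tf.reduce_mean(weights * loss_per_example)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # One training step on a toy batch; weights encode per-example confidence.
    batch = {
        x: np.random.rand(4, num_features).astype(np.float32),
        labels: np.array([0, 1, 2, 1], dtype=np.int32),
        weights: np.array([1.0, 0.5, 0.2, 1.0], dtype=np.float32),
    }
    _, loss_value = sess.run([train_op, loss], feed_dict=batch)

One design note: tf.reduce_mean divides by the batch size regardless of the weights, so the loss scale shrinks when many weights are small. If a true weighted average is wanted instead, one option is tf.reduce_sum(weights * loss_per_example) / tf.reduce_sum(weights).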
