Tensorflow: Interpretation of Weight in Weighted Cross Entropy

Question

The Tensorflow function tf.nn.weighted_cross_entropy_with_logits() takes the argument pos_weight. The documentation defines pos_weight as "A coefficient to use on the positive examples." I assume this means that increasing pos_weight increases the loss from false positives and decreases the loss from false negatives. Or do I have that backwards?

Answer

Actually, it's the other way around. Quoting the documentation:

The argument pos_weight is used as a multiplier for the positive targets.
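
Concretely, the documentation gives the formula the op computes. Here is a minimal sketch (assuming TensorFlow 2.x eager execution, where the argument is named labels) that reproduces it by hand, with arbitrary toy values:

```python
import tensorflow as tf

logits = tf.constant([2.0, -1.0, 0.5])
labels = tf.constant([1.0, 0.0, 1.0])
pos_weight = 2.0

# Built-in op: positive targets are multiplied by pos_weight.
wce = tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=pos_weight)

# Hand-rolled version of the documented formula:
#   pos_weight * z * -log(sigmoid(x)) + (1 - z) * -log(1 - sigmoid(x))
p = tf.sigmoid(logits)
manual = (pos_weight * labels * -tf.math.log(p)
          + (1.0 - labels) * -tf.math.log(1.0 - p))

print(wce.numpy())     # matches `manual` up to numerical precision
print(manual.numpy())
```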

So, assuming you have 5 positive examples in your dataset and 7 negative ones, if you set pos_weight=2, then your loss behaves as if you had 10 positive examples and 7 negative ones.
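
A quick way to check this claim (a sketch, again assuming TensorFlow 2.x; the logit values are arbitrary): the summed weighted loss with pos_weight=2 equals the summed unweighted loss on a dataset in which every positive example appears twice.

```python
import tensorflow as tf

# Toy dataset: 5 positives, 7 negatives (logit values are arbitrary).
pos_logits = tf.fill([5], 0.3)
neg_logits = tf.fill([7], -0.4)
logits = tf.concat([pos_logits, neg_logits], axis=0)
labels = tf.concat([tf.ones(5), tf.zeros(7)], axis=0)

# Weighted loss with pos_weight=2 ...
weighted = tf.reduce_sum(tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=2.0))

# ... equals the plain loss with every positive example duplicated
# (10 positives, 7 negatives).
dup_logits = tf.concat([pos_logits, pos_logits, neg_logits], axis=0)
dup_labels = tf.concat([tf.ones(10), tf.zeros(7)], axis=0)
plain = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(
    labels=dup_labels, logits=dup_logits))

print(weighted.numpy(), plain.numpy())  # the two sums agree
```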

Assume you got all of the positive examples wrong and all of the negative ones right. Originally you would have 5 false negatives and 0 false positives. When you increase pos_weight, the loss contributed by those false negatives artificially increases. Note that the loss value coming from false positives doesn't change.
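
To see this directly, here is a small sketch (same TensorFlow 2.x assumption) with one confidently wrong prediction of each kind: the false-negative term scales with pos_weight, while the false-positive term stays fixed.

```python
import tensorflow as tf

# Index 0: a positive example predicted negative (false negative).
# Index 1: a negative example predicted positive (false positive).
logits = tf.constant([-2.0, 2.0])
labels = tf.constant([1.0, 0.0])

for w in [1.0, 2.0, 5.0]:
    loss = tf.nn.weighted_cross_entropy_with_logits(
        labels=labels, logits=logits, pos_weight=w)
    print(w, loss.numpy())
# The first loss entry grows with pos_weight; the second is unchanged.
```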
