What is the difference between a sigmoid followed by the cross entropy and sigmoid_cross_entropy_with_logits in TensorFlow?


Question

When trying to compute the cross-entropy with a sigmoid activation function, there is a difference between

  1. loss1 = -tf.reduce_sum(p*tf.log(q), 1)
  2. loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q),1)

But they are the same with the softmax activation function.

Here is the sample code:

import tensorflow as tf

sess = tf.InteractiveSession()
p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)
sess.run(tf.global_variables_initializer())

feed_dict = {p: [[0, 0, 0, 1, 0], [1, 0, 0, 0, 0]],
             logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.2, 0.1, 0.1]]}
loss1 = -tf.reduce_sum(p * tf.log(q), 1).eval(feed_dict)
loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q), 1).eval(feed_dict)

print(p.eval(feed_dict), "\n", q.eval(feed_dict))
print("\n", loss1, "\n", loss2)

Answer

You're confusing the cross-entropy for binary and multi-class problems.

The formula that you use is correct and it directly corresponds to tf.nn.softmax_cross_entropy_with_logits:

-tf.reduce_sum(p * tf.log(q), axis=1)

p and q are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:

p = tf.placeholder(tf.float32, shape=[None, 2])
logit_q = tf.placeholder(tf.float32, shape=[None, 2])
q = tf.nn.softmax(logit_q)

feed_dict = {
  p: [[0, 1],
      [1, 0],
      [1, 0]],
  logit_q: [[0.2, 0.8],
            [0.7, 0.3],
            [0.5, 0.5]]
}

prob1 = -tf.reduce_sum(p * tf.log(q), axis=1)
prob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)
print(prob1.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]
print(prob2.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]
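As a sanity check, the first row of the output above can be reproduced by hand in pure Python (no TensorFlow needed), applying softmax to the logits and then the multi-class cross-entropy against the one-hot label p = [0, 1]:

```python
import math

# Hand-check of the first row above: softmax over the two logits,
# then cross-entropy against the one-hot label p = [0, 1].
logits = [0.2, 0.8]
p = [0, 1]

exps = [math.exp(x) for x in logits]
total = sum(exps)
q = [e / total for e in exps]  # softmax: a valid probability distribution

cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
print(cross_entropy)  # ~0.437488, matching the TensorFlow output above
```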

Note that q is computed by tf.nn.softmax, i.e. it outputs a probability distribution. So it is still the multi-class cross-entropy formula, only with N = 2.

This time, the correct formula is

p * -tf.log(q) + (1 - p) * -tf.log(1 - q)

Though mathematically it is a special case of the multi-class formula, the meaning of p and q is different. In the simplest case, each p and q is a number, corresponding to the probability of class A.

Important: Don't get confused by the common p * -tf.log(q) part and the sum. Previously, p was a one-hot vector; now it's a number, zero or one. Same for q: it was a probability distribution, now it's a number (a probability).
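To make the scalar case concrete, here is a minimal pure-Python sketch (no TensorFlow) of the binary cross-entropy for a single label/probability pair:

```python
import math

def binary_cross_entropy(p, q):
    """Binary cross-entropy for a single label p (0 or 1) and probability q."""
    return -p * math.log(q) - (1 - p) * math.log(1 - q)

# When p = 1, only the -log(q) term is active; when p = 0, only -log(1 - q).
print(binary_cross_entropy(1, 0.9))  # ~0.105: prediction agrees with the label
print(binary_cross_entropy(0, 0.9))  # ~2.303: prediction disagrees
```

Note how the (1 - p) term, absent from loss1 in the question, is what penalizes confident predictions for a zero label.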

If p is a vector, each individual component is considered an independent binary classification. See this answer that outlines the difference between softmax and sigmoid functions in TensorFlow. So the definition p = [0, 0, 0, 1, 0] doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 is on. The definition q = [0.2, 0.2, 0.2, 0.2, 0.2] means that each of the 5 features is on with 20% probability.

This explains the use of the sigmoid function before the cross-entropy: its goal is to squash the logit into the [0, 1] interval.
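For reference, the TensorFlow documentation gives a numerically stable equivalent, max(x, 0) - x*z + log(1 + exp(-|x|)), that tf.nn.sigmoid_cross_entropy_with_logits uses to work directly on the logit x and label z without computing the sigmoid explicitly. A pure-Python sketch comparing it to the naive form:

```python
import math

def sigmoid_ce_with_logits(z, x):
    """Stable form used by tf.nn.sigmoid_cross_entropy_with_logits:
    max(x, 0) - x*z + log(1 + exp(-|x|))."""
    return max(x, 0) - x * z + math.log(1 + math.exp(-abs(x)))

def naive(z, x):
    q = 1 / (1 + math.exp(-x))  # sigmoid squashes the logit into (0, 1)
    return -z * math.log(q) - (1 - z) * math.log(1 - q)

print(sigmoid_ce_with_logits(1, 0.2))  # ~0.598139
print(naive(1, 0.2))                   # same value, but overflows for large |x|
```

The stable form avoids computing exp of a large positive number, which would overflow in the naive version for extreme logits.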

The formula above still holds for multiple independent features, and that's exactly what tf.nn.sigmoid_cross_entropy_with_logits computes:

p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)

feed_dict = {
  p: [[0, 0, 0, 1, 0],
      [1, 0, 0, 0, 0]],
  logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2],
            [0.3, 0.3, 0.2, 0.1, 0.1]]
}

prob1 = -p * tf.log(q)
prob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q)
prob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1-p) * -tf.log(1-tf.sigmoid(logit_q))
prob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)
print(prob1.eval(feed_dict))
print(prob2.eval(feed_dict))
print(prob3.eval(feed_dict))
print(prob4.eval(feed_dict))

You should see that the last three tensors are equal, while prob1 is only part of the cross-entropy, so it contains the correct value only where p is 1:

[[ 0.          0.          0.          0.59813893  0.        ]
 [ 0.55435514  0.          0.          0.          0.        ]]
[[ 0.79813886  0.79813886  0.79813886  0.59813887  0.79813886]
 [ 0.5543552   0.85435522  0.79813886  0.74439669  0.74439669]]
[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]
 [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]
[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]
 [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]

Now it should be clear that taking a sum of -p * tf.log(q) along axis=1 doesn't make sense in this setting, though it would be a valid formula in the multi-class case.
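A quick pure-Python check on the first row of the question's feed_dict (p = [0, 0, 0, 1, 0], all logits 0.2) shows exactly why the summed losses differ: loss1 keeps only the p = 1 term, while the full binary cross-entropy also penalizes the four p = 0 positions:

```python
import math

def bce(p, x):
    q = 1 / (1 + math.exp(-x))  # sigmoid of the logit
    return -p * math.log(q) - (1 - p) * math.log(1 - q)

p_row = [0, 0, 0, 1, 0]
logit_row = [0.2, 0.2, 0.2, 0.2, 0.2]

loss1 = sum(-p * math.log(1 / (1 + math.exp(-x)))
            for p, x in zip(p_row, logit_row) if p)  # only the p = 1 term
loss2 = sum(bce(p, x) for p, x in zip(p_row, logit_row))

print(loss1)  # ~0.598139
print(loss2)  # ~3.790695 -- four extra -log(1 - q) terms for the zero labels
```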
