What is the difference between a sigmoid followed by the cross-entropy and sigmoid_cross_entropy_with_logits in TensorFlow?


Problem description


When trying to get the cross-entropy with a sigmoid activation function, there is a difference between

  1. loss1 = -tf.reduce_sum(p*tf.log(q), 1)
  2. loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q),1)

But they are the same when using the softmax activation function.

Following is the sample code:

import tensorflow as tf

sess2 = tf.InteractiveSession()
p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)
sess2.run(tf.global_variables_initializer())

feed_dict = {p: [[0, 0, 0, 1, 0], [1,0,0,0,0]], logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.2, 0.1, 0.1]]}
loss1 = -tf.reduce_sum(p*tf.log(q),1).eval(feed_dict)
loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q),1).eval(feed_dict)

print(p.eval(feed_dict), "\n", q.eval(feed_dict))
print("\n",loss1, "\n", loss2)

Solution

You're confusing the cross-entropy for binary and multi-class problems.

Multi-class cross-entropy

The formula that you use is correct and it directly corresponds to tf.nn.softmax_cross_entropy_with_logits:

-tf.reduce_sum(p * tf.log(q), axis=1)
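
In math notation, this is the usual cross-entropy between a target distribution p and a predicted distribution q over N classes:

H(p, q) = -\sum_{i=1}^{N} p_i \log q_i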

p and q are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:

p = tf.placeholder(tf.float32, shape=[None, 2])
logit_q = tf.placeholder(tf.float32, shape=[None, 2])
q = tf.nn.softmax(logit_q)

feed_dict = {
  p: [[0, 1],
      [1, 0],
      [1, 0]],
  logit_q: [[0.2, 0.8],
            [0.7, 0.3],
            [0.5, 0.5]]
}

prob1 = -tf.reduce_sum(p * tf.log(q), axis=1)
prob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)
print(prob1.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]
print(prob2.eval(feed_dict))  # [ 0.43748799  0.51301527  0.69314718]

Note that q is computed with tf.nn.softmax, i.e. it outputs a probability distribution. So it's still the multi-class cross-entropy formula, only for N = 2.
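
To see how the N = 2 case connects to the next section, write p = (1 - y, y) and q = (1 - \hat{y}, \hat{y}) for a label y (typically 0 or 1) and a predicted probability \hat{y}. The multi-class formula then becomes

-\sum_{i=1}^{2} p_i \log q_i = -y \log \hat{y} - (1 - y) \log (1 - \hat{y}),

which is exactly the binary cross-entropy discussed below, applied to a single output.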

Binary cross-entropy

This time the correct formula is

p * -tf.log(q) + (1 - p) * -tf.log(1 - q)

Though mathematically it's a special case of the multi-class formula, the meaning of p and q is different. In the simplest case, each of p and q is a single number, corresponding to the probability of class A.
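
For instance, a minimal numeric sketch of this single-number case (plain Python, the values are purely illustrative):

import math

q = 0.8  # predicted probability of class A
# p = 1: the example belongs to class A
print(-1 * math.log(q) - (1 - 1) * math.log(1 - q))  # ~0.223
# p = 0: the example does not belong to class A
print(-0 * math.log(q) - (1 - 0) * math.log(1 - q))  # ~1.609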

Important: don't get confused by the common p * -tf.log(q) part and the sum. Previously, p was a one-hot vector; now it's a single number, zero or one. Likewise for q: it was a probability distribution, now it's a single number (a probability).

If p is a vector, each individual component is treated as an independent binary classification. See this answer, which outlines the difference between the softmax and sigmoid functions in TensorFlow. So the definition p = [0, 0, 0, 1, 0] doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 of which is on. The definition q = [0.2, 0.2, 0.2, 0.2, 0.2] means that each of the 5 features is on with 20% probability.

This explains the use of the sigmoid function before the cross-entropy: its goal is to squash the logit into the [0, 1] interval.
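
As a minimal sketch of that squashing (plain NumPy, purely illustrative):

import numpy as np

def sigmoid(x):
    # maps any real-valued logit into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-3.0, 0.0, 0.2, 3.0])))  # ~[0.047 0.5 0.55 0.953]

A practical reason to prefer the fused tf.nn.sigmoid_cross_entropy_with_logits over applying tf.sigmoid and tf.log yourself is numerical stability: according to the TensorFlow documentation it evaluates the equivalent form max(x, 0) - x * z + log(1 + exp(-abs(x))) for logits x and labels z, which avoids overflow in exp(-x) for very negative logits.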

The formula above still holds for multiple independent features, and that's exactly what tf.nn.sigmoid_cross_entropy_with_logits computes:

p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)

feed_dict = {
  p: [[0, 0, 0, 1, 0],
      [1, 0, 0, 0, 0]],
  logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2],
            [0.3, 0.3, 0.2, 0.1, 0.1]]
}

prob1 = -p * tf.log(q)
prob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q)
prob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1-p) * -tf.log(1-tf.sigmoid(logit_q))
prob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)
print(prob1.eval(feed_dict))
print(prob2.eval(feed_dict))
print(prob3.eval(feed_dict))
print(prob4.eval(feed_dict))

You should see that the last three tensors are equal, while prob1 is only part of the cross-entropy, so it contains the correct value only where p is 1:

[[ 0.          0.          0.          0.59813893  0.        ]
 [ 0.55435514  0.          0.          0.          0.        ]]
[[ 0.79813886  0.79813886  0.79813886  0.59813887  0.79813886]
 [ 0.5543552   0.85435522  0.79813886  0.74439669  0.74439669]]
[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]
 [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]
[[ 0.7981388   0.7981388   0.7981388   0.59813893  0.7981388 ]
 [ 0.55435514  0.85435534  0.7981388   0.74439663  0.74439663]]

Now it should be clear that taking a sum of -p * tf.log(q) along axis=1 doesn't make sense in this setting, though it'd be a valid formula in the multi-class case.
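
If you do want a single scalar loss for this multi-label setup, a common convention (just a sketch; how you reduce the per-feature values is a modeling choice rather than something the op dictates) is to average them, e.g.:

per_feature = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)
loss = tf.reduce_mean(per_feature)  # mean over all features and batch entries
print(loss.eval(feed_dict))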
