What's the difference between sparse_softmax_cross_entropy_with_logits and softmax_cross_entropy_with_logits?
Question
I recently came across tf.nn.sparse_softmax_cross_entropy_with_logits and I cannot figure out what the difference is compared to tf.nn.softmax_cross_entropy_with_logits.

Is the only difference that training vectors y have to be one-hot encoded when using sparse_softmax_cross_entropy_with_logits?
Reading the API, I was unable to find any other difference compared to softmax_cross_entropy_with_logits. But why do we need the extra function then?
Shouldn't softmax_cross_entropy_with_logits produce the same results as sparse_softmax_cross_entropy_with_logits, if it is supplied with one-hot encoded training data/vectors?
Answer
Having two different functions is a convenience, as they produce the same result.
The difference is simple:
- For sparse_softmax_cross_entropy_with_logits, labels must have the shape [batch_size] and the dtype int32 or int64. Each label is an int in the range [0, num_classes-1].
- For softmax_cross_entropy_with_logits, labels must have the shape [batch_size, num_classes] and dtype float32 or float64.
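For concreteness, a minimal sketch of the two label formats, using the TensorFlow 2.x style of these same tf.nn functions (the answer predates TF 2, so the exact call style here is an assumption):

```python
import tensorflow as tf

batch_size, num_classes = 4, 3
logits = tf.random.normal([batch_size, num_classes])

# Sparse version: one integer class index per example, shape [batch_size].
sparse_labels = tf.constant([0, 2, 1, 2], dtype=tf.int64)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)   # per-example loss, shape [batch_size]

# Dense version: one probability row per example, shape [batch_size, num_classes].
dense_labels = tf.constant([[1., 0., 0.],
                            [0., 0., 1.],
                            [0., 1., 0.],
                            [0., 0., 1.]])
dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=dense_labels, logits=logits)    # per-example loss, shape [batch_size]
```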
Labels used in softmax_cross_entropy_with_logits are the one-hot version of labels used in sparse_softmax_cross_entropy_with_logits.
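A quick way to check that equivalence, again assuming the TF 2.x API: convert the sparse labels with tf.one_hot and compare the per-example losses.

```python
import tensorflow as tf

logits = tf.random.normal([4, 3])
sparse_labels = tf.constant([0, 2, 1, 2])

# tf.one_hot expands each integer index into the row the dense op expects.
one_hot_labels = tf.one_hot(sparse_labels, depth=3)

dense_loss = tf.nn.softmax_cross_entropy_with_logits(
    labels=one_hot_labels, logits=logits)
sparse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=sparse_labels, logits=logits)

# The two per-example losses agree up to floating-point noise.
tf.debugging.assert_near(dense_loss, sparse_loss)
```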
Another tiny difference is that with sparse_softmax_cross_entropy_with_logits, you can give -1 as a label to have loss 0 on this label.