Why "softmax_cross_entropy_with_logits_v2" backprops into labels

Problem description

I am wondering why in Tensorflow version 1.5.0 and later, softmax_cross_entropy_with_logits_v2 defaults to backpropagating into both labels and logits. What are some applications/scenarios where you would want to backprop into labels?
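The difference can be seen in the gradient math itself. Below is a hypothetical pure-Python sketch (not TensorFlow code; `softmax` and `cross_entropy_grads` are illustrative names) of the two gradients the v2 op can propagate: the familiar `p - y` into the logits, and `-log(p)` into the labels. In TensorFlow, wrapping the labels in `tf.stop_gradient` recovers the old v1 behavior of treating them as constants.

```python
import math

def softmax(logits):
    # Numerically stable softmax.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy_grads(logits, labels):
    """Return (dL/dlogits, dL/dlabels) for L = -sum_i labels[i] * log(p[i]),
    where p = softmax(logits)."""
    p = softmax(logits)
    grad_logits = [pi - yi for pi, yi in zip(p, labels)]  # the classic p - y
    grad_labels = [-math.log(pi) for pi in p]             # what v2 also propagates
    return grad_logits, grad_labels

logits = [2.0, 1.0, 0.1]
labels = [1.0, 0.0, 0.0]   # a constant one-hot label
gl, gy = cross_entropy_grads(logits, labels)
```

When the labels are constants (as in ordinary supervised training), the `grad_labels` term simply has nowhere to flow, which is why the v2 default is harmless in most applications.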

Answer

I saw the github issue below asking the same question; you might want to follow it for future updates.

https://github.com/tensorflow/minigo/issues/37

I don't speak for the developers who made this decision, but I would surmise that it is the default because it is indeed used often, and for most applications where you aren't backpropagating into the labels, the labels are constants anyway and are not adversely affected.

Two common use cases for backpropagating into labels are:

  • Creating adversarial examples

There is a whole field of study around building adversarial examples that fool a neural network. Many of the approaches used to do so involve training a network, then holding the network fixed and backpropagating into the labels (the original image) to tweak it (usually under some constraints) to produce a result that fools the network into misclassifying the image.
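As a concrete illustration of "hold the network fixed, backprop into the image": below is a hypothetical pure-Python sketch of the fast gradient sign method on a toy linear softmax classifier (`W`, `loss_and_input_grad`, and `fgsm` are illustrative names, not library APIs). The gradient is taken with respect to the input rather than the trained weights.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [x / s for x in e]

def loss_and_input_grad(W, x, y):
    """Cross-entropy loss and its gradient w.r.t. the input x for a
    linear classifier with logits z_k = sum_j W[k][j] * x[j]."""
    z = [sum(wk[j] * x[j] for j in range(len(x))) for wk in W]
    p = softmax(z)
    loss = -sum(yi * math.log(pi) for yi, pi in zip(y, p) if yi > 0)
    # Chain rule: dL/dx_j = sum_k (p_k - y_k) * W[k][j]
    grad = [sum((p[k] - y[k]) * W[k][j] for k in range(len(W)))
            for j in range(len(x))]
    return loss, grad

def fgsm(W, x, y, eps=0.5):
    # Fast Gradient Sign Method: step the input in the direction that
    # increases the loss, with the model weights held fixed.
    _, g = loss_and_input_grad(W, x, y)
    sign = lambda v: (v > 0) - (v < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, g)]

W = [[1.0, 0.0], [0.0, 1.0]]   # toy fixed 2-class linear model
x = [2.0, 0.0]                 # input confidently classified as class 0
y = [1.0, 0.0]
x_adv = fgsm(W, x, y)          # perturbed input with higher loss
```

The same perturbation loop, run with a real trained network and a pixel-range constraint on `x_adv`, is the basic recipe behind many adversarial-example attacks.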

  • Visualizing the internal representations of a neural network.

I also recommend watching the deepviz toolkit video on YouTube; you'll learn a ton about the internal representations learned by a neural network.

https://www.youtube.com/watch?v=AgkfIQ4IGaM

If you continue digging into that and find the original paper, you'll find that they also backpropagate into the labels to generate images that highly activate certain filters in the network, in order to understand them.
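That visualization technique is essentially gradient *ascent* on the input while the network stays frozen. A hypothetical pure-Python sketch, reduced to a single linear "filter" whose activation is a dot product (`activation_maximization` is an illustrative name, not a library function):

```python
def activation_maximization(w, x, lr=0.1, steps=50):
    """Gradient ascent on the input to maximize one linear 'filter'
    response a(x) = sum_j w[j] * x[j]; here da/dx_j = w[j].
    The network (just the fixed weights w) never changes; only x does."""
    x = list(x)
    for _ in range(steps):
        x = [xi + lr * wi for xi, wi in zip(x, w)]
    return x

w = [1.0, -2.0, 0.5]    # toy fixed filter weights
x0 = [0.0, 0.0, 0.0]    # start from a blank input
x1 = activation_maximization(w, x0)

def act(x):
    return sum(wi * xi for wi, xi in zip(w, x))
```

With a real convolutional network, the per-step gradient comes from backprop through the frozen layers (plus regularizers to keep the image natural-looking), but the loop is the same: repeatedly nudge the input toward higher activation of the filter you want to understand.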
