Why "softmax_cross_entropy_with_logits_v2" backprops into labels


Problem description


I am wondering why, in TensorFlow 1.5.0 and later, softmax_cross_entropy_with_logits_v2 defaults to backpropagating into both labels and logits. What are some applications/scenarios where you would want to backprop into labels?

Solution

I found the GitHub issue below asking the same question; you may want to follow it for future updates.

https://github.com/tensorflow/minigo/issues/37

I don't speak for the developers who made this decision, but I would surmise that they made it the default because backpropagating into labels is indeed used often, and for most applications where you aren't backpropagating into the labels, the labels are constants anyway and won't be adversely affected.
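To make the default concrete, here is a minimal TF 1.x sketch (my own illustration, assuming the 1.5+ API discussed above) showing that the v2 op produces a gradient with respect to the labels, and that tf.stop_gradient restores the old behavior where the labels are treated as constants:

```python
import tensorflow as tf  # TF 1.x style, matching the question (1.5.0+)

labels = tf.Variable([[0.0, 1.0]])  # soft labels, deliberately a variable
logits = tf.Variable([[2.0, 0.5]])

# v2 differentiates with respect to BOTH arguments by default.
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits)
g_labels, g_logits = tf.gradients(loss, [labels, logits])
# g_labels is not None: d(loss)/d(labels) = -log_softmax(logits)

# To recover the old softmax_cross_entropy_with_logits behavior,
# explicitly block the gradient flowing into the labels:
loss_const_labels = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(labels), logits=logits)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(g_labels))  # non-zero: gradients flow into the labels
```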

Two common use cases for backpropagating into labels are:

  • Creating adversarial examples

There is a whole field of study around building adversarial examples that fool a neural network. Many of the approaches involve training a network, then holding its weights fixed and backpropagating into the input (the original image) to tweak it, usually under some norm constraint, until the network misclassifies it. A minimal sketch of this idea appears after this list.

  • Visualizing the internals of a neural network.

I also recommend watching the deepviz toolkit video on YouTube; you'll learn a ton about the internal representations learned by a neural network.

https://www.youtube.com/watch?v=AgkfIQ4IGaM

If you continue digging into that and find the original paper, you'll find that they likewise hold the network fixed and backpropagate into the input image to generate images that highly activate certain filters, in order to understand what those filters have learned; a rough sketch of this also follows below.
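For the first use case, here is a rough FGSM-style sketch (fast gradient sign method, Goodfellow et al.; my own example, not from the answer, and model_fn is a hypothetical stand-in for a trained classifier). The key point is that the weights stay fixed and the loss is differentiated with respect to the image instead:

```python
import tensorflow as tf

def model_fn(x):
    # Hypothetical stand-in: in practice this would be a trained,
    # frozen classifier restored from a checkpoint.
    return tf.layers.dense(tf.layers.flatten(x), 10)

x = tf.placeholder(tf.float32, [None, 28, 28, 1])
y = tf.placeholder(tf.float32, [None, 10])

logits = model_fn(x)
loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)

# Differentiate w.r.t. the image, not the weights: the network stays
# fixed and only the input is nudged to increase the loss.
grad_x, = tf.gradients(loss, [x])
epsilon = 0.1  # perturbation budget
x_adv = tf.clip_by_value(x + epsilon * tf.sign(grad_x), 0.0, 1.0)
```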
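And for the second use case, a minimal activation-maximization sketch in the same spirit (again my own illustration; the single conv layer and the filter index are arbitrary placeholders for a layer of a real trained network). The image itself is the only trainable variable, and gradient ascent pushes it toward whatever most excites one filter:

```python
import tensorflow as tf

# The image is the trainable variable; the network's weights would be
# frozen/restored in a real setting.
img = tf.Variable(tf.random_normal([1, 64, 64, 3], stddev=0.1) + 0.5)

# Hypothetical conv layer standing in for a layer of a trained net.
conv = tf.layers.conv2d(img, filters=16, kernel_size=3,
                        activation=tf.nn.relu)

filter_idx = 5  # arbitrary filter to visualize
objective = tf.reduce_mean(conv[:, :, :, filter_idx])

# Gradient ascent on the image only (var_list pins the update to `img`).
train_op = tf.train.GradientDescentOptimizer(1.0).minimize(
    -objective, var_list=[img])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op)
    visualization = sess.run(img)  # image that strongly excites the filter
```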
