How to visualize DNNs dependent on the output class in TensorFlow?


Problem description

In TensorFlow it is pretty straightforward to visualize filters and activation layers given a single input.

But I'm more interested in the opposite direction: feeding a class (as a one-hot vector) to the output layer and seeing something like the optimal input image for that specific class.

Is there a way to do this, or to run the graph in reverse?

Background: I'm using Google's Inception V3 with 15 classes, and I've already trained the network on a large amount of data to good accuracy. Now I'm interested in understanding why and how the model distinguishes the different classes.

Answer

The "basic" version of this is straightforward. You use the same graph as for training the network, but instead of optimizing with respect to the network's parameters, you optimize with respect to the input (which has to be a variable with the shape of your input image). Your optimization target is the negative logit of your target class (negative because you want to maximize it, but TF optimizers minimize). You will want to run this with a couple of different initial values for the image.

There are also a few related techniques; if you search for DeepDream and adversarial examples, you should find a lot of literature.

