How to get data from within Keras model for visualisation?


Question

I am using Tensorflow 1.12, which has Keras integrated, with Python 3.6.x.

I wish to use Keras for its simplicity of model building, but I would also like to use the data from intermediate layers to visualize feature maps and kernels, to better understand how machine learning works (even though this is admittedly not so evident).

I am using the MNIST database and a very basic Keras model to try to do what I want to do.

Here is the code:

import numpy as np   # needed for np.expand_dims below
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras

print(tf.VERSION)
print(tf.keras.__version__)

tf.keras.backend.clear_session()

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train_shaped = np.expand_dims(x_train, axis=3) / 255.0
x_test_shaped = np.expand_dims(x_test, axis=3) / 255.0

def create_model():

  model = tf.keras.models.Sequential([
    keras.layers.Conv2D(32, kernel_size=(4, 4), strides=(1, 1), activation='relu', input_shape=(28, 28, 1)),
    keras.layers.Dropout(0.5),
    keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
    keras.layers.Conv2D(24, kernel_size=(8, 8), strides=(1, 1)),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
  ])

  model.compile(optimizer=tf.keras.optimizers.Adam(), 
            loss=tf.keras.losses.sparse_categorical_crossentropy,
            metrics=['accuracy'])

  return model

The above sets up the dataset and the model. Next I define my session for Tensorflow and do the training.

This all works fine, but now I want to get the data for, say, the first layer out, ideally as a numpy array on which I can do the visualization.

My model.layers[0].output gives me a Tensor of shape (?, 25, 25, 32) as expected, and I then try to call eval() followed by the .numpy() method to get my result.

The error message is:

You must feed a value for placeholder tensor 'conv2d_6_input' with dtype float and shape [?,28,28,1]

I am looking for help on how to get my data (32 feature maps of 25x25 pixels) out as a numpy array for visualization.

sess = tf.Session(graph=tf.get_default_graph())
tf.keras.backend.set_session(sess)

with sess.as_default():
   model = create_model()
   model.summary()

   model.fit(x_train_shaped[:10000], y_train[:10000], epochs=2,
             batch_size=64, validation_split=.2)

   model.layers[0].output
   print(model.layers[0].output.shape)   # (?, 25, 25, 32) -- still a symbolic tensor
   my_array = model.layers[0].output     # not a numpy array
   my_array.eval()                       # raises the placeholder error quoted above

tf.keras.backend.clear_session()
sess.close()

Answer

First of all, you must note that getting the output of a model or a layer only makes sense when you feed the input layers with some data. You give the model something (i.e. input data), and you get something in return (i.e. the output, feature map or activation map). That's why it produces the following error:

You must feed a value for placeholder tensor 'conv2d_6_input'

You haven't fed the baby, so it would cry :)
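
As a minimal sketch of that point (assuming the session and the trained model from the code above are still open), in TF 1.x graph mode the same tensor evaluates fine once the input placeholder is fed real data:

# Sketch only: feed the model input placeholder, then the symbolic
# layer output can be evaluated to a numpy array.
feature_maps = sess.run(model.layers[0].output,
                        feed_dict={model.input: x_test_shaped[:1]})
print(feature_maps.shape)   # (1, 25, 25, 32)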

Now, the idea of building a new Keras model may seem counterproductive: when you already have a large model, you would like to plug in some ready-made code that can get the output of the feature maps and visualize them, so this route may not look very interesting.

I think you are mistakenly assuming that when you construct a new model out of the layers of another model, a whole new model is cloned. That's not the case, since the parameters of the layers are shared.

Concretely, what you are looking for can be achieved like this:

from tensorflow.keras.models import Model

viz_conv = Model(model.input, model.layers[0].output)
conv_active = viz_conv.predict(my_input_data)  # my_input_data is a numpy array of shape (num_samples, 28, 28, 1)

All the parameters of viz_conv are shared with model; they have not been copied. Under the hood, they use the same weight tensors.
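
As a minimal sketch (assuming matplotlib is available and taking x_test_shaped[:1] as my_input_data), the returned activations can then be plotted as a 4x8 grid of the 32 feature maps of 25x25 pixels:

# Sketch only: visualize the activations of the first Conv2D layer.
import matplotlib.pyplot as plt

my_input_data = x_test_shaped[:1]              # shape (1, 28, 28, 1)
conv_active = viz_conv.predict(my_input_data)  # shape (1, 25, 25, 32)

fig, axes = plt.subplots(4, 8, figsize=(12, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(conv_active[0, :, :, i], cmap='gray')  # i-th feature map
    ax.axis('off')
plt.show()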

Alternatively, you could define a backend function to do this:

from tensorflow.keras import backend as K

# request any layer output(s) you would like from the model,
# e.g. the output of the first Conv2D layer:
viz_func = K.function([model.input], [model.layers[0].output])
output = viz_func([my_input_data])
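
K.function returns a list of numpy arrays, one per requested output. Assuming the output of the first Conv2D layer was requested as above, a quick sanity check might look like this:

print(len(output))       # 1 -- one array per requested layer output
print(output[0].shape)   # (num_samples, 25, 25, 32) for the first Conv2D layer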

This has been covered in the Keras documentation, and I highly recommend reading that as well.
