Keras getting output of intermediate layers


Problem description

## What my model looks like

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# defining the model architecture
model = Sequential()
# 1st conv layer
model.add(Conv2D(32, (5, 5), activation='relu', input_shape=x_ip_shape))
# 1st max pool
model.add(MaxPooling2D(pool_size=(2, 2)))
# 2nd conv layer
model.add(Conv2D(64, (7, 7), activation='relu'))
# 2nd max pool
model.add(MaxPooling2D(pool_size=(2, 2)))
# Flattening the input
model.add(Flatten())
# 1st fully connected layer
model.add(Dense(10, activation='relu'))
# Adding dropout
model.add(Dropout(0.25))
# softmax layer
model.add(Dense(classes_out, activation='softmax'))

# defining loss, optimizer, learning rate and metric
model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(1e-4),
              metrics=['accuracy'])

## evaluation
scores = model.evaluate(test_x, test_labels, verbose=0)

QUESTION: Instead, can I get the output of a forward pass through the 1st fully connected layer, that is, model.add(Dense(10, activation='relu'))?

I went through the example in the Keras FAQ, but it confused me. In this:

get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()], [model.layers[3].output])

Where do I pass the input data? What does model.layers[0].input mean? Does the trained model already store the input?

Answer

get_3rd_layer_output is a Theano function. You do not need to make many modifications to it.

model.layers[0].input stays as it is if you want the output (of any layer) given the input to the first layer of the network. In other words, if you wanted the output of some layer given the 4th layer's input, you would change this to model.layers[4].input.

K.learning_phase() indicates whether you want the output for the training phase or the testing phase. There will be some differences between the two, because layers such as Dropout behave differently at train and test time. You would pass zero if you want output similar to predict().
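The train/test difference can be sketched with a plain NumPy inverted-dropout toy (the array values and the 0.25 rate here are illustrative only, not tied to the model above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(8)
rate = 0.25  # illustrative dropout rate

# learning phase 1 (training): zero out units at random, rescale survivors
mask = rng.random(x.shape) >= rate
train_out = x * mask / (1.0 - rate)

# learning phase 0 (testing): dropout is a no-op
test_out = x
```

Because the training-phase output is stochastic and rescaled while the test-phase output is the input unchanged, the flag you pass to the backend function changes what you get back.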

model.layers[3].output: This is where you need to make a modification. Find the index of the layer you want the output from. If you have an IDE (e.g. PyCharm), click on the model variable and look at the index of your layer (remember it starts from zero). If not, assign names to the layers; then you can list them all by inspecting model.layers, and from that easily get the index. For example, if you want the output of the 10th layer, you would change this to model.layers[10].output.
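As a minimal sketch, the index lookup amounts to a list search. The layer names below are hypothetical stand-ins for what model.layers would report for the model above; in real code you would iterate over model.layers itself:

```python
# Hypothetical layer names, in the order the model above adds them
layer_names = ['conv2d_1', 'max_pooling2d_1', 'conv2d_2',
               'max_pooling2d_2', 'flatten_1', 'dense_1',
               'dropout_1', 'dense_2']

# Index of the 1st fully connected layer (the Dense(10) layer)
idx = layer_names.index('dense_1')

# With a real model you would instead print indices and names:
# for i, layer in enumerate(model.layers):
#     print(i, layer.name)
```

Here idx is 5, so the corresponding symbolic output would be model.layers[5].output.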

How to call it?

Again, this is a Theano function, and therefore a symbolic one. You have to pass values in and evaluate it, as follows:

out = get_3rd_layer_output([X, 0])[0]  # test mode

Remember, even if X is a single data point, its shape should be (1,) + x_ip_shape.
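Adding the leading batch dimension is a one-line NumPy operation; the input shape used here is an assumed example, not one taken from the question:

```python
import numpy as np

x_ip_shape = (28, 28, 1)   # assumed example input shape
X = np.zeros(x_ip_shape)   # a single data point

# Add a leading batch dimension so the shape becomes (1,) + x_ip_shape
X_batch = np.expand_dims(X, axis=0)
```

X_batch (shape (1, 28, 28, 1)) is what you would then pass, e.g. get_3rd_layer_output([X_batch, 0]).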
