Keras/Tensorflow: Get predictions or output of all layers efficiently

Question

I am able to get the outputs of all layers by following Keras Docs: How can I obtain the output of an intermediate layer?

import numpy as np
from keras.models import Model

def get_output_of_all_layers(model, test_input):
    output_of_all_layers = []

    for count, layer in enumerate(model.layers):
        # skip the input layer
        if count == 0:
            continue

        # build a new single-output model for every layer and run inference on it
        intermediate_layer_model = Model(inputs=model.input,
                                         outputs=model.get_layer(layer.name).output)
        intermediate_output = intermediate_layer_model.predict(test_input)[0]

        output_of_all_layers.append(intermediate_output)

    return np.array(output_of_all_layers)

But this is incredibly slow and takes more than a minute (clocked at ~65 s on a 6700HQ with a GTX 1070, which is ridiculously high given that inference happens in less than a second...!) for a model with about 50 layers. I guess this is because it builds a new model every single time, loads it into memory, passes the inputs and fetches the outputs. Clearly, the last layer's output can't be computed without the results of the other layers anyway, so how do I save them all as above without having to create redundant models (or in a faster, more efficient way)?

Update: I also noticed that this does NOT utilize my GPU. Does that mean all the conv layers are being executed on the CPU? Why wouldn't it use my GPU for this? I reckon it would take far less time if it did.

How do I do this more efficiently?

Answer

As suggested by Ben Usman, you can first wrap the model in a basic end-to-end Model, and provide its layers as outputs to a second Model:

import keras.backend as K
from keras.models import Model
from keras.layers import Input, Dense

input_layer = Input((10,))

layer_1 = Dense(10)(input_layer)
layer_2 = Dense(20)(layer_1)
layer_3 = Dense(5)(layer_2)

output_layer = Dense(1)(layer_3)

basic_model = Model(inputs=input_layer, outputs=output_layer)

# some random input
import numpy as np
features = np.random.rand(100,10)

# With a second Model
intermediate_model = Model(inputs=basic_model.layers[0].input, 
                              outputs=[l.output for l in basic_model.layers[1:]])
intermediate_model.predict(features) # outputs a list of 4 arrays
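
Applied to the ~50-layer model from the question, the same idea collapses the per-layer loop into one model construction and a single predict call. A minimal sketch, assuming the same `model` and `test_input` names as in the question (the question's version also kept only the first sample via `[0]`):

import numpy as np
from keras.models import Model

def get_output_of_all_layers(model, test_input):
    # one Model whose outputs are every layer's output (skipping the input layer),
    # so the graph is built once and evaluated in a single forward pass
    intermediate_model = Model(inputs=model.input,
                               outputs=[layer.output for layer in model.layers[1:]])
    # returns a list with one array per layer
    return intermediate_model.predict(test_input)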

Or, you could use a Keras function in a similar fashion:

# With a Keras function
get_all_layer_outputs = K.function([basic_model.layers[0].input],
                                  [l.output for l in basic_model.layers[1:]])

layer_output = get_all_layer_outputs([features]) # returns the same list of outputs
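
With the older TF1-backed Keras this answer targets, a model containing layers that behave differently at training and test time (e.g. Dropout or BatchNormalization) needs the learning-phase flag passed explicitly, as shown in the Keras Docs entry linked above. A sketch under that assumption:

# pass K.learning_phase() and call with 0 to force test-mode behaviour
get_all_layer_outputs = K.function([basic_model.layers[0].input, K.learning_phase()],
                                   [l.output for l in basic_model.layers[1:]])

layer_output = get_all_layer_outputs([features, 0]) # 0 = test mode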
