Accessing layer's input/output using Tensorflow 2.0 Model Sub-classing

Problem Description

Working on a university exercise, I used the model sub-classing API of TF2.0. Here's my code (it's the Alexnet architecture, if you wonder...):

import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Activation, Conv2D, Dense, Flatten, MaxPooling2D


class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        # OPS
        self.relu = Activation('relu', name='ReLU')
        self.maxpool = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid', name='MaxPool')
        self.softmax = Activation('softmax', name='Softmax')

        # Conv layers
        self.conv1 = Conv2D(filters=96, input_shape=(224, 224, 3), kernel_size=(11, 11), strides=(4, 4), padding='same',
                            name='conv1')
        self.conv2a = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 1), padding='same', name='conv2a')
        self.conv2b = Conv2D(filters=128, kernel_size=(5, 5), strides=(1, 1), padding='same', name='conv2b')
        self.conv3 = Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv3')
        self.conv4a = Conv2D(filters=192, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv4a')
        self.conv4b = Conv2D(filters=192, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv4b')
        self.conv5a = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv5a')
        self.conv5b = Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same', name='conv5b')

        # Fully-connected layers

        self.flatten = Flatten()

        self.dense1 = Dense(4096, input_shape=(100,), name='FC_4096_1')
        self.dense2 = Dense(4096, name='FC_4096_2')
        self.dense3 = Dense(1000, name='FC_1000')

        # Network definition

    def call(self, x, **kwargs):
        x = self.conv1(x)
        x = self.relu(x)
        x = tf.nn.local_response_normalization(x, depth_radius=2, alpha=2e-05, beta=0.75, bias=1.0)
        x = self.maxpool(x)

        x = tf.concat((self.conv2a(x[:, :, :, :48]), self.conv2b(x[:, :, :, 48:])), 3)
        x = self.relu(x)
        x = tf.nn.local_response_normalization(x, depth_radius=2, alpha=2e-05, beta=0.75, bias=1.0)
        x = self.maxpool(x)

        x = self.conv3(x)
        x = self.relu(x)
        x = tf.concat((self.conv4a(x[:, :, :, :192]), self.conv4b(x[:, :, :, 192:])), 3)
        x = self.relu(x)
        x = tf.concat((self.conv5a(x[:, :, :, :192]), self.conv5b(x[:, :, :, 192:])), 3)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.flatten(x)

        x = self.dense1(x)
        x = self.relu(x)
        x = self.dense2(x)
        x = self.relu(x)
        x = self.dense3(x)
        return self.softmax(x)

My goal is to access an arbitrary layer's output (in order to maximize a specific neuron's activation, if you have to know exactly :)). The problem is that whenever I try to access a layer's output, I get an attribute error. For example:

model = MyModel()
print(model.get_layer('conv1').output)
# => AttributeError: Layer conv1 has no inbound nodes.

I found some questions about this error here on SO, and all of them claim that I have to define the input shape in the first layer, but as you can see, that's already done (see the definition of self.conv1 in the __init__ function)!

I did find that if I define a keras.layers.Input object, I do manage to get the output of conv1, but trying to access deeper layers fails, for example:

model = MyModel()
I = tf.keras.Input(shape=(224, 224, 3))
model(I)
print(model.get_layer('conv1').output)
# prints Tensor("my_model/conv1/Identity:0", shape=(None, 56, 56, 96), dtype=float32)
print(model.get_layer('FC_1000').output)
# => AttributeError: Layer FC_1000 has no inbound nodes.

I googled every exception that I got on the way, but found no answer. How can I access any layer's input/output (or its input_shape/output_shape attributes, for that matter) in this case?

Recommended Answer

In a sub-classed model there is no graph of layers; it's just a piece of code (the model's call function). The layer connections are not defined when an instance of the Model class is created, so we first need to build the model by calling its call method.

Try this:

model = MyModel()
inputs = tf.keras.Input(shape=(224, 224, 3))
model.call(inputs)
# instead of model(I) in your code.

After doing this, the model graph is created and each layer's output can be inspected:

for i in model.layers:
  print(i.output)
# output
# Tensor("ReLU_7/Relu:0", shape=(?, 56, 56, 96), dtype=float32)
# Tensor("MaxPool_3/MaxPool:0", shape=(?, 27, 27, 96), dtype=float32)
# Tensor("Softmax_1/Softmax:0", shape=(?, 1000), dtype=float32)
# ...
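
With the graph built, you can also wire a functional sub-model from the same Input to any internal tensor, which gives direct access to an arbitrary layer's output. The following is a minimal sketch, not part of the original answer; the layer choice and the dummy input are illustrative:

# Hedged sketch: once model.call(inputs) has built the symbolic graph,
# tf.keras.Model can connect the original Input to any internal tensor.
feature_extractor = tf.keras.Model(
    inputs=inputs,
    outputs=model.get_layer('conv1').output)

features = feature_extractor(tf.random.normal((1, 224, 224, 3)))
print(features.shape)  # (1, 56, 56, 96), matching the shape seen above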

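For the question's stated end goal, maximizing a specific neuron's activation, gradient ascent on the input is one common approach. The sketch below is an assumption-laden illustration: the layer name, channel index, step size, and iteration count are arbitrary choices, not taken from the answer.

# Hypothetical activation maximization by gradient ascent on the input image;
# 'conv3', channel 0, the 0.1 step size and 100 iterations are illustrative.
activation_model = tf.keras.Model(
    inputs=inputs, outputs=model.get_layer('conv3').output)

image = tf.Variable(tf.random.uniform((1, 224, 224, 3)))
for _ in range(100):
    with tf.GradientTape() as tape:
        activations = activation_model(image)
        loss = tf.reduce_mean(activations[..., 0])  # mean activation of one channel
    grads = tape.gradient(loss, image)
    image.assign_add(0.1 * grads / (tf.norm(grads) + 1e-8))  # normalized step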