Access output of intermediate layers in TensorFlow 2.0 in eager mode

This post covers how to access the output of intermediate layers in TensorFlow 2.0 in eager mode.

Problem description


I have a CNN that I built on TensorFlow 2.0. I need to access the outputs of the intermediate layers. I was going over other similar Stack Overflow questions, but they all had solutions involving the Keras Sequential model.

I have tried using model.layers[index].output, but I get:

Layer conv2d has no inbound nodes.

I can post my code here (it is super long), but I am sure that even without it someone can point me to how this can be done using just TensorFlow 2.0 in eager mode.
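For context, the failing pattern can be reproduced with a minimal sketch (hypothetical, since the asker's code is not posted; layer sizes are illustrative). Calling a subclassed model on eager tensors never creates the symbolic Keras nodes that .output relies on, hence the error:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(32, 3, activation='relu')
        self.flatten = tf.keras.layers.Flatten()
        self.d1 = tf.keras.layers.Dense(128, activation='relu')
        self.d2 = tf.keras.layers.Dense(10)

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
model(tf.zeros([1, 28, 28, 1]))  # eager call: builds weights, but no symbolic graph nodes

try:
    _ = model.layers[0].output  # needs inbound graph nodes, which subclassed models lack
except AttributeError as e:
    print(e)  # e.g. "Layer conv2d has no inbound nodes."
```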

Solution

I stumbled onto this question while looking for an answer, and it took me some time to figure out, as I use the model subclassing API in TF 2.0 by default (as in https://www.tensorflow.org/tutorials/quickstart/advanced). If somebody is in a similar situation, all you need to do is assign the intermediate output you want as an attribute of the class. Then keep test_step without the @tf.function decorator and create a decorated copy of it, say val_step, for efficient internal computation of validation performance during training. As a short example, I have modified a few functions of the tutorial from the link accordingly. I'm assuming we need to access the output after flattening.

def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    self.intermediate = x  # assign it as an object attribute for accessing later
    x = self.d1(x)
    return self.d2(x)

# Remove the @tf.function decorator from test_step for prediction
def test_step(images, labels):
    predictions = model(images, training=False)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)
    return

# Create a decorated val_step for the object's internal use during training
@tf.function
def val_step(images, labels):
    return test_step(images, labels)

Now, when you run model.predict() after training, using the un-decorated test step, you can access the intermediate output via model.intermediate, which will be an EagerTensor whose value is obtained simply by model.intermediate.numpy(). However, if you don't remove the @tf.function decorator from test_step, it would return a Tensor whose value is not so straightforward to obtain.
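Putting the attribute trick together, here is a self-contained sketch (layer sizes follow the linked quickstart tutorial; the input is a random dummy batch rather than real data). After one un-decorated eager call, the stashed activations are a plain EagerTensor:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(32, 3, activation='relu')
        self.flatten = tf.keras.layers.Flatten()
        self.d1 = tf.keras.layers.Dense(128, activation='relu')
        self.d2 = tf.keras.layers.Dense(10)

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        self.intermediate = x  # stash the flattened activations for later access
        x = self.d1(x)
        return self.d2(x)

model = MyModel()
images = tf.random.normal([4, 28, 28, 1])  # dummy MNIST-shaped batch
_ = model(images, training=False)          # un-decorated eager call

# 28x28 input through a 3x3 'valid' conv gives 26x26x32, flattened to 21632
flat = model.intermediate.numpy()
print(flat.shape)  # (4, 21632)
```

Inside a @tf.function-decorated step, the same attribute would hold a symbolic Tensor instead, which is why the prediction path is kept un-decorated.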
