Visualize TFLite graph and get intermediate values of a particular node?


Question

I was wondering if there is a way to know the list of inputs and outputs for a particular node in tflite? I know that I can get input/output details, but this does not allow me to reconstruct the computation process that happens inside an Interpreter. So what I do is:

import tensorflow as tf

# Load the model and allocate its tensor buffers.
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# Inspect the model's inputs, outputs and tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
interpreter.get_tensor_details()

The last 3 commands basically give me dictionaries which don't seem to have the necessary information.

So I was wondering if there is a way to know where each node's outputs go? Surely the Interpreter knows this somehow. Can we? Thanks.

Answer

The mechanism of TF-Lite makes the whole process of inspecting the graph and getting the intermediate values of inner nodes a bit tricky. The get_tensor(...) method suggested by the other answer does not work.

TensorFlow Lite models can be visualized using the visualize.py script in the TensorFlow Lite repository. You just need to:

  • Clone the TensorFlow repository
  • Run the visualize.py script with bazel:

bazel run //tensorflow/lite/tools:visualize \
     model.tflite \
     visualized_model.html
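
If you prefer to stay in Python, newer TensorFlow releases also ship an experimental analyzer that prints a textual summary of the ops and tensors in a .tflite file. This is only a sketch, assuming your installed version exposes tf.lite.experimental.Analyzer; if it doesn't, fall back to visualize.py above:

import tensorflow as tf

# Prints the ops and tensors of the flatbuffer. Availability of this
# experimental API depends on your TensorFlow version (assumption).
tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")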

Does my converted TF-Lite model have a node-for-node correspondence with the original TensorFlow graph? No! In fact, TF-Lite can modify your graph so that it becomes more optimized. Here is what the TF-Lite documentation says about it:

A number of TensorFlow operations can be processed by TensorFlow Lite even though they have no direct equivalent. This is the case for operations that can be simply removed from the graph (tf.identity), replaced by tensors (tf.placeholder), or fused into more complex operations (tf.nn.bias_add). Even some supported operations may sometimes be removed through one of these processes.
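
To see this in practice, here is a minimal sketch that converts a toy graph containing tf.matmul, tf.nn.bias_add and tf.identity, then lists the tensors that survive conversion. The identity is expected to vanish and the bias add to be fused into a single FULLY_CONNECTED op, although the exact behaviour depends on the converter version (assumption):

import tensorflow as tf

# A toy graph with a matmul, a bias_add and an identity op.
w = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0],
                 [1.0, 1.0, 1.0]])
b = tf.constant([0.1, 0.2, 0.3])

@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def toy(x):
    y = tf.nn.bias_add(tf.matmul(x, w), b)
    return tf.identity(y)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy.get_concrete_function()])
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Only the tensors of the optimized graph show up here; there is no
# separate tensor left for the identity or the bias_add.
for t in interpreter.get_tensor_details():
    print(t['index'], t['name'], t['shape'])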

Moreover, the TF-Lite API currently doesn't let you recover this node correspondence, and the inner format of TF-Lite is hard to interpret. So you can't get the intermediate outputs for arbitrary nodes, and that's even before we get to one more issue, described below.

Here is that issue: why get_tensor(...) wouldn't work in TF-Lite anyway. Suppose that in the inner representation the graph contains 3 tensors, together with some dense operations (nodes) in between (you can think of tensor1 as the input and tensor3 as the output of your model). During inference of this particular graph, TF-Lite only needs 2 buffers; let's see how.

First, use tensor1 to compute tensor2 by applying the dense operation. This only requires 2 buffers to store the values:

           dense              dense
[tensor1] -------> [tensor2] -------> [tensor3]
 ^^^^^^^            ^^^^^^^
 bufferA            bufferB

Second, use the value of tensor2 stored in bufferB to compute tensor3... but wait! We don't need bufferA anymore, so let's use it to store the value of tensor3:

           dense              dense
[tensor1] -------> [tensor2] -------> [tensor3]
                    ^^^^^^^            ^^^^^^^
                    bufferB            bufferA

Now comes the tricky part. The "output value" of tensor1 will still point to bufferA, which now holds the values of tensor3. So if you call get_tensor(...) for the 1st tensor, you'll get incorrect values. The documentation of this method even states:

This function cannot be used to read intermediate results.

How can you get around this?
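
One possible workaround, sketched below under the assumption that your TensorFlow version supports the experimental_preserve_all_tensors flag of tf.lite.Interpreter, is to disable buffer reuse, so that every tensor keeps its own value and get_tensor(...) returns valid intermediate results:

import numpy as np
import tensorflow as tf

# Assumption: recent TF versions accept this experimental flag; it disables
# buffer reuse, so intermediate tensors are preserved after invoke().
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_preserve_all_tensors=True)
interpreter.allocate_tensors()

# Feed a dummy input (float32 assumed here) and run inference.
input_details = interpreter.get_input_details()
interpreter.set_tensor(
    input_details[0]['index'],
    np.zeros(input_details[0]['shape'], dtype=np.float32))
interpreter.invoke()

# With buffer reuse disabled, the value of every tensor can be read back.
for t in interpreter.get_tensor_details():
    print(t['index'], t['name'], interpreter.get_tensor(t['index']))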
