How to get all the tensors in a graph?


Problem Description


I would like to access all the tensors instances of a graph. For example, I can check if a tensor is detached or I can check the size. It can be done in tensorflow.

I don't want visualization of the graph.

Solution

You can get access to the entire computation graph at runtime. To do so, you can use hooks: functions plugged onto nn.Module instances that run both at inference and when backpropagating.

At inference time, you can attach a callback with register_forward_hook. Similarly, for backpropagation, you can use register_full_backward_hook.
Note: as of PyTorch 1.8.0, register_backward_hook is deprecated.

With these two functions, you will basically have access to any tensor on the computation graph. It's entirely up to you whether you want to print all tensors, print the shapes, or even insert breakpoints to investigate.

Here is a possible implementation:

def forward_hook(module, input, output):
    # ...

Argument input is passed by PyTorch as a tuple and will contain all arguments passed to the forward function of the hooked module.
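
For instance, a fleshed-out forward hook can answer the original question directly. The following is a minimal sketch (the name inspecting_forward_hook and its print format are illustrative, not part of PyTorch): it reports each tensor's size and whether it is detached from the graph, i.e. has no grad_fn and does not require gradients:

import torch

def inspecting_forward_hook(module, input, output):
    # `input` is the tuple of forward() arguments; a tensor counts as
    # detached when it has no grad_fn and does not require gradients.
    for i, t in enumerate(input):
        if isinstance(t, torch.Tensor):
            detached = t.grad_fn is None and not t.requires_grad
            print(f'{type(module).__name__} input[{i}]: '
                  f'size={tuple(t.shape)}, detached={detached}')
    if isinstance(output, torch.Tensor):
        print(f'{type(module).__name__} output: size={tuple(output.shape)}')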

def backward_hook(module, grad_input, grad_output):
    # ...

For the backward hook, both grad_input and grad_output will be tuples and will have varying shapes depending on your model's layers.
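
A corresponding sketch for the backward side, again with an illustrative name, could log the gradient shapes while guarding against None entries:

def inspecting_backward_hook(module, grad_input, grad_output):
    # Both arguments are tuples; an entry is None when the corresponding
    # tensor does not require gradients.
    in_shapes = [tuple(g.shape) if g is not None else None for g in grad_input]
    out_shapes = [tuple(g.shape) if g is not None else None for g in grad_output]
    print(f'{type(module).__name__}: grad_input={in_shapes}, grad_output={out_shapes}')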

Then you can hook these callbacks on any existing nn.Module. For example, you could loop over all child modules from your model:

for module in model.children():
    module.register_forward_hook(forward_hook)
    module.register_full_backward_hook(backward_hook)
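
Each register_* call returns a handle object, so if you only want to instrument a few passes you can keep the handles and detach the hooks afterwards. A minimal sketch, reusing the hooks defined above:

handles = []
for module in model.children():
    handles.append(module.register_forward_hook(forward_hook))
    handles.append(module.register_full_backward_hook(backward_hook))

# ... run forward and backward passes, inspect the prints ...

for handle in handles:
    handle.remove()  # unplug the hook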


To get the names of the modules, you can wrap the hook in a closure that captures the name and loop over your model's named_children:

def forward_hook(name):
    def hook(module, x, y):
        print(f'{name}: {[tuple(i.shape) for i in x]} -> {tuple(y.shape)}')
    return hook

for name, module in model.named_children():
    module.register_forward_hook(forward_hook(name))

During inference, this could print the following:

fc1: [(1, 100)] -> (1, 10)
fc2: [(1, 10)] -> (1, 5)
fc3: [(1, 5)] -> (1, 1)
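
To make the example reproducible, here is a self-contained sketch; the layer sizes are inferred from the printed shapes above, and the Model class is illustrative:

import torch
import torch.nn as nn

class Model(nn.Module):
    # Layer sizes chosen to match the printed shapes above.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(100, 10)
        self.fc2 = nn.Linear(10, 5)
        self.fc3 = nn.Linear(5, 1)

    def forward(self, x):
        return self.fc3(self.fc2(self.fc1(x)))

model = Model()
for name, module in model.named_children():
    module.register_forward_hook(forward_hook(name))  # hook factory from above

model(torch.rand(1, 100))  # emits the three lines shown above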


As for the model's parameters, you can easily access the parameters of a given module in both hooks by calling module.parameters(). This will return a generator.
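
For instance, a hedged sketch of a hook that logs a module's parameters, using named_parameters() for readable output:

def parameter_hook(module, input, output):
    # named_parameters() yields (name, tensor) pairs; parameters() yields
    # just the tensors, lazily, as a generator.
    for name, p in module.named_parameters():
        print(f'{name}: size={tuple(p.shape)}, requires_grad={p.requires_grad}')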
