Can I access the inner layer outputs of DeepLab in pytorch?


Problem Description


Using PyTorch, I am trying to implement a network that uses the pre-trained DeepLab ResNet-101. I found two possible methods for using this network:

this one

or

 torchvision.models.segmentation.deeplabv3_resnet101(
     pretrained=False, progress=True, num_classes=21, aux_loss=None, **kwargs)

However, I might need not only this network's final output, but also the outputs of several inner layers. Is there a way to access the inner layer outputs using one of these methods?

If not, is it possible to manually copy the trained ResNet's parameters so I can recreate the network myself and add those outputs? (Hopefully the first option is possible, so I won't need to do this.)
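(For reference, that fallback would essentially come down to transferring the weights via the state dict. A minimal sketch, assuming a hypothetical re-implementation MyDeepLab whose module names and parameter shapes match the torchvision model:)

import torchvision

# Load the trained torchvision model to use as the weight source
pretrained = torchvision.models.segmentation.deeplabv3_resnet101(
    pretrained=True, progress=True, num_classes=21, aux_loss=None)

my_model = MyDeepLab()  # hypothetical re-implementation with matching module names
my_model.load_state_dict(pretrained.state_dict())  # copy the trained parameters over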

Thanks!

Solution

You can achieve this without too much trouble using forward hooks.

The idea is to loop over the modules of your model, find the layers you're interested in, and hook a callback function onto them. When called, those layers will trigger the hook; we will take advantage of this to save the intermediate outputs.

For example, let's say you want to get the outputs of layer classifier.0.convs.3.1:

layers = ['classifier.0.convs.3.1']
activations = {}

def forward_hook(name):
    def hook(module, x, y):
        # y is the module's output; save it under the module's name
        activations[name] = y
    return hook

# Walk the model and attach the hook to every layer we care about
for name, module in model.named_modules():
    if name in layers:
        module.register_forward_hook(forward_hook(name))

*The closure around hook(), created by forward_hook's scope, is used to capture the module's name, which you wouldn't otherwise have access to at this point.
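As a side note, register_forward_hook returns a removable handle, so if you only need the activations temporarily you can keep the handles around and detach the hooks afterwards. A minimal sketch:

handles = []
for name, module in model.named_modules():
    if name in layers:
        # the returned handle can undo the registration later
        handles.append(module.register_forward_hook(forward_hook(name)))

# ... run inference and read the activations ...

for handle in handles:
    handle.remove()  # detach the hook from its module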

Everything is ready; we can now call the model:

>>> model = torchvision.models.segmentation.deeplabv3_resnet101(
        pretrained=True, progress=True, num_classes=21, aux_loss=None)

>>> model(torch.rand(16, 3, 100, 100))

And as expected, after inference, activations will have a new entry 'classifier.0.convs.3.1' which, in this case, will contain a tensor of shape (16, 256, 13, 13).
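You can then read the saved tensor straight out of the dictionary:

>>> activations['classifier.0.convs.3.1'].shape
torch.Size([16, 256, 13, 13])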


Not long ago, I wrote an answer to a similar question that goes into a bit more detail on how hooks can be used to inspect intermediate output shapes.
