pytorch - Where is "conv1d" implemented?


Problem Description


I wanted to see how the conv1d module is implemented https://pytorch.org/docs/stable/_modules/torch/nn/modules/conv.html#Conv1d. So I looked at functional.py but still couldn’t find the looping and cross-correlation computation.


Then I searched GitHub for the keyword 'conv1d' and checked conv.cpp (https://github.com/pytorch/pytorch/blob/eb5d28ecefb9d78d4fff5fac099e70e5eb3fbe2e/torch/csrc/api/src/nn/modules/conv.cpp), but still couldn't locate where the computation is happening.

My questions are twofold.


  1. Where is the source code in which "conv1d" is implemented?


  2. In general, if I want to check how a module is implemented, where is the best place to look? Any pointer to documentation would be appreciated. Thank you.

Answer

  1. It depends on the backend (GPU, CPU, distributed, etc.), but in the most interesting case of GPU it's pulled from cuDNN, which is released in binary format, so you can't inspect its source code. It's a similar story for MKL-DNN on CPU. I am not aware of any place where PyTorch would hand-roll its own convolution kernels, but I may be wrong. EDIT: indeed, I was wrong, as pointed out in an answer below.
  2. It's difficult without knowing how PyTorch is structured. A lot of the code is actually autogenerated from various markup files, as explained here. Figuring this out requires a lot of jumping around. For instance, the conv.cpp file you're linking uses torch::conv1d, which is defined here and uses at::convolution, which in turn uses at::_convolution (https://github.com/pytorch/pytorch/blob/517c7c98610402e2746586c78987c64c28e024aa/aten/src/ATen/native/Convolution.cpp#L272), which dispatches to multiple variants, for instance at::cudnn_convolution. at::cudnn_convolution is, I believe, generated here via a markup file and just plugs in directly to the cuDNN implementation (though I cannot pinpoint the exact point in the code where that happens).
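The looping cross-correlation the asker was searching for never appears as plain Python in the PyTorch tree (the real backends use heavily optimized kernels), but the computation itself is simple. Below is a minimal, dependency-free sketch of what conv1d computes; the function name and list-of-lists layout are my own illustration, not PyTorch's API. Note it is cross-correlation, i.e. the kernel is not flipped:

```python
# Naive sketch of the conv1d computation (cross-correlation).
# Layout assumed here: x is [in_channels][length],
# weight is [out_channels][in_channels][kernel_size].
# This mirrors the semantics of torch.nn.functional.conv1d for a
# single sample, but is NOT how PyTorch actually implements it.

def conv1d_sketch(x, weight, bias=None, stride=1, padding=0):
    in_channels = len(x)
    length = len(x[0])
    out_channels = len(weight)
    k = len(weight[0][0])
    # zero-pad every input channel on both sides
    padded = [[0.0] * padding + ch + [0.0] * padding for ch in x]
    out_len = (length + 2 * padding - k) // stride + 1
    out = []
    for oc in range(out_channels):
        row = []
        for i in range(out_len):
            acc = bias[oc] if bias is not None else 0.0
            start = i * stride
            # sum over input channels and kernel taps (no kernel flip)
            for ic in range(in_channels):
                for j in range(k):
                    acc += padded[ic][start + j] * weight[oc][ic][j]
            row.append(acc)
        out.append(row)
    return out


# one input channel of length 4, one output channel, kernel [1, 0, -1]
print(conv1d_sketch([[1.0, 2.0, 3.0, 4.0]], [[[1.0, 0.0, -1.0]]]))
```

This is O(out_channels × in_channels × out_len × k) and exists purely to make the arithmetic visible; the production paths (cuDNN, MKL-DNN, ATen native) use im2col/GEMM, FFT, or Winograd-style algorithms instead of these loops.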

