Difference between @cuda.jit and @jit(target='gpu')


Question

I have a question about working with the Python CUDA libraries from Continuum's Accelerate and numba packages. Is using the decorator @jit with target='gpu' the same as using @cuda.jit?

Answer

No, they are not the same, although the eventual compilation path down to PTX is. The @jit decorator is the general compiler path, which can optionally be steered onto a CUDA device. The @cuda.jit decorator is effectively the low-level Python CUDA kernel dialect that Continuum Analytics developed. With @cuda.jit you therefore get support for CUDA built-in variables such as threadIdx and memory space specifiers such as __shared__.
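To illustrate, here is a minimal sketch of a @cuda.jit kernel using current numba syntax; the kernel name, data, and launch configuration are illustrative choices, not from the original answer:

```python
from numba import cuda
import numpy as np

@cuda.jit
def double_kernel(arr):
    # CUDA built-ins (threadIdx, blockIdx, blockDim) are exposed
    # as attributes of the cuda module inside the kernel
    i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    if i < arr.size:
        arr[i] *= 2.0

data = np.arange(16, dtype=np.float32)
d_data = cuda.to_device(data)     # explicit host-to-device transfer
double_kernel[1, 32](d_data)      # launch one block of 32 threads
print(d_data.copy_to_host())      # [0. 2. 4. ...]
```

Shared memory is available in the same dialect via cuda.shared.array, which plays the role of a __shared__ declaration in CUDA C.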

If you want to write a CUDA kernel in Python and compile and run it, use @cuda.jit. Otherwise, if you want to accelerate an existing piece of Python, use @jit with a CUDA target.
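For contrast, here is a minimal sketch of the "accelerate existing Python" path. The @jit(target='gpu') spelling from the question belongs to the deprecated Accelerate/NumbaPro API, so this sketch instead uses current numba's @vectorize with target='cuda' as a comparable compile-for-GPU path; the add function is a hypothetical example:

```python
from numba import vectorize
import numpy as np

# Ordinary scalar Python code; numba compiles it into a
# CUDA ufunc and handles the kernel launch automatically.
@vectorize(['float32(float32, float32)'], target='cuda')
def add(a, b):
    return a + b

x = np.arange(1_000_000, dtype=np.float32)
print(add(x, x)[:5])   # [0. 2. 4. 6. 8.]
```

Note that unlike the @cuda.jit kernel above, this code never touches threadIdx or grid dimensions: the thread mapping is inferred for you, which is exactly the trade-off the answer describes.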
