Online compilation of single CUDA function

Question

I have a function in my program called float valueAt(float3 v). It's supposed to return the value of a function at the given point. The function is user-specified. I have an interpreter for this function at the moment, but others recommended I compile the function online so it's in machine code and is faster.

How do I do this? I believe I know how to load the function when I have PTX generated, but I have no idea how to generate the PTX.

Answer

CUDA provides no way of runtime compilation of non-PTX code.

What you want can be done, but not using the standard CUDA APIs. PyCUDA provides an elegant just-in-time compilation method for CUDA C code: behind the scenes it forks the toolchain to compile to device code and loads the result using the runtime API. The (possible) downside is that you need to use Python for the top level of your application, and if you are shipping code to third parties, you might need to ship a working Python distribution too.
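
For illustration, here is a minimal sketch of that PyCUDA workflow. The valueAt body and the evaluate wrapper kernel are placeholders invented for this example (they are not from the original question); in practice you would splice the user-supplied expression into the source string before handing it to SourceModule.

```python
# Minimal PyCUDA JIT sketch (assumes pycuda and numpy are installed).
# The expression and the "evaluate" wrapper kernel are hypothetical stand-ins
# for whatever the user actually specifies.
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the first available GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

user_expression = "v.x * v.x + v.y * v.y + v.z * v.z"   # e.g. taken from user input

source = """
__device__ float valueAt(float3 v)
{
    return %s;
}

__global__ void evaluate(float *out, const float3 *points)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = valueAt(points[i]);
}
""" % user_expression

mod = SourceModule(source)      # invokes the CUDA toolchain behind the scenes and loads the result
evaluate = mod.get_function("evaluate")

n = 256
points = np.random.rand(n, 3).astype(np.float32)   # packed xyz triples match float3 layout
out = np.empty(n, dtype=np.float32)

evaluate(drv.Out(out), drv.In(points), block=(n, 1, 1), grid=(1, 1))
print(out[:4])
```

Rebuilding the SourceModule whenever the user changes the expression gives you a freshly compiled valueAt without restarting the application.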

The only other alternative I can think of is OpenCL, which does support runtime compilation (that is all it supported until recently). The C99 language base is a lot more restrictive than what CUDA offers, and I find the APIs to be very verbose, but the runtime compilation model works well.
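
To keep the examples in one language, the OpenCL route is sketched below through the pyopencl bindings; this is my own choice of wrapper, the answer only talks about OpenCL in general, and the underlying C calls are clCreateProgramWithSource and clBuildProgram. The valueAt body is again a placeholder for user-specified code.

```python
# OpenCL runtime-compilation sketch using pyopencl (assumed installed).
import numpy as np
import pyopencl as cl

source = """
float valueAt(float3 v)
{
    return v.x * v.x + v.y * v.y + v.z * v.z;
}

__kernel void evaluate(__global float *out, __global const float3 *points)
{
    int i = get_global_id(0);
    out[i] = valueAt(points[i]);
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, source).build()    # compilation happens here, at runtime

n = 256
points = np.zeros((n, 4), dtype=np.float32)  # float3 is padded to 16 bytes in OpenCL
points[:, :3] = np.random.rand(n, 3)
out = np.empty(n, dtype=np.float32)

mf = cl.mem_flags
points_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=points)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

program.evaluate(queue, (n,), None, out_buf, points_buf)
cl.enqueue_copy(queue, out, out_buf)
print(out[:4])
```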
