What's the 'right' way to implement a 32-bit memset for CUDA?
Question
CUDA has the API call
cudaError_t cudaMemset (void *devPtr, int value, size_t count)
which fills a buffer with a single-byte value. I want to fill it with a multi-byte value. Suppose, for the sake of simplicity, that I want to fill devPtr with a 32-bit (4-byte) value, and suppose we can ignore endianness. Now, the CUDA driver has the following API call:
CUresult cuMemsetD32(CUdeviceptr dstDevice, unsigned int ui, size_t N)
So is it enough for me to just obtain the CUdeviceptr from the device-memory-space pointer, then make the driver API call? Or is there something else I need to be doing?
Answer
As of about CUDA 3.0, runtime API device pointers (and everything else) are interoperable with the driver API. So yes, you can use cuMemsetD32 to fill a runtime API allocation with a 32-bit value. The size of CUdeviceptr will match the size of void * on your platform, and it is safe to cast a pointer from the CUDA runtime API to CUdeviceptr, or vice versa.
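A minimal sketch of that interoperation, assuming a CUDA 3.0+ toolkit and linking against both the runtime and driver libraries (the names buf and num_elements are illustrative, not from the original answer):

```cuda
#include <cuda.h>          // driver API: CUdeviceptr, cuMemsetD32
#include <cuda_runtime.h>  // runtime API: cudaMalloc, cudaFree
#include <cstdio>

int main() {
    const size_t num_elements = 1024;   // illustrative element count
    unsigned int *buf = nullptr;

    // Allocate with the runtime API; this also initializes the CUDA
    // context that the driver API call below will operate in.
    if (cudaMalloc(&buf, num_elements * sizeof(unsigned int)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    // Cast the runtime pointer to CUdeviceptr and set every 32-bit word.
    // Note: cuMemsetD32's last argument counts 32-bit elements, not bytes.
    CUresult res = cuMemsetD32(reinterpret_cast<CUdeviceptr>(buf),
                               0xDEADBEEFu, num_elements);
    if (res != CUDA_SUCCESS) {
        fprintf(stderr, "cuMemsetD32 failed: %d\n", static_cast<int>(res));
    }

    cudaFree(buf);
    return 0;
}
```

The cast works in either direction because, as noted above, CUdeviceptr and void * have the same size on a given platform.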