cudaMallocManaged with vector<complex<long double> > C++ - NVIDIA CUDA
Question
I am in the process of implementing multithreading through an NVIDIA GeForce GT 650M GPU for a simulation I have created. In order to make sure everything works properly, I have created some side code to test that everything works. At one point I need to update a vector of variables (they can all be updated separately).
Here is the gist of it:
```cpp
__device__
int doComplexMath(float x, float y)
{
    return x + y;
}
```
```cpp
// Kernel function to add the elements of two arrays
__global__
void add(int n, float *x, float *y, vector<complex<long double> > *z)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride)
        z[i] = doComplexMath(*x, *y);
}
```
```cpp
int main(void)
{
    int iGAMAf = 1<<10;
    float *x, *y;
    vector<complex<long double> > VEL(iGAMAf,zero);

    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, sizeof(float));
    cudaMallocManaged(&y, sizeof(float));
    cudaMallocManaged(&VEL, iGAMAf*sizeof(vector<complex<long double> >));

    // initialize x and y on the host
    *x = 1.0f;
    *y = 2.0f;

    // Run kernel on 1M elements on the GPU
    int blockSize = 256;
    int numBlocks = (iGAMAf + blockSize - 1) / blockSize;
    add<<<numBlocks, blockSize>>>(iGAMAf, x, y, *VEL);

    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();

    return 0;
}
```
I am trying to allocate unified memory (memory accessible from the GPU and CPU). When compiling using nvcc, I get the following error:
error: no instance of overloaded function "cudaMallocManaged" matches the argument list
argument types are: (std::__1::vector<std::__1::complex<long double>, std::__1::allocator<std::__1::complex<long double>>> *, unsigned long)
How can I overload the function properly in CUDA to use this type with multithreading?
Answer
It isn't possible to do what you are trying to do.
To allocate a vector using managed memory you would have to write your own allocator implementation, one that satisfies the allocator requirements consumed by std::allocator_traits and calls cudaMallocManaged under the hood. You can then instantiate a std::vector using your allocator class.
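The answer doesn't spell such an allocator out, so here is a minimal sketch of what it might look like. The name managed_allocator and the surrounding details are illustrative assumptions, not part of the original answer; it satisfies the minimal C++11 allocator requirements used through std::allocator_traits and backs every allocation with cudaMallocManaged:

```cpp
#include <cuda_runtime.h>
#include <complex>
#include <cstddef>
#include <new>
#include <vector>

// Minimal managed-memory allocator sketch (name and details are illustrative,
// not from the original answer). Every allocation is backed by
// cudaMallocManaged, so the vector's storage lives in unified memory.
template <class T>
struct managed_allocator
{
    using value_type = T;

    managed_allocator() = default;
    template <class U>
    managed_allocator(const managed_allocator<U>&) noexcept {}

    T* allocate(std::size_t n)
    {
        void* p = nullptr;
        if (cudaMallocManaged(&p, n * sizeof(T)) != cudaSuccess)
            throw std::bad_alloc();
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t) noexcept
    {
        cudaFree(p);
    }
};

template <class T, class U>
bool operator==(const managed_allocator<T>&, const managed_allocator<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const managed_allocator<T>&, const managed_allocator<U>&) noexcept { return false; }

// Host-side usage: the storage is accessible from both CPU and GPU,
// but the vector object itself remains a host-only construct.
// std::vector<std::complex<long double>,
//             managed_allocator<std::complex<long double>>> VEL(1 << 10);
```

The key point is that only the elements live in managed memory; the std::vector object itself, and any member function calls on it, stay on the host.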
Also note that your CUDA kernel code is broken in that you can't use std::vector in device code.
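As a rough illustration of that point, here is a sketch under the assumption that the element type is switched to thrust::complex<double> (std::complex has no device-callable operations and long double arithmetic is not supported in device code). The kernel receives a raw pointer into the managed allocation rather than a std::vector:

```cpp
#include <thrust/complex.h>

__device__
thrust::complex<double> doComplexMath(float x, float y)
{
    return thrust::complex<double>(x + y, 0.0);
}

// The kernel works on a raw pointer to the managed storage; the std::vector
// object itself never crosses into device code.
__global__
void add(int n, const float* x, const float* y, thrust::complex<double>* z)
{
    int index  = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = blockDim.x * gridDim.x;
    for (int i = index; i < n; i += stride)
        z[i] = doComplexMath(*x, *y);
}

// Host-side launch, assuming VEL is a std::vector of thrust::complex<double>
// built with the managed allocator sketched above:
//   add<<<numBlocks, blockSize>>>(iGAMAf, x, y, VEL.data());
//   cudaDeviceSynchronize();
```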
Note that although the question has managed memory in view, this is applicable to other types of CUDA allocation such as pinned allocation.
As another alternative, suggested here, you could consider using a thrust host vector in lieu of std::vector and use a custom allocator with it. A worked example is here in the case of pinned allocator (cudaMallocHost/cudaHostAlloc).
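A hedged sketch of that alternative, following the same allocator pattern but calling cudaMallocHost/cudaFreeHost instead; the name pinned_allocator is illustrative, and the exact allocator requirements accepted by thrust::host_vector can vary between Thrust versions:

```cpp
#include <cuda_runtime.h>
#include <cstddef>
#include <new>
#include <thrust/complex.h>
#include <thrust/host_vector.h>

// Pinned (page-locked) host memory allocator sketch; host<->device copies
// from such storage can be faster and may run asynchronously.
template <class T>
struct pinned_allocator
{
    using value_type = T;

    pinned_allocator() = default;
    template <class U>
    pinned_allocator(const pinned_allocator<U>&) noexcept {}

    T* allocate(std::size_t n)
    {
        void* p = nullptr;
        if (cudaMallocHost(&p, n * sizeof(T)) != cudaSuccess)
            throw std::bad_alloc();
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t) noexcept
    {
        cudaFreeHost(p);
    }
};

template <class T, class U>
bool operator==(const pinned_allocator<T>&, const pinned_allocator<U>&) noexcept { return true; }
template <class T, class U>
bool operator!=(const pinned_allocator<T>&, const pinned_allocator<U>&) noexcept { return false; }

// Usage: a thrust::host_vector whose storage is pinned.
// thrust::host_vector<thrust::complex<double>,
//                     pinned_allocator<thrust::complex<double>>> VEL(1 << 10);
```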