How to use cudaMalloc / cudaMemcpy for a pointer to a structure containing pointers?
Question
I've looked all around this site and others, and nothing has worked. I'm resorting to posting a question for my specific case.
I have a bunch of matrices, and the goal is to use a kernel to let the GPU do the same operation on all of them. I'm pretty sure I can get the kernel to work, but I can't get cudaMalloc / cudaMemcpy to work.
I have a pointer to a Matrix structure, which has a member called elements that points to some floats. I can do all the non-cuda mallocs just fine.
Thanks for any/all help.
typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

int main(void) {
    int rows, cols, numMat = 2; // These are actually determined at run-time
    Matrix* data = (Matrix*)malloc(numMat * sizeof(Matrix));
    // ... Successfully read from file into "data" ...

    Matrix* d_data;
    cudaMalloc(&d_data, numMat * sizeof(Matrix));
    for (int i = 0; i < numMat; i++) {
        // The next line doesn't work
        cudaMalloc(&(d_data[i].elements), rows * cols * sizeof(float));
        // Don't know if this works
        cudaMemcpy(d_data[i].elements, data[i].elements,
                   rows * cols * sizeof(float), cudaMemcpyHostToDevice);
    }
    // ... Do other things ...
}
Answer
You have to be aware of where your memory resides. malloc allocates host memory; cudaMalloc allocates memory on the device and returns a pointer to it. However, that pointer is only valid to dereference in device functions. (Your loop fails because `d_data[i].elements` dereferences `d_data`, a device pointer, on the host.)
What you want can be achieved as follows:
typedef struct {
    int width;
    int height;
    float* elements;
} Matrix;

int main(void) {
    int rows, cols, numMat = 2; // These are actually determined at run-time
    Matrix* data = (Matrix*)malloc(numMat * sizeof(Matrix));
    // ... Successfully read from file into "data" ...

    // Host-side staging copy: same structs, but elements will point to device memory
    Matrix* h_data = (Matrix*)malloc(numMat * sizeof(Matrix));
    memcpy(h_data, data, numMat * sizeof(Matrix));
    for (int i = 0; i < numMat; i++) {
        cudaMalloc(&(h_data[i].elements), rows * cols * sizeof(float));
        cudaMemcpy(h_data[i].elements, data[i].elements,
                   rows * cols * sizeof(float), cudaMemcpyHostToDevice);
    }
    // Matrix data is now on the GPU; now copy the "meta" data to the GPU
    Matrix* d_data;
    cudaMalloc(&d_data, numMat * sizeof(Matrix));
    cudaMemcpy(d_data, h_data, numMat * sizeof(Matrix), cudaMemcpyHostToDevice);
    // ... Do other things ...
}
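One point the snippet leaves open is cleanup. Since `d_data` cannot be dereferenced on the host, the per-matrix device buffers must be freed through the host-side copies kept in `h_data`. A sketch, continuing with the same variable names:

```
// Free the device element buffers via the host-side staging structs,
// then the device struct array, then the host allocations.
for (int i = 0; i < numMat; i++) {
    cudaFree(h_data[i].elements); // device buffer for matrix i
}
cudaFree(d_data); // device array of Matrix structs
free(h_data);     // host staging copy
free(data);       // original host data
```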
To make things clear:

Matrix* data

contains the data on the host.

Matrix* h_data

lives on the host, but its elements members point to device memory; those pointers can be passed to kernels as parameters. The memory itself is on the GPU.

Matrix* d_data

is completely on the GPU and can be used in kernels the same way data is used on the host.
In your kernel code you can now access the matrix values, e.g.:
__global__ void doThings(Matrix* matrices)
{
    int i = threadIdx.x; // e.g., one thread per matrix
    matrices[i].elements[0] = 42;
}
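Launching the kernel and reading results back follows the same pattern in reverse: the device-side element pointers needed for the copy are the ones saved in `h_data`. A sketch with an illustrative launch configuration (one thread per matrix):

```
// Launch one thread per matrix (illustrative sizing).
doThings<<<1, numMat>>>(d_data);
cudaDeviceSynchronize();

// Copy each result buffer back to the host through the
// device pointers kept in h_data.
for (int i = 0; i < numMat; i++) {
    cudaMemcpy(data[i].elements, h_data[i].elements,
               rows * cols * sizeof(float), cudaMemcpyDeviceToHost);
}
```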