How can I write the memory pointer in CUDA

Question

I declared two GPU memory pointers, allocated the GPU memory, transferred the data, and launched the kernel in main:

// declare GPU memory pointers
char * gpuIn;
char * gpuOut;

// allocate GPU memory
cudaMalloc(&gpuIn, ARRAY_BYTES);
cudaMalloc(&gpuOut, ARRAY_BYTES);

// transfer the array to the GPU
cudaMemcpy(gpuIn, currIn, ARRAY_BYTES, cudaMemcpyHostToDevice);

// launch the kernel
role<<<dim3(1),dim3(40,20)>>>(gpuOut, gpuIn);

// copy back the result array to the CPU
cudaMemcpy(currOut, gpuOut, ARRAY_BYTES, cudaMemcpyDeviceToHost);

cudaFree(gpuIn);
cudaFree(gpuOut);

And this is my code inside the kernel:

__global__ void role(char * gpuOut, char * gpuIn){
    int idx = threadIdx.x;
    int idy = threadIdx.y;

    char live = '0';
    char dead = '.';

    char f = gpuIn[idx][idy];

    if(f==live){ 
       gpuOut[idx][idy]=dead;
    }
    else{
       gpuOut[idx][idy]=live;
    } 
}

But there are some errors; I think the problem is with the pointers. Can anybody help?

Answer

The key concept is the storage order of multidimensional arrays in memory -- this is well described here. A useful abstraction is to define a simple class which encapsulates a pointer to a multidimensional array stored in linear memory and provides an operator which gives something like the usual a[i][j] style access. Your code could be modified to look something like this:

// Lightweight wrapper around a pointer to a 2D array stored row-major in
// linear memory; usable on both the host and the device.
template<typename T>
struct array2d
{
    T* p;        // pointer to the first element
    size_t lda;  // leading dimension, i.e. the number of columns per row

    __device__ __host__
    array2d(T* _p, size_t _lda) : p(_p), lda(_lda) {};

    // element access in the familiar (row, column) style
    __device__ __host__
    T& operator()(size_t i, size_t j) {
        return p[j + i * lda];
    }
    __device__ __host__
    const T& operator()(size_t i, size_t j) const {
        return p[j + i * lda];
    }
};

__global__ void role(array2d<char> gpuOut, array2d<char> gpuIn){
    int idx = threadIdx.x;
    int idy = threadIdx.y;

    char live = '0';
    char dead = '.';

    char f = gpuIn(idx,idy);

    if(f==live){ 
       gpuOut(idx,idy)=dead;
    }
    else{
       gpuOut(idx,idy)=live;
    } 
}

int main()
{        
    const int rows = 5, cols = 6;
    const size_t ARRAY_BYTES = sizeof(char) * size_t(rows * cols);

    // declare GPU memory pointers
    char * gpuIn;
    char * gpuOut;

    char currIn[rows][cols], currOut[rows][cols];

    // allocate GPU memory
    cudaMalloc(&gpuIn, ARRAY_BYTES);
    cudaMalloc(&gpuOut, ARRAY_BYTES);

    // transfer the array to the GPU
    cudaMemcpy(gpuIn, currIn, ARRAY_BYTES, cudaMemcpyHostToDevice);

    // launch the kernel
    role<<<dim3(1),dim3(rows,cols)>>>(array2d<char>(gpuOut, cols), array2d<char>(gpuIn, cols));

    // copy back the result array to the CPU
    cudaMemcpy(currOut, gpuOut, ARRAY_BYTES, cudaMemcpyDeviceToHost);

    cudaFree(gpuIn);
    cudaFree(gpuOut);

    return 0;
}

The important point here is that a two-dimensional C or C++ array stored in linear memory can be addressed as col + row * number of cols. The class in the code above is just a convenient way of expressing this.
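To make that mapping concrete, here is a small host-only sketch (not part of the original answer) which checks that the flat offset col + row * cols reaches the same element as ordinary a[row][col] indexing:

#include <cassert>

int main()
{
    const int rows = 3, cols = 4;
    int a[rows][cols];

    // fill the array with distinct values
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            a[r][c] = r * 100 + c;

    // a 2D array is laid out one row after another in memory, so element
    // (r, c) sits at flat offset c + r * cols from the start of the array
    const int* flat = &a[0][0];
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            assert(flat[c + r * cols] == a[r][c]);

    return 0;
}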

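As an aside, since the question mentions errors: the CUDA runtime reports failures through return codes, so it helps to check them after the launch. A minimal sketch (not part of the original answer; the dummy kernel is just a stand-in) could look like this:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummy() { }    // stand-in for the real kernel

int main()
{
    dummy<<<1, 1>>>();

    // errors in the launch configuration are reported by cudaGetLastError
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        printf("launch failed: %s\n", cudaGetErrorString(err));

    // errors during kernel execution surface when we synchronize
    err = cudaDeviceSynchronize();
    if (err != cudaSuccess)
        printf("kernel failed: %s\n", cudaGetErrorString(err));

    return 0;
}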