Define variable size on array in local memory, using CUDA
Question
Is it somehow possible to create a list or array inside a device function, with the size of the list/array being a parameter of the call… or a global variable that's initialized at call time?
I would like something like one of these lists to work:
unsigned int size1;

__device__ void function(int size2) {
    int list1[size1];
    int list2[size2];
}
Is it possible to do something smart to make something like this work?
Answer
There is one way to allocate a dynamic amount of shared memory: use the third kernel launch parameter:
__global__ void kernel(int *arr)
{
    extern __shared__ int buf[]; // size is not stated here
    // copy data to shared mem:
    buf[threadIdx.x] = arr[blockIdx.x * blockDim.x + threadIdx.x];
    // . . .
}
// . . .
// launch kernel, set size of shared mem in bytes (k elements in buf):
kernel<<<grid, threads, k * sizeof(int)>>>(arr);
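To illustrate why you would stage data in the dynamically sized buffer, here is a hedged sketch of a complete kernel that reverses each block's segment of the array; the reversal task and the names `reverse_block` and `d_arr` are illustrative additions, not part of the original answer:

```cuda
// Reverse each block's segment of arr using a dynamically sized shared buffer.
// Assumes the grid exactly covers the array (length = gridDim.x * blockDim.x).
__global__ void reverse_block(int *arr)
{
    extern __shared__ int buf[];               // size set at launch
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    buf[threadIdx.x] = arr[i];                 // stage element in shared memory
    __syncthreads();                           // wait until the whole block has loaded
    arr[i] = buf[blockDim.x - 1 - threadIdx.x]; // write back in reversed order
}

// Host side: one int of shared memory per thread in the block.
// reverse_block<<<grid, threads, threads * sizeof(int)>>>(d_arr);
```

Because `buf` has no declared size, forgetting the third launch parameter (or passing too few bytes) leads to out-of-bounds shared memory accesses rather than a compile error.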
There is a hack for many arrays:
__device__ void function(int *a, int *b, int k) // k elements in first list
{
    extern __shared__ int list1[];
    extern __shared__ int list2[]; // list2 points to the same address as list1
    list1[threadIdx.x] = a[blockIdx.x * blockDim.x + threadIdx.x];
    list2[k + threadIdx.x] = b[blockIdx.x * blockDim.x + threadIdx.x];
    // . . .
}
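This aliasing hack still requires the host to reserve room for both lists in the single dynamic allocation. A minimal launch sketch, assuming `list1` holds `k` ints and `list2` holds `blockDim.x` ints per block; the wrapper kernel and variable names here are illustrative, not from the original answer:

```cuda
// Wrapper kernel so the device function above can be launched from the host.
__global__ void kernel(int *a, int *b, int k)
{
    function(a, b, k);
}

// Host side: reserve room for BOTH lists in one dynamic allocation.
// list1 occupies the first k ints; list2 starts at offset k.
int k = threads;                            // illustrative choice of k
size_t shmem = (k + threads) * sizeof(int); // total bytes per block
kernel<<<grid, threads, shmem>>>(d_a, d_b, k);
```

Since both `extern __shared__` declarations alias the same base address, every index into `list2` must be shifted by `k`, exactly as the answer's code does; the compiler will not do this partitioning for you.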
Keep in mind that the dynamic shared memory size you pass at launch is allocated per block, so it must cover everything the block stores there.