Search an ordered array in a CUDA kernel


Question

I'm writing a CUDA kernel in which each thread has to complete the following task: suppose I have an ordered array a of n unsigned integers (the first one is always 0) stored in shared memory. Each thread has to find the array index i such that a[i] <= threadIdx.x and a[i + 1] > threadIdx.x.

A naive solution could be:

for (i = 0; i < n - 1; i++)
    if (a[i + 1] > threadIdx.x) break;

but I suppose this is not the optimal way to do it... can anyone suggest anything better?

Answer

Like Robert, I was thinking that a binary search has got to be faster than a naïve loop -- the upper bound of the operation count for a binary search is O(log(n)), compared to O(n) for the loop.

My very simple implementation:

#include <iostream>
#include <climits>
#include <assert.h>

__device__  __host__
int midpoint(int a, int b)
{
    return a + (b-a)/2;
}

__device__ __host__
int eval(int A[], int i, int val, int imin, int imax)
{
    // Classify position i: 0 means A[i] <= val < A[i+1] (found),
    // -1 means the target index lies above i, 1 means it lies below.
    int low = (A[i] <= val);
    int high = (A[i+1] > val);

    if (low && high) {
        return 0;
    } else if (low) {
        return -1;
    } else {
        return 1;
    }
}

__device__ __host__
int binary_search(int A[], int val, int imin, int imax)
{
    // Repeatedly bisect [imin, imax] until eval reports a hit.
    while (imax >= imin) {
        int imid = midpoint(imin, imax);
        int e = eval(A, imid, val, imin, imax);
        if (e == 0) {
            return imid;
        } else if (e < 0) {
            imin = imid;
        } else {
            imax = imid;
        }
    }

    return -1;
}


__device__ __host__
int linear_search(int A[], int val, int imin, int imax)
{
    // Scan for the first element strictly greater than val;
    // the answer is the index just before it.
    int res = -1;
    for (int i = imin; i < (imax - 1); i++) {
        if (A[i+1] > val) {
            res = i;
            break;
        }
    }

    return res;
}

template<int version>
__global__
void search(int * source, int * result, int Nin, int Nout)
{
    // Stage one block's worth of the input in shared memory,
    // padding out-of-range entries with INT_MAX.
    extern __shared__ int buff[];
    int tid = threadIdx.x + blockIdx.x*blockDim.x;

    int val = INT_MAX;
    if (tid < Nin) val = source[threadIdx.x];
    buff[threadIdx.x] = val;
    __syncthreads();

    // Each thread searches for its own threadIdx.x.
    int res;
    switch(version) {

        case 0:
        res = binary_search(buff, threadIdx.x, 0, blockDim.x);
        break;

        case 1:
        res = linear_search(buff, threadIdx.x, 0, blockDim.x);
        break;
    }

    if (tid < Nout) result[tid] = res;
}

int main(void)
{
    const int inputLength = 128000;
    const int isize = inputLength * sizeof(int);
    const int outputLength = 256;
    const int osize = outputLength * sizeof(int);

    int * hostInput = new int[inputLength];
    int * hostOutput = new int[outputLength];
    int * deviceInput;
    int * deviceOutput;

    for(int i=0; i<inputLength; i++) {
        hostInput[i] = -200 + 5*i;
    }

    cudaMalloc((void**)&deviceInput, isize);
    cudaMalloc((void**)&deviceOutput, osize);

    cudaMemcpy(deviceInput, hostInput, isize, cudaMemcpyHostToDevice);

    dim3 DimBlock(256, 1, 1);
    dim3 DimGrid(1, 1, 1);
    DimGrid.x = (outputLength / DimBlock.x) + 
                ((outputLength % DimBlock.x > 0) ? 1 : 0); 
    size_t shmsz = DimBlock.x * sizeof(int);

    for(int i=0; i<5; i++) {
        search<1><<<DimGrid, DimBlock, shmsz>>>(deviceInput, deviceOutput, 
                inputLength, outputLength);
    }

    for(int i=0; i<5; i++) {
        search<0><<<DimGrid, DimBlock, shmsz>>>(deviceInput, deviceOutput,
                inputLength, outputLength);
    }

    cudaMemcpy(hostOutput, deviceOutput, osize, cudaMemcpyDeviceToHost);

    for(int i=0; i<outputLength; i++) {
        int idx = hostOutput[i];
        int tidx = i % DimBlock.x;
        assert( (hostInput[idx] <= tidx) && (tidx < hostInput[idx+1]) );
    } 
    cudaDeviceReset();

    return 0;
}

This gave about a five times speed up compared to the loop:

>nvprof a.exe
======== NVPROF is profiling a.exe...
======== Command: a.exe
======== Profiling result:
 Time(%)      Time   Calls       Avg       Min       Max  Name
   60.11  157.85us       1  157.85us  157.85us  157.85us  [CUDA memcpy HtoD]
   32.58   85.55us       5   17.11us   16.63us   19.04us  void search<int=1>(int*, int*, int, int)
    6.52   17.13us       5    3.42us    3.35us    3.73us  void search<int=0>(int*, int*, int, int)
    0.79    2.08us       1    2.08us    2.08us    2.08us  [CUDA memcpy DtoH]

I'm sure that someone clever could do a lot better than that. But perhaps this gives you at least a few ideas.
