OpenCL Limit on for loop size?

Problem Description

UPDATE: `clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0, LIST_SIZE * sizeof(double), C, 0, NULL, NULL);` is returning -5, CL_OUT_OF_RESOURCES. This function/call should never return this!

I've started using OpenCL and have come across a problem. If I allow a for loop (in the kernel) to run 10000 times, I get all of C to be 0; if I allow the loop to run 8000 times, the results are all correct.

I have added waits around the kernel to ensure it completes, thinking I was pulling the data out before completion, and have tried both clWaitForEvents and clFinish. No errors are signalled by any of the calls. When I used ints, the for loop would work at a size of 4000000. Floats and doubles have the same problem; floats work at 10000 but not at 20000. When I used floats I removed `#pragma OPENCL EXTENSION cl_khr_fp64 : enable` to check that wasn't the problem.

Is this some weird memory thing, or am I using OpenCL wrong? I realise that in most kernels I won't be implementing for loops like this, but this seems like an issue. I have also removed `__private` to see if that was the problem; no change. So is there a limit on the size of for loops in OpenCL kernels? Is it hardware specific? Or is this a bug?

The kernel is a simple kernel which adds two arrays (A+B) together and outputs another (C). In order to get a feel for performance, I put a for loop around each calculation to slow it down/increase the number of operations per run through.

The kernel code is as follows:

#pragma OPENCL EXTENSION cl_khr_fp64 : enable

__kernel void vector_add(__global double *A, __global double *B, __global double *C)
{

    // Get the index of the current element
    int i = get_global_id(0);

    // Do the operation

    for (__private unsigned int j = 0; j < 10000; j++)
    {
        C[i] = A[i] + B[i];
    }
}

The code I'm running is as follows: (I ensure that the variables are consistent between both pieces of code when I switch between float and double)

#include <stdio.h>
#include <stdlib.h>
#include <iostream>

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define MAX_SOURCE_SIZE (0x100000)

int main(void) {
    // Create the two input vectors
    int i;
    const int LIST_SIZE = 4000000;
    double *A = (double*)malloc(sizeof(double)*LIST_SIZE);
    double *B = (double*)malloc(sizeof(double)*LIST_SIZE);
    for(i = 0; i < LIST_SIZE; i++) {
        A[i] = static_cast<double>(i);
        B[i] = static_cast<double>(LIST_SIZE - i);
    }

    // Load the kernel source code into the array source_str
    FILE *fp;
    char *source_str;
    size_t source_size;

    fp = fopen("vector_add_kernel.cl", "r");
    if (!fp) {
        fprintf(stderr, "Failed to load kernel.\n");
        exit(1);
    }
    source_str = (char*)malloc(MAX_SOURCE_SIZE);
    source_size = fread( source_str, 1, MAX_SOURCE_SIZE, fp);
    fclose( fp );

    // Get platform and device information
    cl_platform_id platform_id = NULL;
    cl_device_id device_id = NULL;
    cl_uint ret_num_devices;
    cl_uint ret_num_platforms;
//    clGetPlatformIDs(1, &platform_id, NULL);
//clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, ret_num_devices);


    cl_int ret = clGetPlatformIDs(1, &platform_id, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to get platforms! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    ret = clGetDeviceIDs(platform_id, CL_DEVICE_TYPE_GPU, 1, &device_id, &ret_num_devices);
    if (ret != CL_SUCCESS) {
        printf("Error: Failed to query platforms to get devices! (%d) \n", ret);
        return EXIT_FAILURE;
    }
/*
    cl_int ret = clGetPlatformIDs(1, &platform_id, NULL);
                if (ret != CL_SUCCESS) {
printf("Error: Failed to get platforms! (%d) \n", ret);
return EXIT_FAILURE;
}
    ret = clGetDeviceIDs( platform_id, CL_DEVICE_TYPE_CPU, 1,
            &device_id, &ret_num_devices);
            if (ret != CL_SUCCESS) {
printf("Error: Failed to query platforms to get devices! (%d) \n", ret);
return EXIT_FAILURE;
}
*/
    // Create an OpenCL context
    cl_context context = clCreateContext( NULL, 1, &device_id, NULL, NULL, &ret);

    // Create a command queue
    cl_command_queue command_queue = clCreateCommandQueue(context, device_id, 0, &ret);

    // Create memory buffers on the device for each vector
    cl_mem a_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    cl_mem b_mem_obj = clCreateBuffer(context, CL_MEM_READ_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    cl_mem c_mem_obj = clCreateBuffer(context, CL_MEM_WRITE_ONLY,
            LIST_SIZE * sizeof(double), NULL, &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Buffer Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Copy the lists A and B to their respective memory buffers
    ret = clEnqueueWriteBuffer(command_queue, a_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), A, 0, NULL, NULL);
    ret = clEnqueueWriteBuffer(command_queue, b_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), B, 0, NULL, NULL);

    std::cout << "Begin Compile" << "\n";
    // Create a program from the kernel source
    cl_program program = clCreateProgramWithSource(context, 1,
            (const char **)&source_str, (const size_t *)&source_size, &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Program Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Build the program
    ret = clBuildProgram(program, 1, &device_id, NULL, NULL, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: ProgramBuild Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Create the OpenCL kernel
    cl_kernel kernel = clCreateKernel(program, "vector_add", &ret);
    if (ret != CL_SUCCESS) {
        printf("Error: Kernel Build Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    std::cout << "End Compile" << "\n";

    std::cout << "Begin Data Move" << "\n";
    // Set the arguments of the kernel
    ret = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&a_mem_obj);
    ret = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&b_mem_obj);
    ret = clSetKernelArg(kernel, 2, sizeof(cl_mem), (void *)&c_mem_obj);
    std::cout << "End Data Move" << "\n";

    // Execute the OpenCL kernel on the list
    size_t global_item_size = LIST_SIZE; // Process the entire lists
    size_t local_item_size = 64; // Process in groups of 64

    std::cout << "Begin Execute" << "\n";
    cl_event event;
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL,
            &global_item_size, &local_item_size, 0, NULL, &event);
    clFinish(command_queue);
    //clWaitForEvents(1, &event);
    std::cout << "End Execute" << "\n";
    if (ret != CL_SUCCESS) {
        printf("Error: Execute Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }

    // Read the memory buffer C on the device to the local variable C
    std::cout << "Begin Data Move" << "\n";

    double *C = (double*)malloc(sizeof(double)*LIST_SIZE);
    ret = clEnqueueReadBuffer(command_queue, c_mem_obj, CL_TRUE, 0,
            LIST_SIZE * sizeof(double), C, 0, NULL, NULL);
    if (ret != CL_SUCCESS) {
        printf("Error: Read Fail! (%d) \n", ret);
        return EXIT_FAILURE;
    }
    clFinish(command_queue);
    std::cout << "End Data Move" << "\n";

    std::cout << "Done" << "\n";
    std::cin.get();
    // Display the result to the screen
    for(i = 0; i < LIST_SIZE; i++)
        printf("%f + %f = %f \n", A[i], B[i], C[i]);

    // Clean up
    ret = clFlush(command_queue);
    ret = clFinish(command_queue);
    ret = clReleaseKernel(kernel);
    ret = clReleaseProgram(program);
    ret = clReleaseMemObject(a_mem_obj);
    ret = clReleaseMemObject(b_mem_obj);
    ret = clReleaseMemObject(c_mem_obj);
    ret = clReleaseCommandQueue(command_queue);
    ret = clReleaseContext(context);
    free(A);
    free(B);
    free(C);
    std::cout << "Number of Devices: " << ret_num_devices << "\n";
    std::cin.get();
    return 0;
}

I've had a look on the internet and can't find people with similar problems. This is a concern, as it could lead to code that works well until scaled up...

I'm running Ubuntu 14.04 and have a laptop graphics card (an RC520) which I run with bumblebee/optirun. If this bug isn't reproducible on other machines up to a loop size of 4000000, then I will log a bug with bumblebee/optirun.

Cheers

Answer

I found the issue: GPUs attached to displays/active VGAs/etc. have a watchdog timer that times out after ~5 s. This is the case for cards that aren't Teslas, which allow this functionality to be turned off. Running on a secondary card is a workaround. This sucks and needs to be fixed ASAP. It's definitely an NVidia issue; not sure about AMD, but either way, this is terrible.

Workarounds are registry changes in Windows and, in Linux/Ubuntu, altering the X conf and placing:

Option "Interactive" "0"

in the section for the graphics card. However, the X conf is now not generated in later versions and may have to be created manually. If anyone has a copy-and-paste console code fix for this, that would be great and a better answer.
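A sketch of what the relevant xorg.conf fragment might look like (the Identifier and Driver lines are assumptions for a typical NVIDIA setup; adjust them to match your card, and place the section in /etc/X11/xorg.conf or a file under /etc/X11/xorg.conf.d/):

```
Section "Device"
    Identifier "nvidia"
    Driver     "nvidia"
    # Disable the ~5s kernel watchdog so long-running compute kernels
    # are not killed. WARNING: with the watchdog off, a runaway kernel
    # can freeze the display until the machine is reset.
    Option     "Interactive" "0"
EndSection
```

Restart the X server after editing for the option to take effect.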
