Profiling a CUDA matrix addition code, using nvprof: the code API profiles, the kernel does not


Question

I am using a remote workstation with an NVIDIA GeForce GPU. After compiling and executing, this is what appears on screen when I try to profile:

This is the output when I run nvidia-smi:

#include <stdio.h>
#include <cuda.h>
#include <math.h>

__global__ void matrixInit(double *matrix, int width, int height, double value){
    for(int i = (threadIdx.x + blockIdx.x * blockDim.x); i<width; i+=(blockDim.x * gridDim.x)){
        for(int j = (threadIdx.y + blockIdx.y * blockDim.y); j<height; j+=(blockDim.y * gridDim.y)){
            matrix[j * width +i] = value;
        }
    }
}

__global__ void matrixAdd(double *d_A, double *d_B, double *d_C, int width, int height){
    int ix = threadIdx.x + blockIdx.x * blockDim.x;
    int iy = threadIdx.y + blockIdx.y * blockDim.y;

    int stride_x = blockDim.x * gridDim.x;
    int stride_y = blockDim.y * gridDim.y;

    for(int j=iy; j<height; j+=stride_y){
        for(int i=ix; i<width; i+=stride_x){
            int index = j * width +i;
           d_C[index] = d_A[index-1] + d_B[index];
        }
    }
}

int main(){
    int Nx = 1<<12;
    int Ny = 1<<15;


    size_t size = Nx*Ny*sizeof(double);

 // host memory pointers
    double *A, *B, *C;

 // device memory pointers
    double *d_A, *d_B, *d_C;

    // allocate host memory
    A = (double*)malloc(size);
    B = (double*)malloc(size);
    C = (double*)malloc(size);

    // kernel call
    int thread = 32;
    int block_x = ceil(Nx + thread -1)/thread;
    int block_y = ceil(Ny + thread -1)/thread;

    dim3 THREADS(thread,thread);
    dim3 BLOCKS(block_y,block_x);

    // initialize variables
    matrixInit<<<BLOCKS,THREADS>>>(A, Nx, Ny, 1.0);
    matrixInit<<<BLOCKS,THREADS>>>(B, Nx, Ny, 2.0);
    matrixInit<<<BLOCKS,THREADS>>>(C, Nx, Ny, 0.0);

    //allocated device memory

    cudaMalloc(&d_A, size);
    cudaMalloc(&d_B, size);
    cudaMalloc(&d_C, size);


//copy to device
    cudaMemcpy(d_A, A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, B, size, cudaMemcpyHostToDevice);


// Add matrix at GPU
    matrixAdd<<<BLOCKS,THREADS>>>(A, B, C, Nx, Ny);

//copy back to host
    cudaMemcpy(C, d_C, size, cudaMemcpyDeviceToHost);

    cudaFree(A);
    cudaFree(B);
    cudaFree(C);

    return 0;

}

This is my code. In summary, the profiling run ends with the following warning messages:

==525867== Warning: 4 records have invalid timestamps due to insufficient device buffer space. You can configure the buffer space using the option --device-buffer-size.                
==525867== Warning: 1 records have invalid timestamps due to insufficient semaphore pool size. You can configure the pool size using the option --profiling-semaphore-pool-size. 
==525867== Profiling result: No kernels were profiled.

Answer

matrixInit<<<BLOCKS,THREADS>>>(A, Nx, Ny, 1.0);
matrixInit<<<BLOCKS,THREADS>>>(B, Nx, Ny, 2.0);
matrixInit<<<BLOCKS,THREADS>>>(C, Nx, Ny, 0.0);

Here you are writing to host memory, which is not allowed.

Instead, run matrixInit() directly on the device arrays d_A, d_B and d_C, after allocating them.
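A minimal sketch of that reordering, reusing the names from the question's code (an illustration of the fix, not the asker's exact program):

```cuda
// Allocate device memory *before* the first kernel launch that uses it.
cudaMalloc(&d_A, size);
cudaMalloc(&d_B, size);
cudaMalloc(&d_C, size);

// Initialize the device arrays. Launching matrixInit on the host
// pointers A, B, C makes every kernel launch fault, which is why
// nvprof reports "No kernels were profiled".
matrixInit<<<BLOCKS,THREADS>>>(d_A, Nx, Ny, 1.0);
matrixInit<<<BLOCKS,THREADS>>>(d_B, Nx, Ny, 2.0);
matrixInit<<<BLOCKS,THREADS>>>(d_C, Nx, Ny, 0.0);
```

Note that once the arrays are initialized on the GPU, the two cudaMemcpy(..., cudaMemcpyHostToDevice) calls for A and B should be dropped as well, otherwise they overwrite the initialized device data with uninitialized host memory.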

Another bug here:

cudaFree(A);
cudaFree(B);
cudaFree(C);

These should be d_A, d_B and d_C. Use regular free() for A, B and C.
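Sketched out, the cleanup becomes (again reusing the question's names):

```cuda
// cudaFree releases memory allocated with cudaMalloc (device side) ...
cudaFree(d_A);
cudaFree(d_B);
cudaFree(d_C);

// ... while memory allocated with malloc (host side) is released with free.
free(A);
free(B);
free(C);
```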

Your kernels also don't do what you intend. You launch them with one thread per matrix entry, which means there should be no for() loops inside the kernels.
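One possible rewrite of matrixAdd along those lines is sketched below, replacing the grid-stride loops with a bounds check (this is one valid form, not the only one). Note also that the question's kernel reads d_A[index-1], which looks like an off-by-one typo, and that the matrixAdd launch in main() passes the host pointers A, B, C where the device pointers d_A, d_B, d_C are needed.

```cuda
__global__ void matrixAdd(const double *d_A, const double *d_B, double *d_C,
                          int width, int height){
    int ix = threadIdx.x + blockIdx.x * blockDim.x;
    int iy = threadIdx.y + blockIdx.y * blockDim.y;

    // One thread per matrix entry: guard the edges, no loops needed.
    if (ix < width && iy < height){
        int index = iy * width + ix;
        d_C[index] = d_A[index] + d_B[index];  // was d_A[index-1] in the question
    }
}
```

launched as matrixAdd<<<BLOCKS,THREADS>>>(d_A, d_B, d_C, Nx, Ny);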

