Different results for CUDA addition on host and on GPU
Question
I have a function which takes a color picture and returns the gray version of it. If I run the sequential code on the host, everything works perfectly. If I run it on the device, the result is slightly different: roughly one pixel in a thousand is off by +1 or -1 compared to the correct value.
I think this has something to do with the conversions, but I don't know for sure. This is the code I use:
__global__ void rgb2gray_d(unsigned char *deviceImage, unsigned char *deviceResult, const int height, const int width) {
    /* calculate the global thread id */
    int threadsPerBlock = blockDim.x * blockDim.y;
    int threadNumInBlock = threadIdx.x + blockDim.x * threadIdx.y;
    int blockNumInGrid = blockIdx.x + gridDim.x * blockIdx.y;
    int globalThreadNum = blockNumInGrid * threadsPerBlock + threadNumInBlock;
    int i = globalThreadNum;

    float grayPix = 0.0f;
    float r = static_cast< float >(deviceImage[i]);
    float g = static_cast< float >(deviceImage[(width * height) + i]);
    float b = static_cast< float >(deviceImage[(2 * width * height) + i]);

    grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);
    deviceResult[i] = static_cast< unsigned char >(grayPix);
}
void rgb2gray(unsigned char *inputImage, unsigned char *grayImage, const int width, const int height, NSTimer &timer) {
    unsigned char *deviceImage;
    unsigned char *deviceResult;

    int initialBytes = width * height * 3;
    int endBytes = width * height * sizeof(unsigned char);
    unsigned char grayImageSeq[endBytes];

    cudaMalloc((void**) &deviceImage, initialBytes);
    cudaMalloc((void**) &deviceResult, endBytes);
    cudaMemset(deviceResult, 0, endBytes);
    cudaMemset(deviceImage, 0, initialBytes);

    cudaError_t err = cudaMemcpy(deviceImage, inputImage, initialBytes, cudaMemcpyHostToDevice);

    // Convert the input image to grayscale
    rgb2gray_d<<<width * height / 256, 256>>>(deviceImage, deviceResult, height, width);
    cudaDeviceSynchronize();

    cudaMemcpy(grayImage, deviceResult, endBytes, cudaMemcpyDeviceToHost);

    ////// Sequential
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float grayPix = 0.0f;
            float r = static_cast< float >(inputImage[(y * width) + x]);
            float g = static_cast< float >(inputImage[(width * height) + (y * width) + x]);
            float b = static_cast< float >(inputImage[(2 * width * height) + (y * width) + x]);

            grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);
            grayImageSeq[(y * width) + x] = static_cast< unsigned char >(grayPix);
        }
    }

    // compare the sequential and CUDA results and print the pixels that differ
    for (int i = 0; i < endBytes; i++) {
        if (grayImage[i] != grayImageSeq[i])
            cout << i << "-" << static_cast< unsigned int >(grayImage[i]) <<
                " should be " << static_cast< unsigned int >(grayImageSeq[i]) << endl;
    }

    cudaFree(deviceImage);
    cudaFree(deviceResult);
}
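A side note on the launch configuration (hedged; this is not the cause of the ±1 mismatch the question is about): `<<<width * height / 256, 256>>>` only covers every pixel when width * height is a multiple of 256. A defensive variant would round the grid size up and bounds-check in the kernel:

// Hypothetical guarded variant of the launch, not part of the original code.
int threadsPerBlock = 256;
int blocks = (width * height + threadsPerBlock - 1) / threadsPerBlock;  // ceiling division
rgb2gray_d<<<blocks, threadsPerBlock>>>(deviceImage, deviceResult, height, width);
// ...and at the top of rgb2gray_d:
//   if (i >= width * height) return;  // skip out-of-range threads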
I should mention that I allocate width * height * 3 bytes for the initial image because the initial image is a CImg.
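For reference, the indexing in both the kernel and the sequential loop assumes the planar layout CImg uses: the whole red plane first, then the green plane, then the blue plane. A small hypothetical helper (not in the original code) spells the mapping out:

// Hypothetical helper: reads channel c (0 = R, 1 = G, 2 = B) of pixel (x, y)
// from a planar RGB buffer of size width * height * 3.
__host__ __device__ inline unsigned char planarAt(const unsigned char *img,
                                                  int width, int height,
                                                  int x, int y, int c) {
    return img[c * width * height + y * width + x];
}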
I work on a GeForce GTX 480.
Answer
Finally I found the answer. CUDA automatically uses fused multiply-add in both single and double precision. Using the document below [1], Section 4.4, I managed to fix it. Instead of doing
grayPix = (0.3f * r) + (0.59f * g) + (0.11f * b);
I am now doing
grayPix = __fadd_rn(__fadd_rn(__fmul_rn(0.3f, r),__fmul_rn(0.59f, g)), __fmul_rn(0.11f, b));
This prevents the compiler from merging the multiplies and adds into fused multiply-add instructions.
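For intuition: a fused multiply-add computes a * b + c with a single rounding at the end, while the host computes round(round(a * b) + c), rounding twice. Over a chain of two additions the results can differ in the last bit of the float, which after truncation to unsigned char shows up as the ±1 the question describes. A minimal demo kernel, assuming this hypothetical setup (not from the original post):

#include <cstdio>

__global__ void fma_demo(float r, float g, float b) {
    // Default codegen: the compiler may contract each multiply-add pair into
    // a single FMA instruction, which rounds only once per fused operation.
    float contracted = (0.3f * r) + (0.59f * g) + (0.11f * b);
    // The intrinsics force separately rounded multiplies and adds, matching
    // what the host's scalar floating-point code computes.
    float separate = __fadd_rn(__fadd_rn(__fmul_rn(0.3f, r),
                                         __fmul_rn(0.59f, g)),
                               __fmul_rn(0.11f, b));
    printf("contracted=%.9g separate=%.9g\n", contracted, separate);
}

int main() {
    fma_demo<<<1, 1>>>(100.0f, 150.0f, 200.0f);
    cudaDeviceSynchronize();
    return 0;
}

Alternatively, compiling with nvcc -fmad=false turns off this contraction for the whole translation unit, at some cost in speed and (arguably) accuracy, without rewriting each expression.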