Custom reduction on GPU vs CPU yields different results


Question

Why am I seeing a different result on the GPU compared to a sequential reduction on the CPU?

import numpy
from numba import cuda
from functools import reduce

A = (numpy.arange(100, dtype=numpy.float64)) + 1
cuda.reduce(lambda a, b: a + b * 20)(A) 
# result 12952749821.0
reduce(lambda a, b: a + b * 20, A) 
# result 100981.0

import numba
numba.__version__
# '0.34.0+5.g1762237'
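The discrepancy can be reproduced without a GPU at all. The sketch below contrasts the left-to-right fold that `functools.reduce` performs with a pairwise, tree-shaped grouping of the same function; `tree_reduce` here is an illustrative assumption about how a parallel reducer might group the operands, not Numba's actual schedule:

```python
from functools import reduce
import numpy

def f(a, b):
    return a + b * 20

def tree_reduce(xs):
    # Pairwise (tree-shaped) reduction: combine neighbours, then combine
    # the partial results, as a parallel reducer typically would.
    xs = list(xs)
    while len(xs) > 1:
        paired = [f(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # carry an unpaired trailing element forward
            paired.append(xs[-1])
        xs = paired
    return xs[0]

A = numpy.arange(100, dtype=numpy.float64) + 1
print(reduce(f, A))    # 100981.0 -- the sequential left fold
print(tree_reduce(A))  # a different (much larger) value: only the grouping changed
```

Because `f` applies the `* 20` only to its second argument, every regrouping feeds different values into that multiplication, so the two schedules cannot agree.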

Similar behavior occurs when using the Java Stream API to parallelize the reduction on the CPU:

int n = 10;
float inputArray[] = new float[n];
ArrayList<Float> inputList = new ArrayList<Float>();
for (int i=0; i<n; i++)
{
    inputArray[i] = i+1;
    inputList.add(inputArray[i]);
}
Optional<Float> resultStream = inputList.stream().parallel().reduce((x, y) -> x+y*20);
float sequentialResult = inputArray[0];
for (int i = 1; i < inputArray.length; i++)
{
    sequentialResult = sequentialResult + inputArray[i] * 20;
}
System.out.println("Sequential Result "+sequentialResult); 
// Sequential Result 10541.0
System.out.println("Stream Result "+resultStream.get()); 
// Stream Result 1.2466232E8

Answer

As pointed out by the Numba team, `lambda a, b: a + b * 20` is not an associative (or commutative) reduction function, which is what produces the unexpected result: both `cuda.reduce` and `Stream.reduce` require an associative accumulator precisely because they are free to regroup and reorder the fold across threads.
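A three-element counterexample makes the point: the value of this function depends on how the calls are grouped, so any reducer that regroups the fold (GPU thread blocks, parallel streams) may legitimately return a different number.

```python
def f(a, b):
    return a + b * 20

# Left-to-right grouping, as a sequential fold computes it:
print(f(f(1, 2), 3))  # (1 + 2*20) + 3*20 = 101
# One grouping a parallel reducer may choose instead:
print(f(1, f(2, 3)))  # 1 + (2 + 3*20)*20 = 1241
```

With a genuinely associative and commutative function such as plain addition, both groupings agree, which is why ordinary sums reduce correctly in parallel.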
