How to force 32-bit floating point calculation consistency across different platforms?


Problem description

I have a simple piece of code that operates on floating point numbers: a few multiplications, divisions, exp() calls, subtractions and additions in a loop. When I run the same piece of code on different platforms (PC, Android phones, iPhones) I get slightly different results. The results are pretty much equal on all platforms but have a very small discrepancy - typically around 1/1000000 of the floating point value.

I suppose the reason is that some phones don't have floating point registers and just simulate those calculations with integers, while others do have floating point registers but different implementations. There is evidence of that here: http://christian-seiler.de/projekte/fpmath/

Is there a way to force all the platforms to produce consistent results? For example, a good and fast open-source library that implements floating point mechanics with integers (in software), so I can avoid hardware implementation differences.

The reason I need exact consistency is to avoid compound errors across layers of calculations. Currently those compound errors do produce significantly different results. In other words, I don't care so much which platform has the more correct result; I want to force consistency so I can reproduce identical behavior. For example, a bug discovered on a mobile phone is much easier to debug on a PC, but I need to reproduce that exact behavior.

Answer

32-bit floating point math will, for a given calculation, at best have a precision of 1 in 16777216 (1 in 2^24). Functions such as exp are often implemented as a sequence of calculations, so they may have a larger error due to this. If you do several calculations in a row, the errors add and multiply up. In general, float has about 6-7 significant digits of precision.

As one comment says, check that the rounding mode is the same. Most FPUs have a "round to nearest" (rtn), "round to zero" (rtz) and "round to even" (rte) mode that you can choose, and the default MAY vary between platforms.

If you add or subtract fairly small numbers and fairly large numbers, the smaller number has to be normalized to the larger one's exponent first, so these operations carry a greater error.

Normalized means shifted so that the decimal points of both numbers line up - just like on paper, where you fill in extra zeros to line up the two numbers you are adding - but of course on paper you can add 12419818.0 and 0.000000001 and end up with 12419818.000000001, because paper has as much precision as you can be bothered with. Doing this in float or double will result in the same number as before.

There are indeed libraries that do floating point math in software - the most popular being MPFR - but MPFR is a "multiprecision" library and will be fairly slow, because such libraries are not really built as a plugin replacement for float; they are tools for when you want to calculate pi with thousands of digits, or when you want to work with primes in ranges much larger than 64 or 128 bits, for example.

Using such a library MAY solve the problem, but it will be slow.

A better choice: moving from float to double should have a similar effect (double has 53 bits of mantissa, compared to the 23 in a 32-bit float, so more than twice as many bits in the mantissa), and double is still available as hardware instructions in any reasonably recent ARM processor, making it relatively fast, though not as fast as float. (An FPU is available from ARMv7 onwards - which is certainly what you find in an iPhone, at least from the iPhone 3, and in middle to high end Android devices. I managed to find that the Samsung Galaxy ACE has an ARM9 processor [first introduced in 1997], so it has no floating point hardware.)
