Why does table-based sin approximation literature always use this formula when another formula seems to make more sense?


Problem Description


The literature on computing the elementary function sin with tables refers to the formula:

sin(x) = sin(Cn) * cos(h) + cos(Cn) * sin(h)

where x = Cn + h, Cn is a constant for which sin(Cn) and cos(Cn) have been pre-computed and are available in a table, and, if following Gal's method, Cn has been chosen so that both sin(Cn) and cos(Cn) are closely approximated by floating-point numbers. The quantity h is close to 0.0. An example of reference to this formula is this article (page 7).

I don't understand why this makes sense: cos(h), however it is computed, will probably be wrong by at least 0.5 ULP for some values of h, and since it is close to 1.0, this seems to have a drastic effect on the accuracy of the result sin(x) when computed this way.

I do not understand why the formula below is not used instead:

sin(x) = sin(Cn) + (sin(Cn) * (cos(h) - 1.0) + cos(Cn) * sin(h))

The two quantities (cos(h) - 1.0) and sin(h) can then be approximated with polynomials that are easy to make accurate, as they produce results near zero. The values of sin(Cn) * (cos(h) - 1.0) and cos(Cn) * sin(h), as well as their sum, remain small; their absolute error is measured in ULPs of that small sum, so adding the sum to sin(Cn) yields a result that is almost correctly rounded.
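
To put rough numbers on this (a sketch, assuming round-to-nearest single precision, |h| <= 2^-7, and Cn ~ 0.018 as in the example below, so that sin(x) ~ sin(Cn)), compare the error contributed by rounding cos(h) alone against that of rounding cos(h) - 1.0:

  rounding cos(h) (a value near 1.0):         up to |sin(Cn)| * 2^-25, i.e. as much as 0.5 ULP of sin(x) (about 0.3 ULP for this Cn)
  rounding cos(h) - 1.0 (magnitude <= h^2/2): up to (h^2/2) * 2^-25 <= 2^-40, far below 1 ULP of sin(x)

So the first formula can lose up to half an ULP just to the representation of cos(h), before any other rounding error is counted, while the second loses essentially nothing there.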

Am I missing something that makes the earlier, popular, simpler formula behave well too? Do the writers take it for granted that the readers will understand that the first formula is actually implemented as the second formula?

EDIT: Example

A single-precision table to compute single-precision sinf() and cosf() might contain the following point in single-precision:

         f             |        cos f          |       sin f      
-----------------------+-----------------------+---------------------
0.017967 0x1.2660bcp-6 |    0x1.ffead8p-1      |    0x1.265caep-6
                       |    (actual value:)    |    (actual value:)
                       | ~0x1.ffead8000715dp-1 | ~0x1.265cae000e6f9p-6
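
This entry illustrates Gal's method: the true values agree with the stored single-precision numbers to roughly 35 bits (note the runs of zeros before ~000715d and ~000e6f9), about 11 bits more than ordinary rounding guarantees. A hypothetical brute-force sketch of how such a point could be found (double precision as the reference; not how any actual table was built):

#include <math.h>
#include <stdio.h>

int main(void)
{
  float  best = 0.0f;
  double best_gap = 1.0;

  /* Scan candidate floats near 0.018 and keep the one whose sin and cos
     are both closest, in relative terms, to single-precision numbers. */
  for (float c = 0.0175f; c < 0.0185f; c = nextafterf(c, 1.0f))
  {
    double s = sin((double)c), k = cos((double)c);
    double gap_s = fabs(s - (double)(float)s) / s;
    double gap_k = fabs(k - (double)(float)k) / k;
    double gap   = (gap_s > gap_k) ? gap_s : gap_k;  /* worse of the two */

    if (gap < best_gap) { best_gap = gap; best = c; }
  }

  printf("best Cn: %a (worst relative gap: %e)\n", best, best_gap);
  return 0;
}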

The following are specialized single-precision functions for use around 0.017967:

float sinf_trad(float x)
{
  float h = x - 0x1.2660bcp-6f;

  return 0x1.265caep-6f * cos_0(h) + 0x1.ffead8p-1f * sin_0(h);
}

float sinf_new(float x)
{
  float h = x - 0x1.2660bcp-6f;

  return 0x1.265caep-6f + (0x1.265caep-6f * cosm1_0(h) + 0x1.ffead8p-1f * sin_0(h));
}
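
The helpers cos_0, sin_0 and cosm1_0 are defined in the complete program and not shown here; hypothetical minimal stand-ins (truncated Taylor polynomials, accurate enough for the |h| <= 0.008 range exercised below, to be declared before the two functions above) could be:

static float sin_0(float h)   { return h - (h * h * h) / 6.0f; }  /* ~ sin(h) near 0     */
static float cos_0(float h)   { return 1.0f - (h * h) / 2.0f; }   /* ~ cos(h) near 0     */
static float cosm1_0(float h) { return -(h * h) / 2.0f; }         /* ~ cos(h) - 1 near 0 */

Note that cosm1_0 computes cos(h) - 1.0 directly: evaluating cos_0(h) - 1.0f instead would first round a value near 1.0 and thereby throw away most of the significant bits of the small difference.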

Testing these functions between 0.01f and 0.025f seems to show that the new formula gives more precise results:

$ gcc -std=c99 test.c && ./a.out 
relative error, traditional: 2.169624e-07, new: 1.288049e-07
sum of squares of absolute error, traditional: 6.616202e-12, new: 2.522784e-12

I took several shortcuts, so please look at the complete program.
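
For reference, a hypothetical driver in the same spirit (not the asker's actual program; it uses the stand-in helpers above, compares against double-precision sin, and reports the maximum relative error and the sum of squared absolute errors; link with -lm if your platform requires it):

#include <math.h>
#include <stdio.h>

int main(void)
{
  double max_rel_trad = 0.0, max_rel_new = 0.0;
  double sumsq_trad = 0.0, sumsq_new = 0.0;

  /* Walk every single-precision float in [0.01, 0.025]. */
  for (float x = 0.01f; x <= 0.025f; x = nextafterf(x, 1.0f))
  {
    double ref = sin((double)x);
    double err_trad = (double)sinf_trad(x) - ref;
    double err_new  = (double)sinf_new(x)  - ref;

    if (fabs(err_trad / ref) > max_rel_trad) max_rel_trad = fabs(err_trad / ref);
    if (fabs(err_new  / ref) > max_rel_new)  max_rel_new  = fabs(err_new  / ref);
    sumsq_trad += err_trad * err_trad;
    sumsq_new  += err_new  * err_new;
  }

  printf("relative error, traditional: %e, new: %e\n", max_rel_trad, max_rel_new);
  printf("sum of squares of absolute error, traditional: %e, new: %e\n",
         sumsq_trad, sumsq_new);
  return 0;
}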

Solution

Well, this formula is a start. Other transformations can then be done, depending on the context. I agree that if the formula sin(x) = sin(Cn) * cos(h) + cos(Cn) * sin(h) is applied in the target precision, then the rounding error of sin(Cn) * cos(h) is up to 1/2 ulp of the result, which is bad if the goal is an accurate result. However, some terms are sometimes expressed in greater precision by using pseudo-expansions. For instance, a number can be represented by a pair (a, b) where b is much smaller than a and whose value is regarded as a + b. In such a case, cos(h) can be represented by the pair (1, h'), and the computation becomes equivalent to what you suggest.
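
Concretely, carrying cos(h) as the pair (1, h') makes the first formula collapse into the asker's variant; a minimal sketch, reusing the table constants and the hypothetical cosm1_0/sin_0 stand-ins from the question's example:

float sinf_pair(float x)
{
  float h  = x - 0x1.2660bcp-6f;  /* x = Cn + h, Cn from the table above    */
  float hp = cosm1_0(h);          /* cos(h) carried as the pair (1, hp)     */

  /* sin(Cn)*cos(h) + cos(Cn)*sin(h)
       = sin(Cn)*1 + (sin(Cn)*hp + cos(Cn)*sin_0(h));
     the large term sin(Cn) comes straight from the table, and only the
     small correction terms are rounded before the final addition. */
  return 0x1.265caep-6f + (0x1.265caep-6f * hp + 0x1.ffead8p-1f * sin_0(h));
}

This is exactly sinf_new from the question, which is the point: the pair formulation and the suggested formula coincide.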

Alternatively, the implementation can be detailed once the formulas to evaluate cos(h) and sin(h) are given. See Section 3.1 in Stehlé and Zimmermann's paper you cited: they define C*(h) = C(h) − 1, and use C* in the final formula, which is basically what you suggest.

Note: I'm not sure that using the above formula is the best choice. One could start with sin(x) = sin(Cn) + error_term, and compute the error term in some other way.
