Best machine-optimized polynomial minimax approximation to arctangent on [-1,1]?


Question

For the simple and efficient implementation of fast math functions with reasonable accuracy, polynomial minimax approximations are often the method of choice. Minimax approximations are typically generated with a variant of the Remez algorithm. Various widely available tools such as Maple and Mathematica have built-in functionality for this. The generated coefficients are typically computed using high-precision arithmetic. It is well-known that simply rounding those coefficients to machine precision leads to suboptimal accuracy in the resulting implementation.

Instead, one searches for closely related sets of coefficients that are exactly representable as machine numbers to generate a machine-optimized approximation. Two relevant papers are:

Nicolas Brisebarre, Jean-Michel Muller, and Arnaud Tisserand, "Computing Machine-Efficient Polynomial Approximations", ACM Transactions on Mathematical Software, Vol. 32, No. 2, June 2006, pp. 236–256.

Nicolas Brisebarre and Sylvain Chevillard, "Efficient polynomial L∞-approximations", 18th IEEE Symposium on Computer Arithmetic (ARITH-18), Montpellier (France), June 2007, pp. 169–176.

An implementation of the LLL-algorithm from the latter paper is available as the fpminimax() command of the Sollya tool. It is my understanding that all algorithms proposed for the generation of machine-optimized approximations are based on heuristics, and that it is therefore generally unknown what accuracy can be achieved by an optimal approximation. It is not clear to me whether the availability of FMA (fused multiply-add) for the evaluation of the approximation has an influence on the answer to that question. It seems to me naively that it should.

I am currently looking at a simple polynomial approximation for arctangent on [-1,1] that is evaluated in IEEE-754 single-precision arithmetic, using the Horner scheme and FMA. See function atan_poly() in the C99 code below. For lack of access to a Linux machine at the moment, I did not use Sollya to generate these coefficients, but used my own heuristic that could be loosely described as a mixture of steepest descent and simulated annealing (to avoid getting stuck in local minima). The maximum error of my machine-optimized polynomial is very close to 1 ulp, but ideally I would like the maximum ulp error to be below 1 ulp.

I am aware that I could change my computation to increase the accuracy, for example by using a leading coefficient represented to more than single precision, but I would like to keep the code exactly as it is (that is, as simple as possible), adjusting only the coefficients to deliver the most accurate result possible.

A "proven" optimal set of coefficients would be ideal, pointers to relevant literature are welcome. I did a literature search but could not find any paper that advances the state of the art meaningfully beyond Sollya's fpminimax(), and none that examine the role of FMA (if any) in this issue.

// max ulp err = 1.03143
float atan_poly (float a)
{
    float r, s;
    s = a * a;
    r =              0x1.7ed1ccp-9f;
    r = fmaf (r, s, -0x1.0c2c08p-6f);
    r = fmaf (r, s,  0x1.61fdd0p-5f);
    r = fmaf (r, s, -0x1.3556b2p-4f);
    r = fmaf (r, s,  0x1.b4e128p-4f);
    r = fmaf (r, s, -0x1.230ad2p-3f);
    r = fmaf (r, s,  0x1.9978ecp-3f);
    r = fmaf (r, s, -0x1.5554dcp-2f);
    r = r * s;
    r = fmaf (r, a, a);
    return r;
}

// max ulp err = 1.52637
float my_atanf (float a)
{
    float r, t;
    t = fabsf (a);
    r = t;
    if (t > 1.0f) {
        r = 1.0f / r;
    }
    r = atan_poly (r);
    if (t > 1.0f) {
        r = fmaf (0x1.ddcb02p-1f, 0x1.aee9d6p+0f, -r); // pi/2 - r
    }
    r = copysignf (r, a);
    return r;
}

Solution

The following function is a faithfully-rounded implementation of arctan on [0, 1]:

float atan_poly (float a) {
  float s = a * a, u = fmaf(a, -a, 0x1.fde90cp-1f);
  float r1 =               0x1.74dfb6p-9f;
  float r2 = fmaf (r1, u,  0x1.3a1c7cp-8f);
  float r3 = fmaf (r2, s, -0x1.7f24b6p-7f);
  float r4 = fmaf (r3, u, -0x1.eb3900p-7f);
  float r5 = fmaf (r4, s,  0x1.1ab95ap-5f);
  float r6 = fmaf (r5, u,  0x1.80e87cp-5f);
  float r7 = fmaf (r6, s, -0x1.e71aa4p-4f);
  float r8 = fmaf (r7, u, -0x1.b81b44p-3f);
  float r9 = r8 * s;
  float r10 = fmaf (r9, a, a);
  return r10;
}

The following test harness will abort if the function atan_poly fails to be faithfully-rounded on [1e-16, 1] and print "success" otherwise:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int checkit(float f) {
  double d = atan(f);
  float d1 = d, d2 = d;
  if (d1 < d) d2 = nextafterf(d1, 1.0/0.0);
  else d1 = nextafterf(d1, -1.0/0.0);
  float p = atan_poly(f);
  if (p != d1 && p != d2) return 0;
  return 1;
}

int main() {
  for (float f = 1; f > 1e-16; f = nextafterf(f, -1.0/0.0)) {
    if (!checkit(f)) abort();
  }
  printf("success\n");
  exit(0);
}


The problem with using s in every multiplication is that the polynomial's coefficients do not decay rapidly. Inputs close to 1 result in lots and lots of cancellation of nearly equal numbers, meaning you're trying to find a set of coefficients so that the accumulated roundoff at the end of the computation closely approximates the residual of arctan.

The constant 0x1.fde90cp-1f is a number close to 1 for which (arctan(sqrt(x)) - x) / x^3 falls unusually close to a representable float. That is, it's a constant that goes into the computation of u so that the cubic coefficient is almost completely determined. (For this program, the cubic coefficient must be either -0x1.b81b44p-3f or -0x1.b81b42p-3f.)

Alternating multiplications by s and u reduces the effect of roundoff error in r_i upon r_{i+2} by a factor of at most 1/4, since s*u < 1/4 whatever a is. This gives considerable leeway in choosing the coefficients of fifth order and beyond.


I found the coefficients with the aid of two programs:

  • One program plugs in a bunch of test points, writes down a system of linear inequalities, and computes bounds on the coefficients from that system of inequalities. Notice that, given a, one can compute the range of r8 values that lead to a faithfully-rounded result. To get linear inequalities, I pretended r8 would be computed as a polynomial in the floats s and u in real-number arithmetic; the linear inequalities constrained this real-number r8 to lie in some interval. I used the Parma Polyhedra Library to handle these constraint systems.
  • Another program randomly tested sets of coefficients in certain ranges, plugging in first a set of test points and then all floats from 1 to 1e-8 in descending order and checking that atan_poly produces a faithful rounding of atan((double)x). If some x failed, it printed out that x and why it failed.

To get coefficients, I hacked this first program to fix c3, work out bounds on r7 for each test point, then get bounds on the higher-order coefficients. Then I hacked it to fix c3 and c5 and get bounds on the higher-order coefficients. I did this until I had all but the three highest-order coefficients, c13, c15, and c17.

I grew the set of test points in the second program until it either stopped printing anything out or printed out "success". I needed surprisingly few test points to reject almost all wrong polynomials; I count 85 test points in the program.


Here I show some of my work selecting the coefficients. In order to get a faithfully-rounded arctan for my initial set of test points assuming r1 through r8 are evaluated in real arithmetic (and rounded somehow unpleasantly but in a way I can't remember) but r9 and r10 are evaluated in float arithmetic, I need:

-0x1.b81b456625f15p-3 <= c3 <= -0x1.b81b416e22329p-3
-0x1.e71d48d9c2ca4p-4 <= c5 <= -0x1.e71783472f5d1p-4
0x1.80e063cb210f9p-5 <= c7 <= 0x1.80ed6efa0a369p-5
0x1.1a3925ea0c5a9p-5 <= c9 <= 0x1.1b3783f148ed8p-5
-0x1.ec6032f293143p-7 <= c11 <= -0x1.e928025d508p-7
-0x1.8c06e851e2255p-7 <= c13 <= -0x1.732b2d4677028p-7
0x1.2aff33d629371p-8 <= c15 <= 0x1.41e9bc01ae472p-8
0x1.1e22f3192fd1dp-9 <= c17 <= 0x1.d851520a087c2p-9

Taking c3 = -0x1.b81b44p-3, assuming r8 is also evaluated in float arithmetic:

-0x1.e71df05b5ad56p-4 <= c5 <= -0x1.e7175823ce2a4p-4
0x1.80df529dd8b18p-5 <= c7 <= 0x1.80f00e8da7f58p-5
0x1.1a283503e1a97p-5 <= c9 <= 0x1.1b5ca5beeeefep-5
-0x1.ed2c7cd87f889p-7 <= c11 <= -0x1.e8c17789776cdp-7
-0x1.90759e6defc62p-7 <= c13 <= -0x1.7045e66924732p-7
0x1.27eb51edf324p-8 <= c15 <= 0x1.47cda0bb1f365p-8
0x1.f6c6b51c50b54p-10 <= c17 <= 0x1.003a00ace9a79p-8

Taking c5 = -0x1.e71aa4p-4, assuming r7 is done in float arithmetic:

0x1.80e3dcc972cb3p-5 <= c7 <= 0x1.80ed1cf56977fp-5
0x1.1aa005ff6a6f4p-5 <= c9 <= 0x1.1afce9904742p-5
-0x1.ec7cf2464a893p-7 <= c11 <= -0x1.e9d6f7039db61p-7
-0x1.8a2304daefa26p-7 <= c13 <= -0x1.7a2456ddec8b2p-7
0x1.2e7b48f595544p-8 <= c15 <= 0x1.44437896b7049p-8
0x1.396f76c06de2ep-9 <= c17 <= 0x1.e3bedf4ed606dp-9

Taking c7 = 0x1.80e87cp-5, assuming r6 is done in float arithmetic:

0x1.1aa86d25bb64fp-5 <= c9 <= 0x1.1aca48cd5caabp-5
-0x1.eb6311f6c29dcp-7 <= c11 <= -0x1.eaedb032dfc0cp-7
-0x1.81438f115cbbp-7 <= c13 <= -0x1.7c9a106629f06p-7
0x1.36d433f81a012p-8 <= c15 <= 0x1.3babb57bb55bap-8
0x1.5cb14e1d4247dp-9 <= c17 <= 0x1.84f1151303aedp-9

Taking c9 = 0x1.1ab95ap-5, assuming r5 is done in float arithmetic:

-0x1.eb51a3b03781dp-7 <= c11 <= -0x1.eb21431536e0dp-7
-0x1.7fcd84700f7cfp-7 <= c13 <= -0x1.7ee38ee4beb65p-7
0x1.390fa00abaaabp-8 <= c15 <= 0x1.3b100a7f5d3cep-8
0x1.6ff147e1fdeb4p-9 <= c17 <= 0x1.7ebfed3ab5f9bp-9

I picked a point close to the middle of the range for c11 and randomly chose c13, c15, and c17.


EDIT: I've now automated this procedure. The following function is also a faithfully-rounded implementation of arctan on [0, 1]:

float c5 = 0x1.997a72p-3;
float c7 = -0x1.23176cp-3;
float c9 = 0x1.b523c8p-4;
float c11 = -0x1.358ff8p-4;
float c13 = 0x1.61c5c2p-5;
float c15 = -0x1.0b16e2p-6;
float c17 = 0x1.7b422p-9;

float juffa_poly (float a) {
  float s = a * a;
  float r1 =              c17;
  float r2 = fmaf (r1, s, c15);
  float r3 = fmaf (r2, s, c13);
  float r4 = fmaf (r3, s, c11);
  float r5 = fmaf (r4, s, c9);
  float r6 = fmaf (r5, s, c7);
  float r7 = fmaf (r6, s, c5);
  float r8 = fmaf (r7, s, -0x1.5554dap-2f);
  float r9 = r8 * s;
  float r10 = fmaf (r9, a, a);
  return r10;
}

I find it surprising that this code even exists. For coefficients near these, you can get a lower bound on the order of a few ulps on the distance between r10 and the value of the polynomial evaluated in real arithmetic, thanks to the slow convergence of this polynomial when s is near 1. I had expected roundoff error to behave in a way that was fundamentally "untamable" simply by means of tweaking coefficients.
