Is there any benefit of not using double on a 64bit (and using, say, float instead) processor?

Question

I always use double to do calculations, but double offers far more accuracy than I need (or than makes sense, considering that most of the calculations I do are approximations to begin with).

But since the processor is already 64-bit, I do not expect that using a type with fewer bits will be of any benefit.

Am I right or wrong? How would I optimize for speed? (I understand that smaller types would be more memory-efficient.)

Here is the test:

#include <cmath>
#include <ctime>
#include <cstdio>

// Allocate an m x n matrix as one contiguous block plus an array of row pointers.
template<typename T>
void creatematrix(int m, int n, T **&M){
    M = new T*[m];
    T *M_data = new T[m*n];

    for(int i = 0; i < m; ++i)
    {
        M[i] = M_data + i * n;
    }
}

int main(){
    clock_t start, end;
    double diffs;
    const int N = 4096;
    const int rep = 8;

    float **m1, **m2;
    creatematrix(N, N, m1);
    creatematrix(N, N, m2);

    // Fill the matrices so the timed loop works on defined values.
    for(int i = 0; i < N; i++){
        for(int j = 0; j < N; j++){
            m1[i][j] = 1.0f;
            m2[i][j] = 2.0f;
        }
    }

    // Time rep passes of a multiply-add followed by a square root.
    start = clock();
    for(int k = 0; k < rep; k++){
        for(int i = 0; i < N; i++){
            for(int j = 0; j < N; j++)
                m1[i][j] = sqrt(m1[i][j]*m2[i][j] + 0.1586);
        }
    }
    end = clock();
    diffs = (end - start) / (double)CLOCKS_PER_SEC;
    printf("time = %lf\n", diffs);

    // Each matrix was allocated as a row-pointer array plus one data block.
    delete[] m1[0];
    delete[] m1;

    delete[] m2[0];
    delete[] m2;

    getchar();
    return 0;
}

There was no time difference between double and float; however, when the square root is not used, float is twice as fast.
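
A side note on the test above: the literal 0.1586 is a double, so the product m1[i][j]*m2[i][j] is promoted to double and the square root is evaluated in double precision before the result is narrowed back to float on assignment. A variant of the inner loop that stays in single precision throughout would look like the sketch below (the helper name update_row is just for illustration; whether this changes the timing depends on the compiler and CPU).

#include <cmath>

// Single-precision variant of the timed update: the 'f' suffix keeps the
// literal a float, so std::sqrt resolves to the float overload and no
// intermediate promotion to double takes place.
void update_row(float *a, const float *b, int n)
{
    for (int j = 0; j < n; ++j)
        a[j] = std::sqrt(a[j] * b[j] + 0.1586f);
}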

Answer

There are a couple of ways floats can be faster:

  • Faster I/O: you only have half the bits to move between disk/memory/cache/registers.
  • Typically the only operations that are slower are square root and division. As an example, on Haswell a DIVSS (float division) takes 7 clock cycles, whereas a DIVSD (double division) takes 8-14 (source: Agner Fog's instruction tables).
  • If you can take advantage of SIMD instructions, then you can handle twice as many values per instruction (i.e. in a 128-bit SSE register you can operate on 4 floats, but only 2 doubles); see the SSE sketch after this list.
  • Special functions (log, sin) can use lower-degree polynomials: e.g. the openlibm implementation of log uses a degree-7 polynomial, whereas logf only needs degree 4.
  • If you need higher intermediate precision, you can simply promote float to double, whereas for a double you need either software double-double or the slower long double; see the accumulation sketch after this list.
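
To make the SIMD point concrete, here is a minimal sketch using SSE intrinsics (assuming an x86 target with SSE2 and 16-byte-aligned pointers; the function names are just for illustration): one 128-bit register multiplies four floats at a time but only two doubles.

#include <emmintrin.h>  // SSE2 intrinsics

// Multiply four floats with a single instruction; pointers must be 16-byte aligned.
void mul4_floats(const float *a, const float *b, float *out)
{
    _mm_store_ps(out, _mm_mul_ps(_mm_load_ps(a), _mm_load_ps(b)));  // 4 products
}

// The same register width only holds two doubles, so half the work per instruction.
void mul2_doubles(const double *a, const double *b, double *out)
{
    _mm_store_pd(out, _mm_mul_pd(_mm_load_pd(a), _mm_load_pd(b)));  // 2 products
}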
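
For the last point, a small sketch of using double as the wider intermediate type when the data itself is float (the function name sum_with_wide_accumulator is just for illustration):

#include <cstddef>

// Accumulate float inputs in a double so the running sum carries extra
// precision, then round back to float once at the end. For double inputs
// there is no equally cheap hardware type to promote to.
float sum_with_wide_accumulator(const float *x, std::size_t n)
{
    double acc = 0.0;                  // wider intermediate
    for (std::size_t i = 0; i < n; ++i)
        acc += x[i];                   // each float promotes to double for free
    return static_cast<float>(acc);    // single rounding back to float
}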

Note that these points hold for 32-bit architectures as well: unlike with integers, there is nothing particularly special about having the size of the floating-point format match your architecture; i.e., on most machines doubles are just as "native" as floats.
