lookup table vs runtime computation efficiency - C++


Question








My code requires continuously computing a value from the following function:

inline double f (double x) {
    return ( tanh( 3*(5-x)  ) *0.5 + 0.5);
}

Profiling indicates that this part of the program is where most of the time is spent. Since the program will run for weeks if not months, I would like to optimize this operation and am considering the use of a lookup table.

I know that the efficiency of a lookup table depends on the size of the table itself, and on the way it's designed. Currently I cannot use less than 100 MB and can use up to 2GB. Values between two points in the matrix will be linearly interpolated.

Would using a lookup table be faster than doing the computation? Also, would using an N-dimensional matrix be better than a 1-D std::vector and what is the threshold (if any) on the size of the table that should not be crossed?
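For concreteness, the 1-D interpolated table the question describes might look like the sketch below. The class name, the domain [0, 10], and the table size in the usage example are my assumptions, not from the question. Since f takes a single double, a 1-D std::vector is the natural layout; an N-dimensional matrix only pays off for functions of several variables.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a 1-D lookup table with linear interpolation,
// covering an assumed domain [xmin, xmax]. Queries outside the domain
// are clamped to the endpoint values.
class LookupTable {
public:
    LookupTable(double xmin, double xmax, std::size_t n)
        : xmin_(xmin), step_((xmax - xmin) / (n - 1)), table_(n) {
        for (std::size_t i = 0; i < n; ++i)
            table_[i] = f(xmin_ + i * step_);   // precompute once
    }

    double operator()(double x) const {
        double pos = (x - xmin_) / step_;       // fractional table index
        if (pos <= 0.0) return table_.front();  // clamp below the domain
        std::size_t i = static_cast<std::size_t>(pos);
        if (i >= table_.size() - 1) return table_.back();  // clamp above
        double frac = pos - i;                  // linear interpolation weight
        return table_[i] * (1.0 - frac) + table_[i + 1] * frac;
    }

private:
    static double f(double x) { return std::tanh(3 * (5 - x)) * 0.5 + 0.5; }

    double xmin_, step_;
    std::vector<double> table_;
};
```

A possible usage, with an assumed domain and size: `LookupTable lut(0.0, 10.0, 1 << 16); double y = lut(5.0);` — the interpolation error shrinks quadratically with the step, so accuracy requirements, not the 100 MB floor, should drive the size.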

Solution

I'm writing code that continuously needs to compute a value from a particular function. After some profiling, I discovered that this part of my program is where most of the time is spent.

So far, I'm not allowed to use less than 100 MB, and I can use up to 2 GB. Linear interpolation will be used for points between two points in the matrix.

If you have a huge lookup table (hundreds of MB, as you said) that does not fit in the cache, the memory lookup time will most likely be much higher than the calculation itself. RAM is very slow, especially when fetching from random locations in huge arrays.

Here is a synthetic test:

live demo

#include <boost/progress.hpp>  // progress_timer prints elapsed time on destruction
#include <iostream>
#include <ostream>
#include <vector>
#include <cmath>

using namespace boost;
using namespace std;

inline double calc(double x)
{
    return ( tanh( 3*(5-x) ) *0.5 + 0.5);
}

template<typename F>
void test(F &&f)
{
   progress_timer t;        // starts timing; prints the elapsed time when destroyed
   volatile double res;     // volatile keeps the compiler from optimizing the loop away
   for(unsigned i=0;i!=1<<26;++i)
      res = f(i);
   (void)res;
}

int main()
{
   const unsigned size = (1 << 26) + 1;
   vector<double> table(size);
   cout << "table size is " << 1.0*sizeof(double)*size/(1 << 20) << "MiB" << endl;
   cout << "calc ";
   test(calc);
   cout << "dummy lookup ";
   // (i << 12) % size scatters the accesses across the whole array,
   // defeating the cache; the values fetched are dummies, not real f(x)
   test([&](unsigned i){return table[(i << 12)%size];});
}

Output on my machine is:

table size is 512MiB
calc 0.52 s

dummy lookup 0.92 s
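A middle ground is worth noting (my addition, not part of the original answer): because tanh saturates, f(x) = tanh(3*(5-x))*0.5 + 0.5 is within roughly 1e-6 of 1.0 for x below about 2.7 and of 0.0 for x above about 7.3, so a table only needs to cover that narrow band. The sketch below uses an assumed 4096-entry table (32 KiB of doubles), which fits in a typical L1 data cache; the bounds, size, and function names are my choices.

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// The table covers only [XMIN, XMAX]; outside that band f is flat to ~1e-6,
// so we clamp to the limiting values 1.0 and 0.0 instead of storing them.
constexpr double XMIN = 2.7, XMAX = 7.3;
constexpr std::size_t N = 4096;   // 4096 doubles = 32 KiB: L1-cache sized

inline double f_exact(double x) { return std::tanh(3 * (5 - x)) * 0.5 + 0.5; }

std::array<double, N> make_table() {
    std::array<double, N> t{};
    const double step = (XMAX - XMIN) / (N - 1);
    for (std::size_t i = 0; i < N; ++i)
        t[i] = f_exact(XMIN + i * step);
    return t;
}

inline double f_lut(const std::array<double, N>& t, double x) {
    if (x <= XMIN) return 1.0;    // saturated region: error below ~1e-6
    if (x >= XMAX) return 0.0;
    const double pos = (x - XMIN) * (N - 1) / (XMAX - XMIN);
    const std::size_t i = static_cast<std::size_t>(pos);
    if (i >= N - 1) return t[N - 1];      // guard against rounding at the edge
    const double frac = pos - i;          // linear interpolation between neighbours
    return t[i] * (1.0 - frac) + t[i + 1] * frac;
}
```

Whether this actually beats the benchmarked tanh call depends on the access pattern and hardware, so it should be measured the same way as the test above; but unlike the hundreds-of-MB table, it at least stays cache-resident.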
