Computing time on Linux: granularity and precision


Problem Description

**********************Original edit**********************


I am using different kinds of clocks to get the time on Linux systems:

rdtsc, gettimeofday, clock_gettime

and have already read various questions like these:

But I am a little confused:


What is the difference between granularity, resolution, precision, and accuracy?


Granularity (or resolution or precision) and accuracy are not the same things (if I am right ...)

For example, when using "clock_gettime", the precision is 10 ms, as I get with:

struct timespec res;
clock_getres(CLOCK_REALTIME, &res);  /* res now holds the clock's reported resolution */
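A minimal, self-contained sketch of the same clock_getres() query (CLOCK_MONOTONIC is added here only for comparison; on kernels with high-resolution timers the reported resolution is typically 1 ns):

/* Sketch: print the resolution reported by clock_getres() for two common clocks. */
#include <cstdio>
#include <ctime>

int main() {
    timespec res;
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        std::printf("CLOCK_REALTIME  : %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        std::printf("CLOCK_MONOTONIC : %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    return 0;
}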

The granularity (which is defined as ticks per second) is 100 Hz (or 10 ms), as I get when executing:

long ticks_per_sec = sysconf(_SC_CLK_TCK);  /* needs <unistd.h>; typically 100 */
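As a side note, _SC_CLK_TCK is the kernel's clock-tick rate used for times()-style accounting, not the resolution of clock_gettime(). A tiny sketch that prints it:

/* Sketch: print the tick rate behind sysconf(_SC_CLK_TCK); 100 Hz is common. */
#include <cstdio>
#include <unistd.h>

int main() {
    long ticks_per_sec = sysconf(_SC_CLK_TCK);
    std::printf("_SC_CLK_TCK = %ld Hz (one tick = %.2f ms)\n",
                ticks_per_sec, 1000.0 / ticks_per_sec);
    return 0;
}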

Accuracy is in nanoseconds, as this code suggests:

struct timespec gettime_now;

clock_gettime(CLOCK_REALTIME, &gettime_now);
/* start_time holds the tv_nsec value captured earlier; tv_nsec alone wraps every second */
long time_difference = gettime_now.tv_nsec - start_time;
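Because subtracting only tv_nsec wraps around every second, here is a sketch of an interval measurement that combines tv_sec and tv_nsec (CLOCK_MONOTONIC is used since it is not affected by wall-clock adjustments; the timed section is a placeholder):

/* Sketch: elapsed-time measurement with clock_gettime(), correct across second
   boundaries because tv_sec and tv_nsec are combined into one nanosecond count. */
#include <cstdio>
#include <ctime>

int main() {
    timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... code to be timed goes here ... */

    clock_gettime(CLOCK_MONOTONIC, &end);
    long long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
    std::printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}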

In the link below, I saw that this is the global Linux definition of granularity, and that it's better not to change it:

http://wwwagss.informatik.uni-kl.de/Projekte/Squirrel/da/node5.html#fig:clock:hw

So my question is whether the remarks above are right, and also:

a) Can we see (with a command) what the granularity of rdtsc and gettimeofday is? (See the sketch after question b.)

b) Can we change them (in any way)?
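Regarding a), one rough, non-authoritative way to probe effective granularity is empirical: call the clock in a tight loop and record the smallest nonzero step it ever reports. A sketch for gettimeofday() (the same idea works for clock_gettime() or rdtsc; the loop count is arbitrary):

/* Sketch: estimate the smallest step gettimeofday() ever reports.
   This probes effective granularity only; it is not an authoritative figure. */
#include <cstdio>
#include <sys/time.h>

int main() {
    timeval prev, cur;
    gettimeofday(&prev, nullptr);
    long min_step_us = 1000000;                    /* start from 1 s as an upper bound */

    for (int i = 0; i < 1000000; ++i) {
        gettimeofday(&cur, nullptr);
        long step = (cur.tv_sec - prev.tv_sec) * 1000000L
                  + (cur.tv_usec - prev.tv_usec);
        if (step > 0 && step < min_step_us)
            min_step_us = step;
        prev = cur;
    }
    std::printf("smallest observed step: %ld us\n", min_step_us);
    return 0;
}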


**********************Edit number 2**********************

I have tested some new clocks and I would like to share the information:

a) On the page below, David Terei wrote a fine program that compares various clocks and their performance:

https://github.com/dterei/Scraps/tree/master/c/time

b) I have also tested omp_get_wtime, as Raxman suggested, and I found nanosecond precision, but not really better than "clock_gettime" (as they did on this website):

http://msdn.microsoft.com/en-us/library/t3282fe5.aspx

I think it's a Windows-oriented time function.
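For reference, a sketch of what an omp_get_wtime() measurement looks like; its resolution can be queried with omp_get_wtick(), and the code assumes compilation with OpenMP enabled (e.g. -fopenmp). The workload and the name sink are purely illustrative:

/* Sketch: wall-clock timing with OpenMP. omp_get_wtick() returns the timer
   resolution in seconds per tick. Build with: g++ -fopenmp wtime_demo.cpp */
#include <cstdio>
#include <omp.h>

int main() {
    std::printf("omp_get_wtick() = %g s per tick\n", omp_get_wtick());

    double t0 = omp_get_wtime();
    volatile double sink = 0.0;
    for (int i = 0; i < 1000000; ++i)
        sink += i * 0.5;                           /* dummy work to time */
    double t1 = omp_get_wtime();

    std::printf("elapsed: %g s\n", t1 - t0);
    return 0;
}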

Better results are given by clock_gettime with CLOCK_MONOTONIC than with CLOCK_REALTIME. That makes sense, because CLOCK_MONOTONIC measures elapsed time from a fixed starting point and is not affected by adjustments to the system clock, while CLOCK_REALTIME tracks wall-clock (REAL) time, which can jump.

c) I have also found the Intel function ippGetCpuClocks, but I've not tested it because it's mandatory to register first:

http://software.intel.com/en-us/articles/ipp-downloads-registration-and-licensing/

... or you may use a trial version

Solution

  • Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)

  • Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)

  • Granularity or resolution is about the smallest time interval that the timer can measure. For example, if you have 1 ms granularity, there's little point reporting the result with nanosecond precision, since it cannot possibly be accurate to that level of precision.

On Linux, the available timers with increasing granularity are (see the comparison sketch after the list):

  • clock() from <time.h> (20 ms or 10 ms resolution?)

  • gettimeofday() from Posix <sys/time.h> (microseconds)

  • clock_gettime() on Posix (nanoseconds?)
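A minimal sketch that times the same dummy loop with the three interfaces above, printing each result in its native unit so the differing granularity is visible (note that clock() measures CPU time rather than wall time; the workload and names are illustrative):

/* Sketch: one dummy workload timed with clock(), gettimeofday() and
   clock_gettime(). clock() reports CPU time; the other two report wall time. */
#include <cstdio>
#include <ctime>
#include <sys/time.h>

int main() {
    clock_t c0 = clock();
    timeval tv0;
    gettimeofday(&tv0, nullptr);
    timespec ts0;
    clock_gettime(CLOCK_MONOTONIC, &ts0);

    volatile double sink = 0.0;
    for (int i = 0; i < 5000000; ++i)
        sink += i;                                 /* dummy work */

    clock_t c1 = clock();
    timeval tv1;
    gettimeofday(&tv1, nullptr);
    timespec ts1;
    clock_gettime(CLOCK_MONOTONIC, &ts1);

    std::printf("clock():         %.6f s (CLOCKS_PER_SEC = %ld)\n",
                (double)(c1 - c0) / CLOCKS_PER_SEC, (long)CLOCKS_PER_SEC);
    std::printf("gettimeofday():  %ld us\n",
                (long)((tv1.tv_sec - tv0.tv_sec) * 1000000L + (tv1.tv_usec - tv0.tv_usec)));
    std::printf("clock_gettime(): %lld ns\n",
                (long long)(ts1.tv_sec - ts0.tv_sec) * 1000000000LL
                + (ts1.tv_nsec - ts0.tv_nsec));
    return 0;
}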

In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock attempts to give you the best possible clock.
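For example, a minimal C++11 sketch using <chrono>; std::chrono::steady_clock is used here, since std::high_resolution_clock is usually just an alias for one of the other standard clocks, depending on the library implementation:

/* Sketch: portable interval measurement with <chrono>. The clock's nominal tick
   period is available at compile time through its period member. */
#include <chrono>
#include <cstdio>

int main() {
    auto t0 = std::chrono::steady_clock::now();

    volatile double sink = 0.0;
    for (int i = 0; i < 5000000; ++i)
        sink += i;                                 /* dummy work */

    auto t1 = std::chrono::steady_clock::now();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("elapsed: %lld ns\n", (long long)ns);

    std::printf("steady_clock period: %lld / %lld s\n",
                (long long)std::chrono::steady_clock::period::num,
                (long long)std::chrono::steady_clock::period::den);
    return 0;
}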
