How to create a high resolution timer in Linux to measure program performance?
Question
I'm trying to compare GPU to CPU performance. For the NVIDIA GPU I've been using the cudaEvent_t types to get a very precise timing.
For the CPU I've been using the following code:
#include <time.h>  // clock(), clock_t, CLOCKS_PER_SEC

// Timers
clock_t start, stop;
float elapsedTime = 0;
// Capture the start time
start = clock();
// Do something here
.......
// Capture the stop time
stop = clock();
// Retrieve time elapsed in milliseconds
elapsedTime = (float)(stop - start) / (float)CLOCKS_PER_SEC * 1000.0f;
Apparently, that piece of code is only good if you're counting in seconds. Also, the results sometimes come out quite strange.
Does anyone know of some way to create a high resolution timer in Linux?
Answer
Check out clock_gettime (http://www.kernel.org/doc/man-pages/online/pages/man2/clock_gettime.2.html), which is a POSIX interface to high-resolution timers.
If, having read the manpage, you're left wondering about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC, see "Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?" (http://stackoverflow.com/questions/3523442/difference-between-clock-realtime-and-clock-monotonic).
See the following page for a complete example: http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/