How to use C time in Linux to print a function's running time?

Question
When I run this C code on Linux, it never prints the elapsed time, and the result is always 0. The code is as follows:
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void main(int argc, char* argv[]) {
    int n;
    if (argc == 2) {
        n = atoi(argv[1]);
    }
    struct timeval start, end;
    gettimeofday(&start, 0);
    int r = fib(n);
    gettimeofday(&end, 0);
    long mtime, s, us;
    s = end.tv_sec - start.tv_sec;
    us = end.tv_usec - start.tv_usec;
    printf("s=%f,us=%f \n", s, us);
    mtime = (s*1000 + us/1000.0) + 0.5;
    printf("Fib result for %d is: %d;elapsing %f \n", n, r, mtime);
}

int fib(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n-1) + fib(n-2);
}
All the suggestions do in fact work, but the granularity of the time measurement is large (typically 10 to 100 milliseconds), so they only measure something meaningful for a computation that lasts, e.g., half a second. On current processors (running at 2 to 3 GHz, with about 3-5 instructions per cycle), that means something like a billion machine instructions executed (an "elementary step" in our C program, an admittedly ill-defined notion, is usually a dozen machine instructions). So your test is too small; you really should compute Fibonacci(10) a million times.
To be more specific, the program below (where some of the computed values are printed, to keep the compiler from optimizing them all away) runs in about 2 seconds, for one million computations of Fibonacci of something less than 16.
#include <stdio.h>
#include <unistd.h>
#include <time.h>

long fib(int n) {
    if (n == 0) return 0;
    if (n == 1) return 1;
    return fib(n-1) + fib(n-2);
}

int main() {
    int i = 0;
    int p = (int) getpid();
    clock_t cstart = clock();
    clock_t cend = 0;
    for (i = 0; i < 1000000; i++) {
        long f = fib(i % 16);
        if (i % p == 0) printf("i=%d, f=%ld\n", i, f);
    }
    cend = clock();
    /* the 1.0e-6 factor assumes CLOCKS_PER_SEC == 1000000, as POSIX requires */
    printf("%.3f cpu sec\n", ((double)cend - (double)cstart) * 1.0e-6);
    return 0;
}
The last few lines of output from time ./fib (compiled with gcc -O2 -Wall fib.c -o fib) are:
i=936079, f=610
i=948902, f=8
i=961725, f=233
i=974548, f=3
i=987371, f=89
2.140 cpu sec
./fib 2.15s user 0.00s system 99% cpu 2.152 total
Benchmarking a run shorter than about a second is not very meaningful (and you can use the time command to measure such a run).
See also time(7) and clock_gettime(2).