Code execution time cost


Problem description


Hello everyone,

I want to know how to calculate the execution time of ANSI C code instructions. I am using a microcontroller; the crystal frequency is 20 MHz.

Thanks in advance,
z3ngew

Recommended answer

You can take the reference manual for the microcontroller and retrieve the duration of every instruction involved, which is usually specified in processor "ticks". Calculate the duration of one tick from the clock frequency used, then sum it all together.
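
As a small worked example of this approach (a sketch only: it assumes the CPU really runs at the 20 MHz crystal frequency, and the cycle counts below are made up; take the real ones from your processor's instruction-set manual):

/* One tick at 20 MHz = 1 / 20,000,000 s = 50 ns.                      */
#define F_CPU_HZ   20000000UL
#define TICK_NS    (1000000000UL / F_CPU_HZ)           /* 50 ns         */

/* Hypothetical cycle counts for a short instruction sequence.          */
#define CYCLES_LOAD    2UL
#define CYCLES_ADD     1UL
#define CYCLES_STORE   2UL

/* Total = (2 + 1 + 2) ticks * 50 ns/tick = 250 ns.                     */
static const unsigned long sequence_time_ns =
    (CYCLES_LOAD + CYCLES_ADD + CYCLES_STORE) * TICK_NS;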

Another approach is to use a real-time clock (if it is available; usually it is): create a relatively long-running piece of code by repeating your code fragment a considerable number of times, to reduce measurement errors stemming from the fact that the contribution of the time-measurement process itself cannot be estimated very accurately; then measure the elapsed time and divide by the number of repetitions. Such experiments, if conducted carefully, usually give good accuracy.
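
A minimal sketch of this repetition approach, assuming a hypothetical function read_timer_ticks() that returns the value of a free-running hardware timer (on a real part this would be your MCU's timer/counter register or the vendor's RTC read routine):

#define REPETITIONS 1000UL

/* Hypothetical: returns the current value of a free-running hardware
   timer, in timer ticks; replace with your MCU's timer register or
   the vendor's RTC read routine.                                       */
extern unsigned long read_timer_ticks(void);

/* The code fragment whose execution time you want to measure.          */
extern void code_under_test(void);

unsigned long average_ticks_per_run(void)
{
    unsigned long start, end, i;

    start = read_timer_ticks();
    for (i = 0; i < REPETITIONS; i++) {
        code_under_test();
    }
    end = read_timer_ticks();

    /* Dividing by the repetition count shrinks the relative error
       contributed by the two timer reads; multiply the result by the
       timer's tick period to convert it to seconds.                    */
    return (end - start) / REPETITIONS;
}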

Some validity notes:

Of course, such measurements are only accurate enough when you are using a non-multithreaded system, or a multithreaded but truly real-time one, where execution is not interrupted by hardware interrupts and there are no page faults from virtual memory; in other words, in simple systems. Even then, if you use the calculation approach, you get the "pure" execution time. The actually observed time can be slightly longer, because in all systems you typically still experience at least one kind of hardware interrupt: the system timer.

—SA


Sergey's answer is the best; I just wanted to add a couple of notes that are longer than what would fit in a comment...

First, the crystal frequency does not really dictate the processor clock cycle time, due to hardware multipliers and other factors. Processors can execute more than one clock cycle per crystal oscillation. The number of clocks per oscillation can be calculated from the hardware and software multipliers, which are typically set up in the initialization routine.
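
For instance (hypothetical numbers, since the multiplier depends entirely on how the clock module is configured at init time):

/* A 20 MHz crystal feeding a 4x PLL gives an 80 MHz core clock, so
   one CPU cycle is 12.5 ns rather than the 50 ns the crystal alone
   would suggest. The multiplier value here is made up.                 */
#define F_CRYSTAL_HZ    20000000UL
#define PLL_MULTIPLIER  4UL
#define F_CORE_HZ       (F_CRYSTAL_HZ * PLL_MULTIPLIER)   /* 80 MHz     */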

Second, the only way to "calculate" the number of clock cycles an operation takes is to compile the software and analyze the generated machine code. Some toolchains can produce files containing the human-readable assembly before going to ELF or S32; usually this is a build option you have to set. You can open such a file and use the processor manual to count the number of cycles for each operation. This still isn't 100% accurate, because memory fetches take clock cycles and, as Sergey said, so do interrupts if they are enabled.
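
A purely hypothetical illustration of what that hand tally looks like once you have the listing (the mnemonics and cycle counts are made up; use the ones from your own processor manual):

/* C statement:        listing (hypothetical):       cycles:
 *
 *   x = x + y;   ->   LD   r16, x                      2
 *                     LD   r17, y                      2
 *                     ADD  r16, r17                    1
 *                     ST   x, r16                      2
 *                                            total:    7
 *
 * At 50 ns per cycle (if the core really ran at 20 MHz) that statement
 * would cost about 350 ns.                                             */
static unsigned int x, y;

void add_example(void)
{
    x = x + y;
}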

So, as Sergey put it pretty clearly above, the best way to get an accurate execution time is to use a real-time clock (although an RTC is not available on a lot of processors, you can use a timer in its place to get an elapsed time rather than a wall-clock time) and measure the time it takes. This is best done by averaging a number of runs of the code, for example the time it takes to do 1000 loops.

If you want to do more research, look up how to do Dhrystone and Whetstone benchmarking for processors.

