How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?


Question


Wall clock time is usually provided by the system's RTC. This mostly only provides times down to the millisecond range and typically has a granularity of 10-20 milliseconds. However the resolution/granularity of gettimeofday() is often reported to be in the few-microseconds range. I assume the microsecond granularity must be taken from a different source.

How is the microsecond resolution/granularity of gettimeofday() accomplished?

When the part down to the millisecond is taken from the RTC and the microseconds are taken from different hardware, a problem with the phasing of the two sources arises. The two sources have to be synchronized somehow.

How is the synchronization/phasing between these two sources accomplished?

Edit: From what I've read in the links provided by amdn, particularly the following Intel link, I would add a question here:

Does gettimeofday() provide resolution/granularity in the microsecond regime at all?
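
One way to probe this empirically is to call gettimeofday() back to back and record the smallest nonzero step between readings; a minimal sketch (the value observed depends on the hardware and on which clocksource the kernel selected):

#include <stdio.h>
#include <sys/time.h>

/* Measure the smallest nonzero step gettimeofday() reports.
 * This observes granularity, not accuracy: a small step only
 * means the counter advances finely, not that the time is in
 * phase with any external source. */
int main(void)
{
    struct timeval prev, cur;
    long min_usec = 1000000;

    gettimeofday(&prev, NULL);
    for (int i = 0; i < 1000000; i++) {
        gettimeofday(&cur, NULL);
        long d = (cur.tv_sec - prev.tv_sec) * 1000000L
               + (cur.tv_usec - prev.tv_usec);
        if (d > 0 && d < min_usec)
            min_usec = d;
        prev = cur;
    }
    printf("smallest observed step: %ld us\n", min_usec);
    return 0;
}

On a TSC-backed system this typically prints 1 us, the finest step a struct timeval can express.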


Edit 2: Summarizing amdn's answer along with some further reading:

Linux uses the realtime clock (RTC) only at boot time, to synchronize with a higher resolution counter, e.g. the Time Stamp Counter (TSC). After boot, gettimeofday() returns a time which is entirely based on the TSC value and the frequency of this counter. The initial value of the TSC frequency is corrected/calibrated by comparing the system time to an external time source. The adjustment is done/configured via the adjtimex() function. The kernel operates a phase-locked loop to ensure that the time results are monotonic and consistent.
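
The state of that calibration loop can be inspected from user space: a read-only adjtimex() call (modes = 0) reports the kernel's current frequency correction and error estimate without changing anything. A minimal sketch:

#include <stdio.h>
#include <sys/timex.h>

/* Query the kernel clock discipline state without modifying it.
 * tx.freq is the frequency correction in ppm scaled by 2^16;
 * the return value encodes the clock state (TIME_OK, ...). */
int main(void)
{
    struct timex tx = { .modes = 0 };   /* read-only query */
    int state = adjtimex(&tx);
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("clock state: %d\n", state);
    printf("freq offset: %.3f ppm\n", tx.freq / 65536.0);
    printf("est. error:  %ld us\n", tx.esterror);
    return 0;
}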

This way it can be stated that gettimeofday() has microsecond resolution. Taking into account that more modern Time Stamp Counters run in the GHz regime, the obtainable resolution could be in the nanosecond regime. Hence this meaningful comment

/**
 * do_gettimeofday - Returns the time of day in a timeval
 * @tv:         pointer to the timeval to be set
 *
 * NOTE: Users should be converted to using getnstimeofday()
 */

can be found in Linux/kernel/time/timekeeping.c. This suggests that an even higher resolution function may become available at a later point in time. Right now getnstimeofday() is only available in kernel space.
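
In user space the nanosecond-resolution counterpart is clock_gettime(), which fills a struct timespec rather than a struct timeval; a minimal sketch:

#include <stdio.h>
#include <time.h>

/* clock_gettime() is the user-space path to nanosecond
 * resolution; CLOCK_REALTIME tracks the same wall-clock time
 * that gettimeofday() reports, but in a struct timespec. */
int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) == -1) {
        perror("clock_gettime");
        return 1;
    }
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}

(Older glibc versions require linking with -lrt.)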

However, looking through all the code involved in getting this right shows quite a few comments about uncertainties. It may be possible to obtain microsecond resolution. The function gettimeofday() may even show a granularity in the microsecond regime. But: there are severe doubts about its accuracy, because the drift of the TSC frequency cannot be accurately corrected for. The complexity of the code dealing with this matter inside Linux also hints that it is in fact very hard to get right. This is caused particularly, but not solely, by the huge number of hardware platforms Linux is supposed to run on.

Result: gettimeofday() returns monotonic time with microsecond granularity, but the time it provides is almost never in phase to within one microsecond with any other time source.

Solution

How is the microsecond resolution/granularity of gettimeofday() accomplished?

Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which is driven by a multiple of a crystal oscillator running at 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it by some multiple - for example on a 2.93 GHz processor the multiple is 22. The TSC historically was an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiple wasn't constant as the processor shifted multipliers to change performance states or throttle down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high resolution clock and the Linux kernel determines an initial approximate frequency at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.
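
To make the arithmetic concrete: 22 x 133.33 MHz is roughly 2.93 GHz, i.e. one TSC tick is about 0.34 ns. On x86 the counter can be read directly with the rdtsc instruction; a minimal sketch, assuming a fixed 2.93 GHz TSC frequency taken from the example above (the instruction does not report its frequency - real code must calibrate it, as the kernel does at boot):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

/* ASSUMPTION: a fixed 2.93 GHz TSC, matching the example
 * figure above; only valid on a constant/invariant TSC. */
#define TSC_HZ 2.93e9

int main(void)
{
    uint64_t t0 = __rdtsc();
    /* ... code being timed ... */
    uint64_t t1 = __rdtsc();
    printf("elapsed: %.1f ns (%llu ticks)\n",
           (t1 - t0) / TSC_HZ * 1e9,
           (unsigned long long)(t1 - t0));
    return 0;
}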

How is this synchronization accomplished?

Your first question was about how the Linux clock provides high resolution; this second question is about synchronization. This is the distinction between precision and accuracy. Most systems have a clock that is backed up by battery to keep time of day when the system is powered down. As you might expect this clock doesn't have high accuracy or precision, but it will get the time of day "in the ballpark" when the system starts. To get accuracy most systems use an optional component to get time from an external source on the network. Two common ones are

  1. Network Time Protocol
  2. Precision Time Protocol

These protocols define a master clock on the network (or a tier of clocks sourced by an atomic clock) and then measure network latencies to estimate offsets from the master clock. Once the offset from the master is determined the system clock is disciplined to keep it accurate. This can be done by one of two methods, contrasted in the sketch after this list:

  1. Stepping the clock (a relatively large, abrupt, and infrequent time adjustment), or
  2. Slewing the clock (adjusting the clock frequency, slowly increasing or decreasing it over a given time period, so that the offset is absorbed gradually)
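
Both methods have user-space entry points: settimeofday() steps the clock, while the BSD-style adjtime() asks the kernel to slew it. A minimal sketch contrasting the two (both calls require root, and the 500 ms correction is an arbitrary example value; don't run this on a machine whose clock matters):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval delta = { .tv_sec = 0, .tv_usec = 500000 };

    /* Slewing: the kernel gradually runs the clock faster until
     * the 500 ms has been absorbed; time stays monotonic. */
    if (adjtime(&delta, NULL) == -1)
        perror("adjtime (slew)");

    /* Stepping: jump the clock by the same amount at once.
     * Abrupt, and a negative correction moves visible time
     * backwards. */
    struct timeval now;
    gettimeofday(&now, NULL);
    now.tv_usec += 500000;
    if (now.tv_usec >= 1000000) { now.tv_sec++; now.tv_usec -= 1000000; }
    if (settimeofday(&now, NULL) == -1)
        perror("settimeofday (step)");
    return 0;
}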

The kernel provides the adjtimex system call to allow clock disciplining. For details on how modern Intel multi-core processors keep the TSC synchronized between cores see CPU TSC fetch operation especially in multicore-multi-processor environment.

The relevant kernel source files for clock adjustments are kernel/time.c and kernel/time/timekeeping.c.
