How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?


Problem Description


Wall clock time is usually provided by the system's RTC. This mostly only provides times down to the millisecond range and typically has a granularity of 10-20 milliseconds. However, the resolution/granularity of gettimeofday() is often reported to be in the few-microseconds range. I assume the microsecond granularity must be taken from a different source.

How is the microsecond resolution/granularity of gettimeofday() accomplished?

When the part down to the millisecond is taken from the RTC and the microseconds are taken from different hardware, a problem with the phasing of the two sources arises. The two sources have to be synchronized somehow.

How is the synchronization/phasing between these two sources accomplished?

Edit: From what I've read in the links provided by amdn, particularly the following Intel link, I would add a question here:

Does gettimeofday() provide resolution/granularity in the microsecond regime at all?


Edit 2: Summarizing amdn's answer with some more results of reading:

Linux only uses the real-time clock (RTC) at boot time to synchronize with a higher-resolution counter, e.g. the time stamp counter (TSC). After boot, gettimeofday() returns a time which is entirely based on the TSC value and the frequency of this counter. The initial value of the TSC frequency is corrected/calibrated by comparing the system time to an external time source. The adjustment is done/configured by the adjtimex() function. The kernel operates a phase-locked loop to ensure that the time results are monotonic and consistent.

This way it can be stated that gettimeofday() has microsecond resolution. Taking into account that more modern time stamp counters run in the GHz regime, the obtainable resolution could be in the nanosecond regime. Therefore this meaningful comment

/**
407  * do_gettimeofday - Returns the time of day in a timeval
408  * @tv:         pointer to the timeval to be set
409  *
410  * NOTE: Users should be converted to using getnstimeofday()
411  */

can be found in Linux/kernel/time/timekeeping.c. This suggests that an even higher-resolution function may become available at a later point in time. Right now getnstimeofday() is only available in kernel space.

However, looking through all the code involved in getting this about right shows quite a few comments about uncertainties. It may be possible to obtain microsecond resolution. The function gettimeofday() may even show a granularity in the microsecond regime. But: there are severe doubts about its accuracy, because the drift of the TSC frequency cannot be accurately corrected for. Also, the complexity of the code dealing with this matter inside Linux hints that it is in fact difficult to get right. This is particularly, but not solely, caused by the huge number of hardware platforms Linux is supposed to run on.

Result: gettimeofday() returns monotonic time with microsecond granularity, but the time it provides is almost never in phase to within one microsecond of any other time source.

Solution

How is the microsecond resolution/granularity of gettimeofday() accomplished?

Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which runs at a multiple of the frequency of a crystal oscillator, e.g. 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it by some multiple - for example, on a 2.93 GHz processor the multiple is 22. The TSC historically was an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiple wasn't constant as the processor shifted multipliers to change performance states or throttled down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high-resolution clock, and the Linux kernel determines an initial approximate frequency at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.

How is this synchronization accomplished?

Your first question was about how the Linux clock provides high resolution; this second question is about synchronization. This is the distinction between precision and accuracy. Most systems have a clock that is backed up by battery to keep the time of day when the system is powered down. As you might expect, this clock doesn't have high accuracy or precision, but it will get the time of day "in the ballpark" when the system starts. To get accuracy, most systems use an optional component to get time from an external source on the network. Two common ones are

  1. Network Time Protocol
  2. Precision Time Protocol

These protocols define a master clock on the network (or a tier of clocks sourced by an atomic clock) and then measure network latencies to estimate offsets from the master clock. Once the offset from the master is determined the system clock is disciplined to keep it accurate. This can be done by

  1. Stepping the clock (a relatively large, abrupt, and infrequent time adjustment), or
  2. Slewing the clock (adjusting the clock frequency so that it slowly speeds up or slows down over a given time period)

The kernel provides the adjtimex system call to allow clock disciplining. For details on how modern Intel multi-core processors keep the TSC synchronized between cores see CPU TSC fetch operation especially in multicore-multi-processor environment.

The relevant kernel source files for clock adjustments are kernel/time.c and kernel/time/timekeeping.c.
