Why do System.nanoTime() and System.currentTimeMillis() drift apart so rapidly?

Question

For diagnostic purposes, I want to be able to detect changes in the system time-of-day clock in a long-running server application. Since System.currentTimeMillis() is based on wall clock time and System.nanoTime() is based on a system timer that is independent(*) of wall clock time, I thought I could use changes in the difference between these values to detect system time changes.

I wrote up a quick test app to see how stable the difference between these values is, and to my surprise the values diverge immediately for me at the level of several milliseconds per second. A few times I saw much faster divergences. This is on a Win7 64-bit desktop with Java 6. I haven't tried this test program below under Linux (or Solaris or MacOS) to see how it performs. For some runs of this app, the divergence is positive, for some runs it is negative. It appears to depend on what else the desktop is doing, but it's hard to say.

public class TimeTest {
  private static final int ONE_MILLION  = 1000000;
  private static final int HALF_MILLION =  499999;

  public static void main(String[] args) {
    long start = System.nanoTime();
    // Baseline offset between the wall clock and the monotonic timer,
    // both expressed in milliseconds.
    long base = System.currentTimeMillis() - (start / ONE_MILLION);

    while (true) {
      try {
        Thread.sleep(1000);
      } catch (InterruptedException e) {
        // Don't care if we're interrupted
      }
      long now = System.nanoTime();
      // How far the offset has moved since startup; nonzero means the two
      // clocks are running at different rates (or one of them was stepped).
      long drift = System.currentTimeMillis() - (now / ONE_MILLION) - base;
      // Elapsed time by the nanoTime() clock, rounded to whole milliseconds.
      long interval = (now - start + HALF_MILLION) / ONE_MILLION;
      System.out.println("Clock drift " + drift + " ms after " + interval
                         + " ms = " + (drift * 1000 / interval) + " ms/s");
    }
  }
}

Inaccuracies with the Thread.sleep() time, as well as interruptions, should be entirely irrelevant to timer drift.

Both of these Java "System" calls are intended to be used as measurements -- one to measure differences in wall-clock time and the other to measure absolute intervals -- so when the real-time clock is not being changed, these values should change at very nearly the same rate, right? Is this a bug, a weakness, or a failure in Java? Is there something in the OS or hardware that prevents Java from being more accurate?
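
To illustrate the distinction I mean, here is a minimal sketch of the two intended roles, using nothing beyond the standard library (the class name is mine, purely illustrative):

import java.util.Date;

public class IntendedUse {
  public static void main(String[] args) throws InterruptedException {
    // Interval measurement: only differences of nanoTime() are meaningful;
    // its origin is arbitrary and it is unaffected by setting the clock.
    long t0 = System.nanoTime();
    Thread.sleep(250);
    long elapsedMs = (System.nanoTime() - t0) / 1000000;
    System.out.println("Elapsed: " + elapsedMs + " ms");

    // Wall-clock reading: an absolute, epoch-based timestamp that the OS
    // (or NTP) may step or slew at any time.
    System.out.println("Now: " + new Date(System.currentTimeMillis()));
  }
}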

I fully expect some drift and jitter(**) between these independent measurements, but I expected well under a minute per day of drift. Drift of 1 msec per second, if monotonic, is almost 90 seconds per day! My worst-case observed drift was perhaps ten times that. Every time I run this program, I see drift on the very first measurement. So far, I have not run the program for more than about 30 minutes.

I expect to see some small randomness in the printed values due to jitter, but in almost all runs of the program I see a steady increase in the difference, often as much as 3 msec per second, and a couple of times much more than that.
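
If a gradual divergence like this is unavoidable, one workaround I am considering is to re-baseline the offset on every sample and only flag large steps; a rough sketch (the class and the 500 ms threshold are hypothetical choices of mine, not anything from the platform):

public class ClockStepDetector {
  // Hypothetical threshold: deltas smaller than this are treated as
  // ordinary drift/jitter rather than an operator changing the clock.
  private static final long STEP_THRESHOLD_MS = 500;

  private long lastOffset = currentOffset();

  private static long currentOffset() {
    return System.currentTimeMillis() - System.nanoTime() / 1000000;
  }

  /** Call periodically; returns true if the wall clock appears to have been stepped. */
  public boolean check() {
    long offset = currentOffset();
    long delta = offset - lastOffset;
    lastOffset = offset; // re-baseline, so gradual drift never accumulates past the threshold
    return Math.abs(delta) > STEP_THRESHOLD_MS;
  }
}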

Does any version of Windows have a mechanism, similar to Linux's, that adjusts the system clock speed to slowly bring the time-of-day clock into sync with an external clock source? Would such a thing influence both timers, or only the wall-clock timer?

(*) I understand that on some architectures, System.nanoTime() will of necessity use the same mechanism as System.currentTimeMillis(). I also believe it's fair to assume that any modern Windows server is not such a hardware architecture. Is this a bad assumption?

(**) Of course, System.currentTimeMillis() will usually have a much larger jitter than System.nanoTime() since its granularity is not 1 msec on most systems.
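
That granularity is easy to observe empirically; a minimal sketch that spins until the reported millisecond value actually changes (on older Windows systems the ticks typically come out around 10-15 ms, though that is not guaranteed):

public class MillisGranularity {
  public static void main(String[] args) {
    long last = System.currentTimeMillis();
    for (int i = 0; i < 10; i++) {
      long next;
      // Busy-wait until currentTimeMillis() reports a new value.
      while ((next = System.currentTimeMillis()) == last) { /* spin */ }
      System.out.println("Tick: " + (next - last) + " ms");
      last = next;
    }
  }
}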

Answer

You might find this Sun/Oracle blog post about JVM timers to be of interest.

Here are a couple of the paragraphs from that article about JVM timers under Windows:


System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information. This time-of-day value is updated at a constant rate regardless of how the timer interrupt has been programmed - depending on the platform this will either be 10ms or 15ms (this value seems tied to the default interrupt period).

System.nanoTime() is implemented using the QueryPerformanceCounter / QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions and as a result the execution time for QPC is on the order of microseconds. In contrast reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency). You can tell if your system uses the ACPI PMT by checking if QueryPerformanceFrequency returns the signature value of 3,579,545 (i.e. 3.58 MHz). If you see a value around 1.19 MHz then your system is using the old 8254 PIT chip. Otherwise you should see a value approximately that of your CPU frequency (modulo any speed throttling or power-management that might be in effect).
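
To get a feel for which implementation a particular machine ends up with, a rough microbenchmark of the per-call cost can hint at it: microsecond-scale calls suggest the PIT/PMT path, while much cheaper calls suggest the TSC. A sketch only; JIT warm-up and loop overhead make the numbers approximate, and the sink variable just keeps the loops from being optimized away:

public class TimerCost {
  private static final int N = 1000000;

  public static void main(String[] args) {
    long sink = 0;

    // Warm-up pass so the JIT has compiled the loops before we measure.
    for (int i = 0; i < N; i++) { sink += System.nanoTime() + System.currentTimeMillis(); }

    long t0 = System.nanoTime();
    for (int i = 0; i < N; i++) { sink += System.nanoTime(); }
    long nanoPerCall = (System.nanoTime() - t0) / N;

    t0 = System.nanoTime();
    for (int i = 0; i < N; i++) { sink += System.currentTimeMillis(); }
    long millisPerCall = (System.nanoTime() - t0) / N;

    System.out.println("nanoTime():          ~" + nanoPerCall + " ns/call");
    System.out.println("currentTimeMillis(): ~" + millisPerCall + " ns/call");
    System.out.println("(ignore: " + sink + ")"); // defeats dead-code elimination
  }
}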
