C++ Windows time


Problem Description


I have a problem in using time. I want to get microseconds on Windows using C++.

I can't find a way.

Solution

The "canonical" answer was given by unwind :

One popular way is using the QueryPerformanceCounter() call.
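
For context, here is a minimal self-contained sketch of that approach. This is just the plain QPC pattern for interval measurement, not the calibrated clock built below; the Sleep(15) is a stand-in for whatever work you are timing:

#include <windows.h>
#include <cstdio>

int main()
{
  LARGE_INTEGER freq, t0, t1;
  QueryPerformanceFrequency(&freq);  // counter ticks per second

  QueryPerformanceCounter(&t0);
  Sleep(15);                         // stand-in for the work being timed
  QueryPerformanceCounter(&t1);

  // multiply before dividing to keep precision in integer arithmetic
  long long us = (t1.QuadPart - t0.QuadPart) * 1000000ll / freq.QuadPart;
  printf("elapsed: %lld us\n", us);
  return 0;
}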

There are, however, a few problems with this method:

  1. It's intended for measuring time intervals, not absolute time. This means you have to write code to establish the "epoch time" from which you will measure precise intervals. This is called calibration.
  2. As you calibrate your clock, you also need to periodically adjust it so it never gets too far out of sync with your system clock (this is called drift).
  3. QueryPerformanceCounter is not implemented in user space; this means a context switch is needed to call the kernel side of the implementation, and that is relatively expensive (around 0.7 microseconds). This seems to be required to support legacy hardware.

Not all is lost, though. Points 1 and 2 are something you can address with a bit of coding, and 3 can be replaced with a direct call to RDTSC (available in newer versions of Visual C++ via the __rdtsc() intrinsic), as long as you know the accurate CPU clock frequency. Although on older CPUs such a call would be susceptible to changes in the CPU's internal clock speed, on all newer Intel and AMD CPUs it is guaranteed to give fairly accurate results and won't be affected by changes in the CPU clock (e.g. power-saving features).

Let's get started with 1. Here is the data structure that holds the calibration data:

struct init
{
  long long stamp; // last adjustment time
  long long epoch; // last sync time as FILETIME
  long long start; // counter ticks to match epoch
  long long freq;  // counter frequency (ticks per 10ms)

  void sync(int sleep);
};

init                  data_[2] = {};
const init* volatile  init_ = &data_[0];

Here is the code for the initial calibration; it has to be given time (in milliseconds) to wait for the clock to move; I've found that 500 milliseconds gives pretty good results (the shorter the time, the less accurate the calibration). For the purpose of calibration we are going to use QueryPerformanceCounter() etc. You only need to call it for data_[0], since data_[1] will be updated by the periodic clock adjustment (below).

void init::sync(int sleep)
{
  LARGE_INTEGER t1, t2, p1, p2, r1, r2, f;
  int cpu[4] = {};

  // prepare for rdtsc calibration - affinity and priority
  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
  SetThreadAffinityMask(GetCurrentThread(), 2);
  Sleep(10);

  // frequency for time measurement during calibration
  QueryPerformanceFrequency(&f);

  // for explanation why RDTSC is safe on modern CPUs, look for "Constant TSC" and "Invariant TSC" in
  // Intel(R) 64 and IA-32 Architectures Software Developer’s Manual (document 253668.pdf)

  __cpuid(cpu, 0); // flush CPU pipeline
  r1.QuadPart = __rdtsc();
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p1);

  // sleep some time, doesn't matter it's not accurate.
  Sleep(sleep);

  // wait for the system clock to move, so we have exact epoch
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    r2.QuadPart = __rdtsc();
  } while(t2.QuadPart == t1.QuadPart);

  // measure how much time has passed exactly, using more expensive QPC
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p2);

  stamp = t2.QuadPart;
  epoch = t2.QuadPart;
  start = r2.QuadPart;

  // calculate counter ticks per 10ms
  freq = f.QuadPart * (r2.QuadPart-r1.QuadPart) / 100 / (p2.QuadPart-p1.QuadPart);

  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
  SetThreadAffinityMask(GetCurrentThread(), 0xFF);
}

With good calibration data you can calculate the exact time from a cheap RDTSC call (I measured the call and calculation to be ~25 nanoseconds on my machine). There are three things to note:

  1. The return type is binary compatible with the FILETIME structure and is precise to 100 ns, unlike GetSystemTimeAsFileTime (which increments in intervals of 10-30 ms or so, or 1 millisecond at best).

  2. In order to avoid expensive integer-to-double-to-integer conversions, the whole calculation is performed in 64-bit integers. Even though these can hold huge numbers, there is a real risk of integer overflow, and so start must be brought forward periodically to avoid it. This is done in the clock adjustment.

  3. We are making a copy of the calibration data, because it might have been updated during our call by a clock adjustment in another thread.

Here is the code to read the current time with high precision. The return value is binary compatible with FILETIME, i.e. the number of 100-nanosecond intervals since Jan 1, 1601.

long long now()
{
  // must make a copy
  const init* it = init_;
  // __cpuid(cpu, 0) - no need to flush CPU pipeline here
  const long long p = __rdtsc();
  // time passed from epoch in counter ticks
  long long d = (p - it->start);
  if (d > 0x80000000000ll)
  {
    // closing to integer overflow, must adjust now
    adjust();
  }
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;
  // and add to epoch, so we have proper FILETIME
  d += it->epoch;
  return d;
}

For clock adjustment, we need to capture the exact time (as provided by the system clock) and compare it against our clock; this gives us the drift value. Next we use a simple formula to calculate the "adjusted" CPU frequency, to make our clock meet the system clock at the time of the next adjustment. Thus it is important that adjustments are made at regular intervals; I've found that it works well when called at 15-minute intervals. I use CreateTimerQueueTimer, called once at program startup to schedule the adjustment calls (not demonstrated here, but see the sketch at the end).

The slight problem with capturing an accurate system time (for the purpose of calculating drift) is that we need to wait for the system clock to move, and that can take up to 30 milliseconds or so (which is a long time). If the adjustment is not performed, we risk integer overflow inside the function now(), not to mention uncorrected drift from the system clock. There is built-in protection against overflow in now(), but we really don't want to trigger it synchronously in a thread that happened to call now() at the wrong moment.

Here is the code for the periodic clock adjustment; the clock drift ends up in r->epoch - r->stamp:

void adjust()
{
  // must make a copy
  const init* it = init_;
  init* r = (init_ == &data_[0] ? &data_[1] : &data_[0]);
  LARGE_INTEGER t1, t2;

  // wait for the system clock to move, so we have exact time to compare against
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  long long p = 0;
  int cpu[4] = {};
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    p = __rdtsc();
  } while (t2.QuadPart == t1.QuadPart);

  long long d = (p - it->start);
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;

  r->start = p;
  r->epoch = d + it->epoch;
  r->stamp = t2.QuadPart;

  const long long dt1 = t2.QuadPart - it->epoch;
  const long long dt2 = t2.QuadPart - it->stamp;
  const double s1 = (double) d / dt1;
  const double s2 = (double) d / dt2;

  r->freq = (long long) (it->freq * (s1 + s2 - 1) + 0.5);

  InterlockedExchangePointer((volatile PVOID*) &init_, r);

  // if you have log output, here is good point to log calibration results
}
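
A brief note on the frequency formula, as I read the code above (treat this as an interpretation, not the author's own derivation). With d the elapsed time measured by our clock since the last adjustment:

  s1 = d / (t2 - epoch)   // our elapsed time vs. elapsed time relative to our last reported epoch
  s2 = d / (t2 - stamp)   // our elapsed time vs. the system clock's elapsed time

s2 captures the pure rate error over the last interval, while s1 additionally folds in the offset accumulated so far (epoch is what our clock reported at the last adjustment, stamp is what the system clock said). Multiplying the old frequency by (s1 + s2 - 1) therefore corrects the rate and steers out the offset at once: when both clocks agree, s1 = s2 = 1 and the frequency is unchanged; when our clock runs ahead, the tick count per 10 ms grows, which slows the reported time until the clocks meet again.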

Lastly, two utility functions. One converts a FILETIME (including the output of now()) to a SYSTEMTIME while preserving the microseconds in a separate int. The other returns the counter frequency, so your program can use __rdtsc() directly for accurate measurement of time intervals (with nanosecond precision).

void convert(SYSTEMTIME& s, int &us, long long f)
{
  LARGE_INTEGER i;
  i.QuadPart = f;
  FileTimeToSystemTime((FILETIME*) (&i.u), &s);
  s.wMilliseconds = 0;
  LARGE_INTEGER t;
  SystemTimeToFileTime(&s, (FILETIME*) (&t.u));
  us = (int) (i.QuadPart - t.QuadPart)/10;
}

long long frequency()
{
  // must make a copy
  const init* it = init_;
  return it->freq * 100;
}
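
To show how the pieces fit together, here is a hypothetical usage sketch (the output formatting and the timed section are mine, not part of the original answer; it assumes the declarations above are in scope, plus <windows.h>, <intrin.h> and <cstdio>):

void example()
{
  // wall time: FILETIME-compatible value split into SYSTEMTIME + microseconds
  SYSTEMTIME s;
  int us;
  convert(s, us, now());
  printf("%02d:%02d:%02d.%06d\n", s.wHour, s.wMinute, s.wSecond, us);

  // interval: raw counter ticks converted via frequency() (ticks per second)
  const long long f = frequency();
  const long long c0 = (long long) __rdtsc();
  // ... code being timed ...
  const long long c1 = (long long) __rdtsc();
  // multiply before dividing; fine for short intervals (a few seconds at most)
  printf("took %lld ns\n", (c1 - c0) * 1000000000ll / f);
}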

Well, of course none of the above is more accurate than your system clock, which is unlikely to be accurate to better than a few hundred milliseconds. The purpose of a precise clock (as opposed to an accurate one), as implemented above, is to provide a single measure which can be used for both:

  1. cheap and very accurate measurement of time intervals (not wall time),
  2. a much less accurate, but monotonic measure of wall time, consistent with the above.

I think it does this pretty well. An example use is logging, where one can use timestamps not only to find the time of events, but also to reason about internal program timings, latency (in microseconds), etc.

I leave the plumbing (the call to the initial calibration, scheduling the adjustments) as an exercise for gentle readers; one possible sketch follows.
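
For completeness, here is one possible shape of that plumbing, under my own assumptions: a forward declaration of adjust() (so that now() compiles before adjust() is defined), a 500 ms initial calibration, and the 15-minute period suggested above:

#include <windows.h>

void adjust();  // defined above; now() needs to see a declaration first

// timer-queue callback with the WAITORTIMERCALLBACK signature
static VOID CALLBACK adjust_cb(PVOID, BOOLEAN)
{
  adjust();
}

bool start_clock()
{
  data_[0].sync(500);  // initial calibration; 500 ms gave good results above

  // schedule adjust() every 15 minutes on the default timer queue
  HANDLE timer = 0;
  return CreateTimerQueueTimer(&timer, 0, adjust_cb, 0,
                               15 * 60 * 1000,  // first due time, ms
                               15 * 60 * 1000,  // period, ms
                               WT_EXECUTEDEFAULT) != FALSE;
}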
