Why is the microsecond timestamp repetitive using (a private) gettimeofday(), i.e. epoch?


Problem Description

I am printing microseconds continuously using gettimeofday(). As shown in the program output below, the time is not updated at every microsecond; instead the same value is repeated for a number of samples and then jumps, not by microseconds but by milliseconds.

while(1)
{
  gettimeofday(&capture_time, NULL);
  printf(".%ld\n", capture_time.tv_usec);
}

Program output:

.414719
.414719
.414719
.414719
.430344
.430344
.430344
.430344

etc.

I want the output to increment sequentially like,

.414719
.414720
.414721
.414722
.414723

or

.414723, .414723+x, .414723+2x, .414723+3x, ..., .414723+nx

It seems that the microsecond value is not refreshed when I read it from capture_time.tv_usec.

// Full program

#include <iostream>
#include <windows.h>
#include <conio.h>
#include <time.h>
#include <stdio.h>

#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000Ui64
#else
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000ULL
#endif

struct timezone 
{
  int  tz_minuteswest; /* minutes W of Greenwich */
  int  tz_dsttime;     /* type of dst correction */
};

timeval capture_time;  // structure

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
  FILETIME ft;
  unsigned __int64 tmpres = 0;
  static int tzflag;

  if (NULL != tv)
  {
    GetSystemTimeAsFileTime(&ft);

    tmpres |= ft.dwHighDateTime;
    tmpres <<= 32;
    tmpres |= ft.dwLowDateTime;

    /*converting file time to unix epoch*/
    tmpres -= DELTA_EPOCH_IN_MICROSECS; 
    tmpres /= 10;  /*convert into microseconds*/
    tv->tv_sec = (long)(tmpres / 1000000UL);
    tv->tv_usec = (long)(tmpres % 1000000UL);
  }

  if (NULL != tz)
  {
    if (!tzflag)
    {
      _tzset();
      tzflag++;
    }

    tz->tz_minuteswest = _timezone / 60;
    tz->tz_dsttime = _daylight;
  }

  return 0;
}

int main()
{
   while(1)
  {     
    gettimeofday(&capture_time, NULL);     
    printf(".%ld\n", capture_time.tv_usec);// JUST PRINTING MICROSECONDS    
   }    
}

Solution

The change in time you observe is from 0.414719 s to 0.430344 s. The difference is 15.625 ms (430344 - 414719 = 15625 microseconds). The fact that the value is represented in microseconds does not mean that it advances by 1 microsecond at a time. 15.625 ms (1/64 s) is exactly the system time increment on standard hardware; I've taken a closer look at this here and here. This is called the granularity of the system time.
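A quick way to see this granularity directly is to ignore repeated readings and print only the size of each jump. The following is a minimal sketch, assuming the private gettimeofday() wrapper from the question is available; on standard hardware the printed step is typically about 15625 us:

// Sketch: print the size of each jump of the reported system time.
long long prev = -1;
for (int changes = 0; changes < 10; )
{
  timeval t;
  gettimeofday(&t, NULL);
  long long now = (long long)t.tv_sec * 1000000LL + t.tv_usec;
  if (now != prev)                          // the value only changes once per timer interrupt
  {
    if (prev >= 0)
      printf("step: %lld us\n", now - prev);  // typically ~15625 us (1/64 s)
    prev = now;
    ++changes;
  }
}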

Windows:

However, there is a way to improve this, i.e. to reduce the granularity: the multimedia timers. In particular, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.

The code:

#define TARGET_PERIOD 1         // 1-millisecond target interrupt period
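// Note: TIMECAPS, timeGetDevCaps() and timeBeginPeriod() belong to the Windows
// multimedia timer API; they require #include <mmsystem.h> and linking winmm.lib.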


TIMECAPS tc;
UINT     wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR) 
// this call queries the system timer's hardware capabilities
// and returns wPeriodMin and wPeriodMax in the TIMECAPS structure
{
  // Error; application can't continue.
}

// finding the minimum possible interrupt period:

wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD ), tc.wPeriodMax);
// and setting the minimum period:

timeBeginPeriod(wTimerRes); 

This will force the system to run at its maximum interrupt frequency. As a consequence, the system time will also be updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.
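One practical note (not part of the original answer): each successful timeBeginPeriod() call should eventually be matched by a timeEndPeriod() with the same value, so the increased interrupt frequency does not persist longer than needed. A sketch of the usual pattern:

// Sketch: raise the timer resolution only for the time-critical part.
if (timeBeginPeriod(wTimerRes) == TIMERR_NOERROR)
{
  // ... time-critical work that benefits from the finer system time ...

  timeEndPeriod(wTimerRes);   // must match the earlier timeBeginPeriod() call
}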

When you need resolution/granularity beyond this, you have to look into QueryPerformanceCounter. But it is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS considers this frequency a constant and will report the same value all the time. However, some hardware produces this frequency, and the true frequency differs from the reported value: it has an offset and shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
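For illustration, a minimal sketch of deriving microsecond timestamps from QueryPerformanceCounter(); it assumes short measurement intervals, so the drift discussed above stays negligible and the intermediate multiplication does not overflow:

#include <windows.h>
#include <stdio.h>

int main()
{
  LARGE_INTEGER freq, start, now;
  QueryPerformanceFrequency(&freq);   // counts per second, reported as a constant by the OS
  QueryPerformanceCounter(&start);

  for (int i = 0; i < 5; ++i)
  {
    QueryPerformanceCounter(&now);
    long long us = (now.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
    printf(".%06lld\n", us % 1000000);  // microsecond part, analogous to tv_usec
  }
  return 0;
}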

Linux:

The situation looks somewhat different on Linux. See this to get an idea. Linux mixes information from the CMOS clock, obtained using the function getnstimeofday (for the seconds since the epoch), with information from a high-frequency counter (for the microseconds), obtained using the function timekeeping_get_ns. This is not trivial and is questionable in terms of accuracy, since the two sources are backed by different hardware. They are not phase-locked, so it is possible to get more or less than one million microseconds per second.
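As a side note (not from the answer above), a common way on Linux to read timestamps with finer-than-microsecond resolution from user space is clock_gettime(); a minimal sketch:

// Minimal sketch (Linux): nanosecond-resolution timestamps.
#include <stdio.h>
#include <time.h>

int main(void)
{
  struct timespec ts;
  for (int i = 0; i < 5; ++i)
  {
    clock_gettime(CLOCK_REALTIME, &ts);   // or CLOCK_MONOTONIC for interval measurements
    printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
  }
  return 0;
}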
