Why is the microsecond timestamp repetitive when using (a private) gettimeofday(), i.e. epoch time?


Problem description

I am printing microseconds continuously using gettimeofday(). As you can see from the program output, the time is not updated at microsecond intervals; instead, it repeats for a number of samples and then jumps, not by microseconds but by milliseconds.

while(1)
{
  gettimeofday(&capture_time, NULL);
  printf(".%ld\n", capture_time.tv_usec);
}

Program output:

.414719
.414719
.414719
.414719
.430344
.430344
.430344
.430344

 etc.

I want the output to increment sequentially, like:

.414719
.414720
.414721
.414722
.414723

or

.414723, .414723+x, .414723+2x, .414723 +3x + ...+ .414723+nx

It seems that the microseconds are not refreshed when I read them from capture_time.tv_usec.

================================= // Full program

#include <iostream>
#include <windows.h>
#include <conio.h>
#include <time.h>
#include <stdio.h>

#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000Ui64
#else
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000ULL
#endif

struct timezone 
{
  int  tz_minuteswest; /* minutes W of Greenwich */
  int  tz_dsttime;     /* type of dst correction */
};

timeval capture_time;  // structure

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
  FILETIME ft;
  unsigned __int64 tmpres = 0;
  static int tzflag;

  if (NULL != tv)
  {
    GetSystemTimeAsFileTime(&ft);

    tmpres |= ft.dwHighDateTime;
    tmpres <<= 32;
    tmpres |= ft.dwLowDateTime;

    /* convert 100-ns FILETIME units to microseconds first, then shift to the Unix epoch */
    tmpres /= 10;                        /* 100-ns intervals -> microseconds */
    tmpres -= DELTA_EPOCH_IN_MICROSECS;  /* 1601-01-01 -> 1970-01-01 */
    tv->tv_sec = (long)(tmpres / 1000000UL);
    tv->tv_usec = (long)(tmpres % 1000000UL);
  }

  if (NULL != tz)
  {
    if (!tzflag)
    {
      _tzset();
      tzflag++;
    }

    tz->tz_minuteswest = _timezone / 60;
    tz->tz_dsttime = _daylight;
  }

  return 0;
}

int main()
{
   while(1)
  {     
    gettimeofday(&capture_time, NULL);     
    printf(".%ld\n", capture_time.tv_usec);// JUST PRINTING MICROSECONDS    
   }    
}

Solution

The change in time you observe is from 0.414719 s to 0.430344 s. The difference is 15.615 ms. The fact that the number is displayed with microsecond precision does not mean that it is incremented by one microsecond. In fact, I would have expected 15.625 ms. This is the system time increment on standard hardware. I've taken a closer look here and here. This is called the granularity of the system time.
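You can observe this granularity directly by polling the system time in a tight loop and printing only the points where the value actually changes. This is a minimal sketch of my own (not part of the original program), reading GetSystemTimeAsFileTime and treating its value as 100-ns units:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER prev = { 0 }, cur;
    int changes = 0;

    while (changes < 10)
    {
        GetSystemTimeAsFileTime(&ft);       // 100-ns units since 1601-01-01
        cur.LowPart  = ft.dwLowDateTime;
        cur.HighPart = ft.dwHighDateTime;

        if (cur.QuadPart != prev.QuadPart)  // the system time just ticked
        {
            if (prev.QuadPart != 0)
                printf("step: %.3f ms\n",
                       (cur.QuadPart - prev.QuadPart) / 10000.0);
            prev = cur;
            ++changes;
        }
    }
    return 0;
}

On a default Windows configuration this typically prints steps of about 15.625 ms (1/64 s), matching the jump from .414719 to .430344 in the question's output; after timeBeginPeriod(1) (see below) the step shrinks toward 1 ms.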

Windows:

However, there is a way to improve this and reduce the granularity: the multimedia timers. In particular, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.

The code:

#include <windows.h>
#include <mmsystem.h>                   // timeGetDevCaps, timeBeginPeriod (link with winmm.lib)

#define TARGET_PERIOD 1                 // 1-millisecond target interrupt period

TIMECAPS tc;
UINT     wTimerRes;

// This call queries the system's timer hardware capabilities;
// it returns wPeriodMin and wPeriodMax in the TIMECAPS structure.
if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
  // Error; application can't continue.
}

// Find the minimum possible interrupt period:
wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD), tc.wPeriodMax);

// ...and set it (undo later with timeEndPeriod(wTimerRes)):
timeBeginPeriod(wTimerRes);

This will force the system to run at its maximum interrupt frequency. As a consequence, the system time will also be updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.

When you need resolution/granularity beyond this, you have to look into QueryPerformanceCounter. But it is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS considers this frequency a constant and will always report the same value. However, the true frequency produced by the hardware differs from the reported value: it has an offset and it shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
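For illustration, here is a minimal sketch of a microsecond timestamp built on these two calls (the helper name qpc_microseconds is my own, not a Windows API):

#include <windows.h>
#include <stdio.h>

// Hypothetical helper: microseconds elapsed since the first call,
// derived from the high-resolution performance counter.
static long long qpc_microseconds(void)
{
    static LARGE_INTEGER freq, start;
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)                  // one-time initialization
    {
        QueryPerformanceFrequency(&freq);    // counts per second (reported as a constant)
        QueryPerformanceCounter(&start);
    }
    QueryPerformanceCounter(&now);

    // Fine for short runs; for very long intervals, split into whole seconds
    // plus remainder to avoid overflow of the multiplication.
    return (now.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
}

int main(void)
{
    for (int i = 0; i < 5; ++i)
        printf("%lld us\n", qpc_microseconds());
    return 0;
}

Successive values from this counter do increase at (sub-)microsecond resolution, which is the strictly increasing behaviour the question asks for, subject to the drift caveats above.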

Linux:

The situation looks somewhat different on Linux. See this to get an idea. Linux mixes information from the CMOS clock (the seconds since the epoch), using the function getnstimeofday, with information from a high-frequency counter (the microseconds), using the function timekeeping_get_ns. This is not trivial and is questionable in terms of accuracy, since the two sources are backed by different hardware. They are not phase-locked, so it is possible to get more or less than one million microseconds per second.
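If the goal on Linux is simply strictly increasing high-resolution timestamps rather than wall-clock time, a common alternative (my suggestion, not part of the answer above) is clock_gettime with CLOCK_MONOTONIC:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    for (int i = 0; i < 5; ++i)
    {
        clock_gettime(CLOCK_MONOTONIC, &ts);   // nanosecond fields, monotonic clock
        printf("%ld.%06ld\n", (long)ts.tv_sec, ts.tv_nsec / 1000);  // seconds.microseconds
    }
    return 0;
}

On older glibc versions this needs -lrt at link time; on current systems no extra library is required.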
