How precise is the internal clock of a modern PC?


Problem Description

I know that 10 years ago, typical clock precision equaled a system tick, which was in the range of 10-30 ms. Over the past years, precision has been increased in multiple steps. Nowadays, there are ways to measure time intervals in actual nanoseconds. However, the usual frameworks still return time with a precision of only around 15 ms.

My question is: which steps increased the precision, how is it possible to measure in nanoseconds, and why do we still often get less-than-microsecond precision (for instance in .NET)?

(Disclaimer: it strikes me as odd that this was not asked before, so I guess I missed the question when I searched. Please close and point me to it in that case, thanks. I believe this belongs on SO and not on any other SO-family site. I understand the difference between precision and accuracy.)
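The gap the question describes is easy to observe directly. As a minimal sketch (Python used here purely as a stand-in for the .NET APIs the question mentions), compare a wall-clock reading, whose granularity depends on the OS tick, with a monotonic high-resolution counter backed by the best timer hardware the OS exposes:

```python
import time

# Wall-clock time: resolution depends on the OS tick / timer hardware.
wall = time.time()

# Monotonic high-resolution counter, reported in integer nanoseconds.
start = time.perf_counter_ns()
total = sum(range(100_000))  # some work to time
elapsed_ns = time.perf_counter_ns() - start

print(f"elapsed: {elapsed_ns} ns")  # typically resolves sub-microsecond intervals
```

On most systems `perf_counter_ns` resolves intervals far below a microsecond, while the wall-clock source may still advance in coarse steps.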

Solution

It really is a feature of the history of the PC.

The original IBM PC used a chip called the Real Time Clock (RTC), which was battery-backed (do you remember needing to change those batteries?). It ran while the machine was powered off and kept the time. Its frequency was 32.768 kHz (2^15 cycles per second), which made it easy to calculate time on a 16-bit system. The RTC value was then written to CMOS and made available through an interrupt in older operating systems.
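The choice of 32.768 kHz is no accident: since it is exactly 2^15, a binary counter driven by the crystal rolls over once per second, so a 16-bit machine can convert ticks to seconds with a single shift instead of a division. A quick illustration (the tick count is a made-up example):

```python
RTC_HZ = 32_768          # 2**15 cycles per second
assert RTC_HZ == 2 ** 15

# One second elapses each time a 15-bit counter overflows.
ticks = 98_304           # hypothetical tick count: 3 * 32768
seconds = ticks >> 15    # divide by 32768 with a single shift
print(seconds)           # → 3
```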



A newer standard from Microsoft and Intel, the High Precision Event Timer (HPET), specifies a clock speed of at least 10 MHz (see http://www.intel.com/hardwaredesign/hpetspec_1.pdf). Even newer PC architectures put it on the Northbridge controller, where the HPET can run at 100 MHz or even higher. At 10 MHz we should be able to get a resolution of 100 nanoseconds, and at 100 MHz a resolution of 10 nanoseconds.
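Those resolution figures follow directly from the clock period, resolution = 1 / frequency. A quick check:

```python
def resolution_ns(freq_hz: float) -> float:
    """Timer resolution in nanoseconds for a given clock frequency."""
    return 1e9 / freq_hz

print(resolution_ns(10e6))   # 10 MHz HPET  → 100.0 ns
print(resolution_ns(100e6))  # 100 MHz HPET → 10.0 ns
```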



The following operating systems are known not to be able to use the HPET: Windows XP, Windows Server 2003 and earlier Windows versions, and older Linux versions.



The following operating systems are known to be able to use the HPET: Windows Vista, Windows Server 2008, Windows 7, x86-based versions of Mac OS X, Linux systems using the 2.6 kernel, and FreeBSD.



With the Linux kernel, you need the newer "rtc-cmos" hardware clock device driver rather than the original "rtc" driver.

All that said, how do we access this extra resolution? I could cut and paste from previous Stack Overflow articles, but I won't; just search for HPET and you will find answers on how to get finer-grained timers working.


