C++ How to make timer accurate in Linux


Problem description

Consider this code:

#include <iostream>
#include <vector>
#include <functional>
#include <map>
#include <atomic>
#include <memory>
#include <chrono>
#include <thread>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <boost/asio/high_resolution_timer.hpp>

static const uint32_t FREQUENCY = 5000; // Hz
static const uint32_t MKSEC_IN_SEC = 1000000;

std::chrono::microseconds timeout(MKSEC_IN_SEC / FREQUENCY);
boost::asio::io_service ioservice;
boost::asio::high_resolution_timer timer(ioservice);

static std::chrono::high_resolution_clock::time_point lastCallTime = std::chrono::high_resolution_clock::now();
static uint64_t deviationSum = 0;
static uint64_t deviationMin = 100000000;
static uint64_t deviationMax = 0;
static uint32_t counter = 0;

void timerCallback(const boost::system::error_code &err) {
  auto actualTimeout = std::chrono::high_resolution_clock::now() - lastCallTime;
  std::chrono::microseconds actualTimeoutMkSec = std::chrono::duration_cast<std::chrono::microseconds>(actualTimeout);
  long timeoutDeviation = actualTimeoutMkSec.count() - timeout.count();
  // compute |deviation| once; min and max are independent checks, not else-if
  const long deviation = timeoutDeviation < 0 ? -timeoutDeviation : timeoutDeviation;
  deviationSum += deviation;
  if (deviation > deviationMax) {
    deviationMax = deviation;
  }
  if (deviation < deviationMin) {
    deviationMin = deviation;
  }

  ++counter;
  //std::cout << "Actual timeout: " << actualTimeoutMkSec.count() << "\t\tDeviation: " << timeoutDeviation << "\t\tCounter: " << counter << std::endl;

  timer.expires_from_now(timeout);
  timer.async_wait(timerCallback);
  lastCallTime = std::chrono::high_resolution_clock::now();
}

using namespace std::chrono_literals;

int main() {
  std::cout << "Frequency: " << FREQUENCY << " Hz" << std::endl;
  std::cout << "Callback should be called each: " << timeout.count() << " mkSec" << std::endl;
  std::cout << std::endl;

  ioservice.reset();
  timer.expires_from_now(timeout);
  timer.async_wait(timerCallback);
  lastCallTime = std::chrono::high_resolution_clock::now();
  std::thread worker([&] { ioservice.run(); });
  std::this_thread::sleep_for(1s);
  ioservice.stop();
  worker.join();

  std::cout << std::endl << "Messages posted: " << counter << std::endl;
  std::cout << "Frequency deviation: " << FREQUENCY - counter << std::endl;
  std::cout << "Min timeout deviation: " << deviationMin << std::endl;
  std::cout << "Max timeout deviation: " << deviationMax << std::endl;
  std::cout << "Avg timeout deviation: " << deviationSum / counter << std::endl;

  return 0;
}

It runs a timer that calls timerCallback(..) periodically at the specified frequency. In this example, the callback must be called 5000 times per second. If you play with the frequency, you can see that the actual (measured) call frequency differs from the desired one; in fact, the higher the frequency, the larger the deviation. I did some measurements with different frequencies, and here is a summary: https://docs.google.com/spreadsheets/d/1SQtg2slNv-9VPdgS0RD4yKRnyDK1ijKrjVz7BBMSg24/edit?usp=sharing

When the desired frequency is 10000 Hz, the system misses about 10% (~1000) of the calls. When the desired frequency is 100000 Hz, it misses about 40% (~40000) of the calls.

Question: Is it possible to achieve better accuracy in a Linux/C++ environment? How? I need it to work without significant deviation at a frequency of 500000 Hz.

P.S. My first idea was that the body of the timerCallback(..) method itself causes the delay. I measured it: it consistently takes less than 1 microsecond to execute, so it does not affect the process.

Answer

If you need to achieve one call every two-microsecond interval, you'd better anchor to absolute time positions, and not schedule each wait relative to when the previous request finished. You do, however, run into the problem that the processing required in each timeslot could demand more CPU time than the slot itself provides.

If you have a multicore CPU, I'd divide the timeslots among the cores (in a multithreaded approach) so that each core's slot is longer. Suppose you have this requirement on a four-core CPU: then you can allow each thread to execute one call per 8 usec, which is probably more affordable. In this case you use absolute timers (an absolute timer is one that waits until the wall clock reaches a specific absolute time, not a delay from the moment you called it) and offset them by the thread number times the 2 usec slot: with 4 cores you start thread #1 at time T, thread #2 at time T + 2 usec, thread #3 at time T + 4 usec, ... and thread #N at time T + 2*(N-1) usec. Each thread then schedules its next wake-up at the old absolute time plus its period, instead of doing some kind of nsleep(3) call. This way the processing time does not accumulate into the delay, which is most probably what you are experiencing. The POSIX timed waits in the pthread library (e.g. pthread_cond_timedwait(3)) all take absolute times, so you can use them. I think this is the only way you'll be capable of reaching such a hard spec (and prepare to see how the battery suffers, assuming you're in an Android environment).

In this approach the external bus can become a bottleneck, so even if you get it working, it would probably be better to synchronize several machines with NTP (this can be done to the usec level at the speed of actual GBit links) and use different processors running in parallel. Since you don't describe anything about the process you have to repeat so densely, I cannot provide more help on the problem.
