Python, Raspberry Pi, call a task every 10 milliseconds precisely


Question

I'm currently trying to have a function called every 10 ms to acquire data from a sensor.

Basically I was triggering the callback from a GPIO interrupt, but I changed my sensor and the one I'm currently using doesn't have an INT pin to drive the callback.

So my goal is to have the same behaviour, but with an internal interrupt generated by a timer.

I tried this, from this topic:

import threading
import time

def work():
    # re-arm the timer first, then do the work
    threading.Timer(0.25, work).start()
    print(time.time())
    print("stackoverflow")

work()

But when I run it I can see that the timer is not really precise, and it drifts over time, as you can see.

1494418413.1584847
stackoverflow
1494418413.1686869
stackoverflow
1494418413.1788757
stackoverflow
1494418413.1890721
stackoverflow
1494418413.1992736
stackoverflow
1494418413.2094712
stackoverflow
1494418413.2196639
stackoverflow
1494418413.2298684
stackoverflow
1494418413.2400634
stackoverflow
1494418413.2502584
stackoverflow
1494418413.2604961
stackoverflow
1494418413.270702
stackoverflow
1494418413.2808678
stackoverflow
1494418413.2910736
stackoverflow
1494418413.301277
stackoverflow

So the timer drifts by about 0.2 milliseconds every 10 milliseconds (roughly 20 ms of accumulated error per second), which is quite a big bias after a few seconds.

I know that Python is not really made for "real-time", but I think there should be a way to do it.

If someone has already had to handle time constraints with Python, I would be glad to get some advice.

Thanks.

Answer

This code works on my laptop - it logs the delta between the target and actual times. The main thing is to minimise what is done in the work() function, because e.g. printing and scrolling the screen can take a long time.

The key thing is to start the next timer based on the difference between the time when that call is made and the target time.
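
To make that idea concrete before the full instrumented version below, here is a minimal sketch of the rescheduling pattern. It is my sketch, not the answer's code: it uses time.perf_counter() (time.clock(), used in the original answer, was removed in Python 3.8) and stops after 100 ticks just so it terminates.

import threading
import time

INTERVAL = 0.01   # 10 ms target period
ticks = 0

def work(target):
    global ticks
    ticks += 1
    # ... read the sensor here ...
    if ticks >= 100:   # stop after ~1 s, just for this sketch
        return
    # schedule against the absolute next deadline so this call's lateness
    # does not carry over; clamp at 0 in case we are already past it
    delay = max(0.0, target + INTERVAL - time.perf_counter())
    threading.Timer(delay, work, [target + INTERVAL]).start()

work(time.perf_counter())

Each call computes its delay from the fixed deadline grid rather than sleeping a constant 10 ms, so scheduling error stays bounded instead of accumulating.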

I slowed the interval down to 0.1 s so it is easier to see the jitter, which on my Win7 x64 can exceed 10 ms - that would cause problems by passing a negative value to the Timer() call :-o

This logs 100 samples, then prints them - if you redirect the output to a .csv file you can load it into Excel to display graphs.

from multiprocessing import Queue
import threading
import time

# accumulated record of the difference between the target and actual times
actualdeltas = []

INTERVAL = 0.1

def work(queue, target):
    # first thing: record the jitter - the difference between target and actual time
    # (perf_counter() is a high-resolution monotonic clock; the original
    # time.clock() was removed in Python 3.8)
    actualdeltas.append(time.perf_counter() - target + INTERVAL)
    if len(actualdeltas) > 100:
        # print the accumulated deltas, then exit
        for d in actualdeltas:
            print(d)
        return
    # start the next timer relative to the absolute target time,
    # so this call's lateness does not accumulate into the next one
    threading.Timer(target - time.perf_counter(), work, [queue, target + INTERVAL]).start()

myQueue = Queue()

target = time.perf_counter() + INTERVAL
work(myQueue, target)

Typical output (i.e. don't rely on millisecond timing on Windows in Python):

0.00947008617187
0.0029628920052
0.0121824719378
0.00582923077099
0.00131316206917
0.0105631524709
0.00437298744466
-0.000251418553351
0.00897956530515
0.0028528821332
0.0118192949105
0.00546301269675
0.0145723546788
0.00910063698529
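
For completeness, the same keep-an-absolute-deadline idea also works without threads, as a plain loop. This is a sketch of my own rather than part of the answer, with acquire() standing in as a placeholder for the sensor read.

import time

INTERVAL = 0.01   # 10 ms period

def acquire():
    pass   # placeholder for the actual sensor read

deadline = time.perf_counter()
for _ in range(1000):
    deadline += INTERVAL
    # sleep until the absolute deadline instead of a fixed 10 ms,
    # so per-iteration overhead is absorbed rather than accumulated
    time.sleep(max(0.0, deadline - time.perf_counter()))
    acquire()

On a stock (non-real-time) Linux kernel such as the Pi's, individual wake-ups can still jitter by a few milliseconds, but the long-run rate stays at 10 ms because the errors do not accumulate.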
