How can I improve PySerial read speed

Question

I'm currently building a machine that uses an Arduino Mega2560 as its main controller. The Arduino is connected to over serial, gets a command, executes it and spits out a bunch of measurement data every 1ms. I have a Raspberry Pi running Python to give the user a nice GUI to send the command, and to present the data in a readable form.

The problem I face: the Arduino is able to spit out 15 bytes of data each millisecond (so that's only 15 kB/s), but the code I'm running can only cope with about 15 bytes every 10 milliseconds, so 1.5 kB/s.
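
A quick sanity check (assuming standard 8N1 framing, i.e. roughly 10 bits per byte on the wire) shows the serial link itself is not the limit at the 460800 baud used in the code below, so the bottleneck has to be on the receiving side:

# Rough capacity check; 8N1 framing (~10 bits per byte) is an assumption.
baud = 460800
link_bytes_per_second = baud / 10        # ~46,080 bytes/s available
required_bytes_per_second = 15 * 1000    # 15 bytes every millisecond
print(link_bytes_per_second > required_bytes_per_second)  # True: the UART has headroom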

When I run cat /dev/ttyACM0 > somefile, I nicely see all datapoints.

I have the following slimmed-down Python code:

import datetime
import time

import serial

# logfilenamePrefix and command are assumed to be defined elsewhere in the
# full program (they come from the GUI part that is not shown here).

# Reset Arduino by opening the serial port and toggling DTR
microprocBusy = True
serialPort = serial.Serial("/dev/ttyACM0", baudrate=460800, timeout=0)
time.sleep(0.22)
serialPort.setDTR(False)
time.sleep(0.22)
serialPort.setDTR(True)
time.sleep(0.10)

logfile = open(logfilenamePrefix + "_" + datetime.datetime.now().isoformat() + '.txt', 'a')

# Bootloader has some timeout, we need to wait for that
serialPort.flushInput()
while serialPort.inWaiting() == 0:
    time.sleep(0.05)

# Wait for the welcome message and log it
time.sleep(0.1)
logfile.write(serialPort.readline().decode('ascii'))
logfile.flush()

# Send command
serialPort.write((command + '\n').encode('ascii'))

# Now, receive data until the Arduino reports it has stopped
while True:
    incomingData = serialPort.readline().decode('ascii')
    logfile.write(incomingData)
    logfile.flush()

    if incomingData[:5] == "FATAL" or incomingData[:6] == "HALTED" or incomingData[:5] == "RESET":
        break
    elif incomingData[:6] == "RESULT":
        resultData = incomingData

logfile.flush()

When I run this, the first ~350 datapoints come in, then I see some mangled data and miss about 2000 datapoints, after which I see another 350 or so datapoints. The CPU usage is at 100% during the process.

What is going wrong? Is PySerial poorly optimized, or is there some mistake in my code I missed? I could just run cat /dev/ttyACM0 > somefile from Python and then read that file, but that's not really a nice solution, is it?
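
For completeness, that cat workaround would look roughly like this from Python (a minimal sketch; it assumes the tty has already been set to the right baud rate, e.g. with stty or a previous PySerial open, and the capture file name is just a placeholder):

import subprocess
import time

# Capture the raw stream with cat, exactly like the shell command above.
# Assumption: the tty is already configured for 460800 baud.
capture_path = "capture.txt"          # placeholder file name
with open(capture_path, "wb") as capture:
    reader = subprocess.Popen(["cat", "/dev/ttyACM0"], stdout=capture)
    try:
        time.sleep(10)                # let it capture for a while
    finally:
        reader.terminate()
        reader.wait()

# Parse the captured datapoints afterwards.
with open(capture_path, "r") as capture:
    for line in capture:
        pass                          # handle each datapoint here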

Thanks a lot :)

Answer

I've switched from PySerial to PyTTY, which solves my problem. Just plugging it into this code (with some small changes, such as replacing serialPort.inWaiting() == 0 with serialPort.peek() == b'') makes my code able to handle the datastream without going above 50% CPU usage, which means it is at least 10x as fast. I'm still using PySerial to set the DTR lines, though.

So, I guess the answer to the question is that PySerial is indeed poorly optimised.
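
For anyone who wants to stay on PySerial, a generic buffering pattern (a rough sketch of my own, not part of the original answer and not the PyTTY fix) is to read whatever is waiting in one call and split the lines in Python, which avoids the per-byte overhead of readline():

import serial

# Read in large chunks instead of letting readline() pull one byte at a
# time, then split the lines in Python.
port = serial.Serial("/dev/ttyACM0", baudrate=460800, timeout=0.05)
pending = b""
done = False
while not done:
    pending += port.read(port.in_waiting or 1)   # grab everything available
    while b"\n" in pending:
        raw_line, pending = pending.split(b"\n", 1)
        line = raw_line.decode("ascii", errors="replace")
        if line.startswith(("FATAL", "HALTED", "RESET")):
            done = True
            break
        # handle the other datapoints (e.g. RESULT lines) here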
