Convert an h264 byte string to OpenCV images

Problem description

In Python, how do I convert an h264 byte string to images OpenCV can read, only keeping the latest image?

Long version:

Hi everyone.

Working in Python, I'm trying to get the output from adb screenrecord piped in a way that allows me to capture a frame whenever I need it and use it with OpenCV. As I understand, I need to constantly read the stream because it's h264.

I've tried multiple things to get it working and concluded that I needed to ask for specific help.

The following gets me the stream I need and works very well when I print stream.stdout.read(n).

import subprocess as sp

adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
stream = sp.Popen(adbCmd, stdout = sp.PIPE, universal_newlines = True)

Universal newlines was necessary to get it to work on Windows.

Doing:

sp.call(['ffplay', '-'], stdin = stream.stdout, universal_newlines = True)

Works.

The problem is I am now trying to use ffmpeg to take the input h264 stream and output as many frames as possible, overwriting the last frame if needed.

ffmpegCmd = ['ffmpeg', '-f', 'image2pipe', '-pix_fmt', 'bgr24', '-vcodec', 'h264', 'fps=30', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE, universal_newlines = True)

This is what I think should be used, but I always get the error "Output file #0 does not contain any stream".

Turns out the universal_newlines option was ruining the line endings and gradually corrupting the output. Also, the ffmpeg command was wrong, see LordNeckbeard's answer.

Here's the correct ffmpeg command to achieve this:

ffmpegCmd = ['ffmpeg', '-i', '-', '-f', 'rawvideo', '-vcodec', 'bmp', '-vf', 'fps=5', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE)

And then to convert the result into an OpenCV image, you do the following:

import cv2
import numpy as np

fileSizeBytes = ffmpeg.stdout.read(6)
# The BMP header stores the total file size as a little-endian 32-bit
# integer at byte offsets 2-5; read it to know how long this frame is.
fileSize = 0
for i in range(4):
    fileSize += fileSizeBytes[i + 2] * 256 ** i
bmpData = fileSizeBytes + ffmpeg.stdout.read(fileSize - 6)
image = cv2.imdecode(np.frombuffer(bmpData, dtype=np.uint8), cv2.IMREAD_COLOR)

This will get every single frame of a stream as an OpenCV image.
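
For example (not from the original post, just a minimal sketch reusing the cv2/numpy snippet above), the per-frame read can be wrapped in a loop for a simple live preview. The window name and the 'q'-to-quit handling are arbitrary choices here.

# Minimal sketch: repeatedly decode BMP frames from the ffmpeg pipe and show them.
while True:
    header = ffmpeg.stdout.read(6)          # BMP magic ("BM") + 4-byte file size
    if len(header) < 6:
        break                               # pipe closed, stream ended
    size = int.from_bytes(header[2:6], 'little')
    frame = cv2.imdecode(
        np.frombuffer(header + ffmpeg.stdout.read(size - 6), dtype=np.uint8),
        cv2.IMREAD_COLOR)
    cv2.imshow('screen', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()

Each read blocks until ffmpeg has written a complete BMP to the pipe, so the loop effectively runs at the fps set in the ffmpeg command.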

Answer

Use any of the following:

ffmpeg -i - -pix_fmt bgr24 -f rawvideo -
ffmpeg -i pipe: -pix_fmt bgr24 -f rawvideo pipe:
ffmpeg -i pipe:0 -pix_fmt bgr24 -f rawvideo pipe:1

  • You didn't provide much info about your input so you may need to add additional input options.

  • You didn't specify your desired output format so I just chose rawvideo. You can see a list of supported output formats (muxers) with ffmpeg -muxers (or ffmpeg -formats if your ffmpeg is outdated). Not all are suitable for piping, such as MP4.

  • See FFmpeg protocols: pipe.
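
Not part of the answer itself, but to illustrate the rawvideo route in Python: raw bgr24 output has no container, so each frame is exactly width * height * 3 bytes and can be reshaped straight into an OpenCV image with no cv2.imdecode call. The 1280x720 resolution below is only a placeholder, and stream is the adb screenrecord Popen from the question.

import subprocess as sp
import numpy as np

# Placeholder resolution; substitute the device's actual screen size.
width, height = 1280, 720

ffmpegCmd = ['ffmpeg', '-i', '-', '-pix_fmt', 'bgr24', '-f', 'rawvideo', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin=stream.stdout, stdout=sp.PIPE)

# One packed bgr24 frame, read directly from the pipe.
frameSize = width * height * 3
raw = ffmpeg.stdout.read(frameSize)
if len(raw) == frameSize:
    image = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 3))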
