Convert an h264 byte string to OpenCV images


Problem Description


In Python, how do I convert an h264 byte string to images OpenCV can read, only keeping the latest image?

Long version:

Hi everyone.

Working in Python, I'm trying to get the output from adb screenrecord piped in a way that allows me to capture a frame whenever I need it and use it with OpenCV. As I understand, I need to constantly read the stream because it's h264.

I've tried multiple things to get it working and concluded that I needed to ask for specific help.

The following gets me the stream I need and works very well when I print stream.stdout.read(n).

import subprocess as sp

adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
stream = sp.Popen(adbCmd, stdout = sp.PIPE, universal_newlines = True)

Universal newlines was necessary to get it to work on Windows.

Doing:

sp.call(['ffplay', '-'], stdin = stream.stdout, universal_newlines = True)

Works.

The problem is I am now trying to use ffmpeg to take the input h264 stream and output as many frames as possible, overwriting the last frame if needed.

ffmpegCmd = ['ffmpeg', '-f', 'image2pipe', '-pix_fmt', 'bgr24', '-vcodec', 'h264', 'fps=30', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE, universal_newlines = True)

This is what I think should be used, but I always get the error "Output file #0 does not contain any stream".

Edit:

Final Answer

Turns out the universal_newlines option was translating line endings in the binary stream and gradually corrupting the output. Also, the ffmpeg command was wrong, see LordNeckbeard's answer.

Here's the correct ffmpeg command for what I needed:

# Re-encode the incoming h264 as one BMP image per frame at 5 fps and write the BMPs to stdout
# (note: no universal_newlines, the data must stay binary)
ffmpegCmd = ['ffmpeg', '-i', '-', '-f', 'rawvideo', '-vcodec', 'bmp', '-vf', 'fps=5', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE)

And then to convert the result into an OpenCV image, you do the following:

import cv2
import numpy as np

fileSizeBytes = ffmpeg.stdout.read(6)   # BMP header starts with b'BM' followed by the 4-byte file size
fileSize = 0
for i in range(4):                       # assemble the little-endian file size from bytes 2..5
    fileSize += fileSizeBytes[i + 2] * 256 ** i
bmpData = fileSizeBytes + ffmpeg.stdout.read(fileSize - 6)
image = cv2.imdecode(np.frombuffer(bmpData, dtype=np.uint8), 1)

This will get every single frame of a stream as an OpenCV image.
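
For reference, here is a minimal end-to-end sketch of the same pipeline that also keeps only the most recent frame, which the short version asks for. The background reader thread, the lock, and the latest_frame variable are illustrative additions, not something the post prescribes; the adb and ffmpeg commands mirror the ones above.

import subprocess as sp
import threading
import cv2
import numpy as np

adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
ffmpegCmd = ['ffmpeg', '-i', '-', '-f', 'rawvideo', '-vcodec', 'bmp', '-vf', 'fps=5', '-']

# No universal_newlines: the h264 and BMP data must stay binary.
stream = sp.Popen(adbCmd, stdout=sp.PIPE)
ffmpeg = sp.Popen(ffmpegCmd, stdin=stream.stdout, stdout=sp.PIPE)

latest_frame = None              # most recent decoded frame; older ones are discarded
lock = threading.Lock()

def read_frames():
    # Drain ffmpeg's stdout continuously, decoding one BMP per frame.
    global latest_frame
    while True:
        header = ffmpeg.stdout.read(6)              # b'BM' plus the 4-byte little-endian file size
        if len(header) < 6:
            break                                   # stream ended
        fileSize = int.from_bytes(header[2:6], 'little')
        bmpData = header + ffmpeg.stdout.read(fileSize - 6)
        frame = cv2.imdecode(np.frombuffer(bmpData, dtype=np.uint8), 1)
        with lock:
            latest_frame = frame

threading.Thread(target=read_frames, daemon=True).start()

# Whenever a frame is needed elsewhere in the program:
# with lock:
#     frame = None if latest_frame is None else latest_frame.copy()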

Solution

Use any of these:

ffmpeg -i - -pix_fmt bgr24 -f rawvideo -
ffmpeg -i pipe: -pix_fmt bgr24 -f rawvideo pipe:
ffmpeg -i pipe:0 -pix_fmt bgr24 -f rawvideo pipe:1

  • You didn't provide much info about your input so you may need to add additional input options.

  • You didn't specify your desired output format so I just chose rawvideo. You can see a list of supported output formats (muxers) with ffmpeg -muxers (or ffmpeg -formats if your ffmpeg is outdated). Not all are suitable for piping, such as MP4. A minimal sketch of reading these raw bgr24 frames from Python appears after this list.

  • See FFmpeg Protocols: pipe.
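
As a rough illustration of the rawvideo route mentioned above: each output frame arrives as a flat block of width x height x 3 bgr24 bytes, so the reader must know the resolution up front. The 1280x720 resolution and the '-f h264' input option are assumptions chosen for this sketch, not values taken from the question.

import subprocess as sp
import numpy as np

WIDTH, HEIGHT = 1280, 720            # must match the device's recording resolution (assumed here)
FRAME_BYTES = WIDTH * HEIGHT * 3     # bgr24 = 3 bytes per pixel

adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
# '-f h264' is one example of an extra input option: it declares the piped
# input to be a raw h264 elementary stream instead of relying on probing.
ffmpegCmd = ['ffmpeg', '-f', 'h264', '-i', 'pipe:0',
             '-pix_fmt', 'bgr24', '-f', 'rawvideo', 'pipe:1']

stream = sp.Popen(adbCmd, stdout=sp.PIPE)
ffmpeg = sp.Popen(ffmpegCmd, stdin=stream.stdout, stdout=sp.PIPE)

def read_frame():
    # Read exactly one bgr24 frame and return it as a HEIGHT x WIDTH x 3 array.
    raw = ffmpeg.stdout.read(FRAME_BYTES)
    if len(raw) < FRAME_BYTES:
        return None                  # stream ended
    return np.frombuffer(raw, dtype=np.uint8).reshape((HEIGHT, WIDTH, 3))

Compared with the BMP approach, this skips the per-frame re-encode and the cv2.imdecode call, at the cost of hard-coding the resolution.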
