How to fetch both live video frame and timestamp from ffmpeg to python on Windows


Problem description


Searching for an alternative, because OpenCV would not provide timestamps for a live camera stream (on Windows), which my computer vision algorithm requires, I found ffmpeg and this excellent article: https://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/ The solution uses ffmpeg, accessing its standard output (stdout) stream. I extended it to read the standard error (stderr) stream as well.


Working up the Python code on Windows, I received the video frames from ffmpeg's stdout, but stderr froze after delivering the showinfo video filter details (timestamps) for the first frame.


I recollect seeing somewhere on an ffmpeg forum that video filters like showinfo are bypassed when the output is redirected. Is this why the following code does not work as expected?


Expected: It should write video frames to disk as well as print timestamp details.
Actual: It writes video files but does not get the timestamp (showinfo) details.

Here is the code I tried:

import subprocess as sp
import numpy
import cv2

command = [ 'ffmpeg',
            '-i', 'e:/sample.wmv',
            '-pix_fmt', 'rgb24',
            '-vcodec', 'rawvideo',
            '-vf', 'showinfo', # video filter - showinfo will provide frame timestamps
            '-an', '-sn', # -an, -sn disable audio and subtitle processing respectively
            '-f', 'image2pipe', '-'] # we need to output to a pipe

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE) # TODO someone on the ffmpeg forum said video filters (e.g. showinfo) are bypassed when stdout is redirected to pipes???

for i in range(10):
    raw_image = pipe.stdout.read(1280*720*3)
    img_info = pipe.stderr.read(244) # 244 characters is the current output of the showinfo video filter
    print("showinfo output", img_info)
    image1 = numpy.frombuffer(raw_image, dtype='uint8')
    image2 = image1.reshape((720, 1280, 3))

    # write the video frame to file just to verify
    videoFrameName = 'Video_Frame{0}.png'.format(i)
    cv2.imwrite(videoFrameName, image2)

    # flush the pipe buffers
    pipe.stdout.flush()
    pipe.stderr.flush()


So how can I still get the frame timestamps from ffmpeg into the Python code, so that they can be used in my computer vision algorithm?

Answer


Redirecting stderr works in Python. So instead of this:

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)

do this:

pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.STDOUT)
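One caveat with merging the streams: with rawvideo going to stdout, the showinfo text lines end up interleaved with the raw frame bytes, and the reader must split them apart. Because showinfo lines vary in length from frame to frame, fixed-size reads such as read(244) drift out of alignment after the first frame; line-oriented reading is safer for the text portion. A minimal sketch of the difference, using io.BytesIO to stand in for the pipe (the sample lines are illustrative, not real ffmpeg output):

```python
import io

# two showinfo-style lines of different lengths, standing in for ffmpeg's stderr
fake_pipe = io.BytesIO(
    b"[Parsed_showinfo_0 @ 0x1] n:0 pts:0 pts_time:0 fmt:rgb24\n"
    b"[Parsed_showinfo_0 @ 0x1] n:1 pts:1001 pts_time:0.03337 fmt:rgb24\n"
)

# line-oriented reading stays aligned regardless of each line's length
lines = [raw.decode().rstrip() for raw in fake_pipe]
for line in lines:
    print(line)
```

A fixed-size read would have returned the first line plus a chunk of the second, and every subsequent read would start mid-line.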


We can avoid the redirection altogether by reading both of ffmpeg's standard streams (stdout and stderr) asynchronously. This avoids any mixing of video frames and timestamp text, and thus the error-prone separation. Modifying the original code to use the threading module looks like this:

# Python script to read video frames and timestamps using ffmpeg
import subprocess as sp
import threading

import numpy
import cv2

ffmpeg_command = [ 'ffmpeg',
                   '-nostats', # do not print extra statistics
                    #'-debug_ts', # -debug_ts could provide timestamps avoiding the showinfo filter (-vcodec copy). Need to check by providing expected fps TODO
                    '-r', '30', # force the input frame rate to 30 frames per second
                    '-i', 'e:/sample.wmv',
                    '-an', '-sn', # -an, -sn disable audio and subtitle processing respectively
                    '-pix_fmt', 'rgb24',
                    '-vcodec', 'rawvideo',
                    #'-vcodec', 'copy', # very fast!, direct copy - Note: no filters, no decode/encode, no quality loss
                    #'-vframes', '20', # process n video frames only. For debugging
                    '-vf', 'showinfo', # showinfo video filter provides frame timestamps as pts_time
                    '-f', 'image2pipe', 'pipe:1' ] # outputs to the stdout pipe. can also use '-' which is redirected to the pipe


# separate method to read images from stdout asynchronously
def AppendProcStdout(proc, nbytes, AppendList):
    while proc.poll() is None: # continue while the process is alive
        AppendList.append(proc.stdout.read(nbytes)) # read one image worth of bytes at a time

# separate method to read image info from stderr asynchronously
def AppendProcStderr(proc, AppendList):
    while proc.poll() is None: # continue while the process is alive
        line = proc.stderr.readline() # blocks until a full line (or EOF) is available
        if line:
            AppendList.append(line.decode('utf-8', errors='replace'))


if __name__ == '__main__':
    # run the ffmpeg command
    pipe = sp.Popen(ffmpeg_command, stdout=sp.PIPE, stderr=sp.PIPE)

    # 2 threads to talk with the ffmpeg stdout and stderr pipes
    framesList = []
    frameDetailsList = []
    appendFramesThread = threading.Thread(group=None, target=AppendProcStdout, name='FramesThread', args=(pipe, 1280*720*3, framesList), kwargs=None) # assuming an rgb video frame of size 1280 x 720
    appendInfoThread = threading.Thread(group=None, target=AppendProcStderr, name='InfoThread', args=(pipe, frameDetailsList), kwargs=None)

    # start the threads to capture ffmpeg frames and info
    appendFramesThread.start()
    appendInfoThread.start()

    # wait a few seconds and close - simulating cancel
    import time; time.sleep(2)
    pipe.terminate()

    # wait for the threads to finish
    appendFramesThread.join()
    appendInfoThread.join()

    # save one image per 30 frames to disk
    savedList = []
    for cnt, raw_image in enumerate(framesList):
        if cnt % 30 != 0: continue
        image1 = numpy.frombuffer(raw_image, dtype='uint8')
        image2 = image1.reshape((720, 1280, 3)) # assuming an rgb image of size 1280 x 720
        # write the video frame to file just to verify
        videoFrameName = 'video_frame{0}.png'.format(cnt)
        cv2.imwrite(videoFrameName, image2)
        savedList.append('{} {}'.format(videoFrameName, image2.shape))

    print('### Results ###')
    print('Images captured: ({}) \nImages saved to disk: {}\n'.format(len(framesList), savedList)) # framesList contains all the video frames got from ffmpeg
    print('Images info captured: \n' + ''.join(frameDetailsList)) # this contains all the timestamp details got from the ffmpeg showinfo video filter, plus some initial noise text which can easily be removed while parsing
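As noted, frameDetailsList ends up holding the raw showinfo log text together with ffmpeg's banner noise. A small filter can reduce it to (frame index, pts_time) pairs, ready to line up against framesList. A minimal sketch, assuming the usual showinfo line format (field spacing varies between ffmpeg versions; the sample lines below are illustrative):

```python
import re

def extract_timestamps(stderr_lines):
    """Keep only showinfo frame lines, returning a list of (n, pts_time) pairs."""
    pairs = []
    for line in stderr_lines:
        m = re.search(r'n:\s*(\d+).*?pts_time:\s*([\d.]+)', line)
        if m:  # skip banner / configuration noise lines
            pairs.append((int(m.group(1)), float(m.group(2))))
    return pairs

noisy_log = [
    "ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers",
    "[Parsed_showinfo_0 @ 0x55d] n:   0 pts:      0 pts_time:0       fmt:rgb24",
    "[Parsed_showinfo_0 @ 0x55d] n:   1 pts:   1001 pts_time:0.03337 fmt:rgb24",
]
print(extract_timestamps(noisy_log))  # [(0, 0.0), (1, 0.03337)]
```

Each pts_time is the frame's presentation timestamp in seconds, so the i-th pair can be matched to the i-th captured frame.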

