How to avoid a growing delay with ffmpeg between sound and raw video data?


Problem description

Hi everyone, and thank you for reading!


Here is my problem: I have a program piping raw video frames to standard output. The program uses OpenCV to capture and process the video and outputs the processed frames directly. The loop is synced to the framerate I chose. I'm using ffmpeg to read from standard input, and everything works fine for the video. But now that I've added sound I have a big problem: a growing delay is occurring, and I really need to get rid of it. So here is my idea, but I really need your help:


I have to find a way to include timestamp information in the raw video. For ffmpeg to understand it, it needs to be a known container compatible with raw video. Then I would need to use that container's API in my program and pipe the result to standard output. I really don't know what to use in the jungle of video formats and codecs, and I don't even know how to enable timestamp synchronization in ffmpeg...


If anyone has an idea, I am really interested. For information, here is the command line I use to pipe the raw video:

./myprogram | ffmpeg -y -f alsa -i pulse -ac 2  -f rawvideo -vcodec rawvideo -r 24 -s 640x480 -pix_fmt bgr24 -i - -vcodec libx264 -pix_fmt yuv420p -r 24 -f flv -ar 44100 out.flv;
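The same command, split across lines so the two inputs and the output options are easier to read (this is purely a reformatting of the command above, not a change to it):

```shell
# Input 1: stereo audio captured from PulseAudio through the ALSA input device.
# Input 2: raw BGR24 frames, 640x480 at 24 fps, read from stdin ("-i -").
# Output:  H.264 video and 44.1 kHz audio muxed into an FLV file.
./myprogram | ffmpeg -y \
  -f alsa -i pulse -ac 2 \
  -f rawvideo -vcodec rawvideo -r 24 -s 640x480 -pix_fmt bgr24 -i - \
  -vcodec libx264 -pix_fmt yuv420p -r 24 -ar 44100 -f flv out.flv
```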

Thanks a lot,

Roland

Recommended answer


The easy way out is to process the audio and video in segments, say cutting 30 minutes of video and audio at a time. Since the streams are out of sync, you can control the offset with ffmpeg (see the guides here or here); the nice thing is that you don't need two files (streams), because ffmpeg can work with the SAME file as the source. Once you have figured out the delay, repeat for the next segment, and so on.


Sometimes the audio may be longer than 30 minutes, say 33 minutes. Then I'd use Audacity to squeeze its length back to 30 minutes before merging.
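The segment-and-resync approach above can be sketched with ffmpeg alone. The file names and the 1.5-second offset below are hypothetical; the point is that `-itsoffset` lets you read the SAME file twice and shift only one of its streams, so no second source file is needed:

```shell
# 1) Cut a 30-minute segment from the recording (stream copy, no re-encode).
ffmpeg -i out.flv -ss 00:00:00 -t 00:30:00 -c copy segment1.flv

# 2) If the audio in that segment lags the video by, say, 1.5 seconds,
#    open the same file twice, delay the second copy with -itsoffset,
#    and map the video from copy 0 and the audio from copy 1.
ffmpeg -i segment1.flv -itsoffset -1.5 -i segment1.flv \
  -map 0:v -map 1:a -c copy segment1_fixed.flv
```

Because both steps use `-c copy`, they only remux the streams, so each segment is processed quickly and without generation loss.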

