Feed raw YUV frames to ffmpeg with timestamps


Problem Description



I'm trying to pipe raw audio and video data to ffmpeg and push a realtime stream over the RTSP protocol on Android. The command line looks like this:

"ffmpeg -re -f image2pipe -vcodec mjpeg -i "+vpipepath
+ " -f s16le -acodec pcm_s16le -ar 8000 -ac 1 -i - "
+ " -vcodec libx264 "
+ " -preset slow -pix_fmt yuv420p -crf 30 -s 160x120 -r 6 -tune film "
+ " -g 6 -keyint_min 6 -bf 16 -b_strategy 1 "
+ " -acodec libopus -ac 1 -ar 48000 -b:a 80k -vbr on -frame_duration 20 "
+ " -compression_level 10 -application voip -packet_loss 20 "
+ " -f rtsp rtsp://remote-rtsp-server/live.sdp";

I'm using libx264 as the video codec and libopus as the audio codec. The YUV frames are fed through a named pipe created by mkfifo, and the PCM frames are fed through stdin.
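For reference, the plumbing looks roughly like this (a minimal sketch, not my exact code: Os.mkfifo needs API 21+, the class and method names are illustrative, and only vpipepath and the command string come from the command line above):

import android.system.ErrnoException;
import android.system.Os;
import android.system.OsConstants;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FfmpegFeeder {
    private OutputStream audioOut;  // ffmpeg's stdin, the "-i -" pcm_s16le input
    private OutputStream videoOut;  // the named pipe, the image2pipe/mjpeg input

    public void start(String vpipepath, String ffmpegCmd)
            throws ErrnoException, IOException {
        // Create the fifo before launching ffmpeg so it can open it for reading.
        Os.mkfifo(vpipepath, OsConstants.S_IRUSR | OsConstants.S_IWUSR);

        Process ffmpeg = Runtime.getRuntime().exec(ffmpegCmd);
        audioOut = ffmpeg.getOutputStream();
        // Opening the fifo for writing blocks until ffmpeg opens the other end.
        videoOut = new FileOutputStream(vpipepath);
    }

    // Called from the audio capture thread with raw s16le samples.
    public void writeAudio(byte[] pcm) throws IOException {
        audioOut.write(pcm);
        audioOut.flush();
    }

    // Called from the video capture thread with one MJPEG-encoded frame.
    public void writeVideo(byte[] jpegFrame) throws IOException {
        videoOut.write(jpegFrame);
        videoOut.flush();
    }
}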

It works, and I can fetch and play the stream with ffplay. But there is a severe audio/video sync issue: audio is 5~10 seconds later than video.

I guess the problem is that neither the YUV frames nor the PCM frames carry any timestamp. FFmpeg adds timestamps when it is fed the data, but the audio and video capture threads can't run at exactly the same rate. Is there a way to add a timestamp to each raw data frame (something like PTS/DTS)?

The approach I used is from this thread: Android Camera Capture using FFmpeg

Solution

FFmpeg adds timestamps the moment it retrieves the samples from the pipe, so all you need to do is feed them in sync. The likely problem in your case is that you already have an audio buffer, yet you are offering video frames in real time. That makes the audio late. You must buffer video frames for the same amount of time as you are buffering audio. If you have no control over your audio buffer size, try to keep it as small as possible, monitor its size, and adjust your video buffering accordingly.
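As a rough illustration of that buffering idea (the class is hypothetical, and you would derive the delay from your own audio buffer monitoring), the video side could be held back like this:

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Sketch: hold each video frame back for the same delay the audio path
// already has, so both streams reach ffmpeg in sync.
// For s16le audio: audioDelayMs ~= bufferedBytes * 1000 / (sampleRate * channels * 2)
public class VideoDelayBuffer {
    private final DelayQueue<DelayedFrame> queue = new DelayQueue<>();
    private volatile long audioDelayMs;  // updated as you monitor the audio buffer

    public void setAudioDelayMs(long ms) { audioDelayMs = ms; }

    // Capture thread: enqueue the frame instead of writing it immediately.
    public void offer(byte[] frame) {
        queue.put(new DelayedFrame(frame, System.currentTimeMillis() + audioDelayMs));
    }

    // Writer thread: blocks until the oldest frame has aged enough,
    // then the frame can be written to the video fifo.
    public byte[] take() throws InterruptedException {
        return queue.take().data;
    }

    private static final class DelayedFrame implements Delayed {
        final byte[] data;
        final long releaseAtMs;

        DelayedFrame(byte[] data, long releaseAtMs) {
            this.data = data;
            this.releaseAtMs = releaseAtMs;
        }

        @Override public long getDelay(TimeUnit unit) {
            return unit.convert(releaseAtMs - System.currentTimeMillis(),
                    TimeUnit.MILLISECONDS);
        }

        @Override public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                    other.getDelay(TimeUnit.MILLISECONDS));
        }
    }
}

A dedicated writer thread that drains this buffer into the fifo gives the video path the same latency as the audio path, which is exactly the alignment described above.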
