Buffering while converting stream to frames with ffmpeg


Problem description

I am trying to convert a UDP stream into frames using ffmpeg. I run the following command:

ffmpeg -loglevel debug -strict 2 -re -i "udp://192.168.15.50:3200?fifo_size=1000000&overrun_nonfatal=1" -r 8 -vf scale=432:243 -f image2pipe -vcodec ppm pipe:1

It happens with different stream types, mpeg2video and h264. CPU load for the core processing this specific stream is under 30%; it's a low-quality SD stream with a resolution of 640x576.

It works well most of the time; however, once in a while latency occurs and frames arrive late. I want exactly 8 fps, but sometimes I get fewer frames and sometimes more.

Why does this latency occur, and how can I reduce it?

Update: I tried changing it to:

ffmpeg -loglevel debug -i "udp://192.168.15.50:3200?fifo_size=1000000&overrun_nonfatal=1" -r 8 -preset ultrafast -fflags nobuffer -vf scale=432:243 -f image2pipe -vcodec ppm pipe:1

But I still get the issue. For example, in the ffmpeg log I get:

[2016/02/11 13:32:30] frame= 7477 fps=8.0 q=-0.0 size= 2299638kB time=00:15:34.62 bitrate=20156.4kbits/s dup=7 drop=15867 ^M*** dropping frame 7477 from stream 0 at ts 7475
[2016/02/11 13:32:30] ***dropping frame 7477 from stream 0 at ts 7476
[2016/02/11 13:32:30] ***dropping frame 7478 from stream 0 at ts 7476
[2016/02/11 13:32:32] Last message repeated 1 times
[2016/02/11 13:32:32] frame= 7479 fps=8.0 q=-0.0 size= 2300253kB time=00:15:34.87 bitrate=20156.4kbits/s dup=7 drop=15871 ^M*** dropping frame 7479 from stream 0 at ts 7477

As you can see, during second 31 no frames are output... and the time ffmpeg reports between two frames is 0.25 s.

Answer

The ffmpeg command posted in the question is normally piped into another binary. That binary saves the frames provided by ffmpeg and does some processing on them.
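For context, the data flow looks roughly like this; frame_consumer here is a hypothetical stand-in for that binary, not its real name:

ffmpeg -i "udp://192.168.15.50:3200?fifo_size=1000000&overrun_nonfatal=1" -r 8 -vf scale=432:243 -f image2pipe -vcodec ppm pipe:1 | ./frame_consumer
# frame_consumer reads raw PPM frames from stdin, saves them and processes them further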

In the beginning I didn't use the "fifo_size=1000000&overrun_nonfatal=1" options, and I was getting the following error from ffmpeg:

[udp @ 0x4ceb8a0] Circular buffer overrun. To avoid, increase fifo_size URL option. To survive in such case, use overrun_nonfatal option
udp://192.168.15.50:3200: Input/output error

and then ffmpeg would crash. To avoid it I added "fifo_size=1000000&overrun_nonfatal=1", as ffmpeg suggests.

However, after using those parameters I would get the timeshift described in the question, and sometimes it would also come with artifacts in the frames.

As mentioned, there were no issues with the CPU, so initially we suspected the UDP stream, specifically the UDP buffer size:

https://access.redhat.com/documentation/zh-CN/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

so we changed the UDP buffer size with:

sysctl -w net.core.rmem_max=26214400

and changed the ffmpeg command to "udp://231.20.20.8:2005?buffer_size=26214400".
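Combined, the adjusted invocation looked roughly like this (a sketch; the filter and output options are carried over from the command in the question):

sysctl -w net.core.rmem_max=26214400   # raise the kernel ceiling for socket receive buffers first
ffmpeg -i "udp://231.20.20.8:2005?buffer_size=26214400" -r 8 -vf scale=432:243 -f image2pipe -vcodec ppm pipe:1
# buffer_size asks ffmpeg for a 26214400-byte UDP receive buffer; without the
# sysctl change the kernel would cap the request at the previous rmem_max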

However, this didn't fix the issue. ffmpeg would still get "Circular buffer overrun" and crash. And I couldn't reproduce this circular buffer overrun; it was just happening randomly.

My next thought was the pipe buffer size, since I found the following:

http://blog.dataart.com/linux-pipes-tips-tricks/

The size of the buffer since kernel version 2.6.11 is 65536 bytes (64K) and is equal to the page memory in older kernels. When attempting to read from an empty buffer, the read process is blocked until data appears.
Similarly, if you attempt to write to a full buffer, the recording process will be blocked until the necessary amount of space is available.

http://ffmpeg.gusari.org/viewtopic.php?f=12&t=624 [link now dead]

Poster1: What causes these circular buffer overruns? My assumption is that ffmpeg is reading the input stream into the aforementioned circular buffer, and the code that generates the output stream also reads from that same buffer. The overrun would happen when the code that generates the output doesn't keep up with the rate at which the input is being written to the buffer, right?
Poster2: Looking at the source code it appears that the buffer gets overflowed either by too-fast input or too-slow output (slow CPU?). Your assumption is correct.

So the theory was that our binary doesn't read the pipe fast enough. As a result the pipe gets blocked, ffmpeg cannot write to it, and THAT results in the UDP FIFO buffer overrun (ffmpeg keeps reading UDP into the FIFO, but cannot write from it into our pipe).

I managed to prove this theory by running (in separate terminals):

mkfifo mypipe
ffmpeg -loglevel debug -i "udp://192.168.15.50:3200?fifo_size=1000000&overrun_nonfatal=1" -r 8 -preset ultrafast -fflags nobuffer -vf scale=432:243 -f image2pipe -vcodec ppm pipe:1 > mypipe
cat < mypipe > /dev/null
# run cat for 10 seconds, allowing ffmpeg to start, then pause it with CTRL-Z
# and see ffmpeg crash because it cannot read more of the UDP stream

Next was investigating why our binary, at some point, stops reading the pipe. There seemed to be no reason, because normally it would read the data into memory immediately after something arrives in the pipe.

However, it was also saving frames to the hard drive, and at SOME POINT (sometimes after 12 minutes, sometimes after 15 hours) disk operations would slow down due to read/write load (the drive was bcache, an SSD and HDD hybrid using the SSD as a cache). I caught this fact by chance while removing a few million files from this drive in parallel for debugging.

So, writing files to a busy hard drive would temporarily block our binary from reading the input pipe.
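One way to catch this is to watch per-device I/O latency while the pipeline runs; a minimal sketch, assuming the sysstat package is installed:

iostat -x 1
# watch the await and %util columns for the drive the frames are written to;
# sustained spikes there coincide with the binary stalling on disk writes
# and therefore not draining the pipe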

The reason for the UDP circular buffer overrun and the eventual timeshift was the HDD, and the theoretical solution is an SSD.

This investigation took about 3 weeks, so I'm posting all of it in the hope that it will, at least in part, help someone in the future.

Update:

Later on I also detected another bottleneck causing the same issue (replacing the HDD was not enough): a TCP socket buffer overflow caused by Postgres insertions on the backend.

The whole pipeline looks like this:

udp_videostream -> ffmpeg -> linux_pipe -> our_client_side_binary -> tcp -> our_server_side_binary -> postgres

Postgres queries were sometimes slow, which caused our server to read the TCP socket more slowly than our_binary was pushing data into it. As a result, the TCP socket would fill up and block (its maximum size was 4 MB), the client would then block on its input pipe, and ffmpeg would in turn crash with the circular buffer overrun error.
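For completeness, the kernel's TCP buffer limits can be inspected and raised the same way as the UDP ones above; a sketch with illustrative values (a larger buffer only absorbs short Postgres stalls, it does not fix slow queries):

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # current min/default/max per-socket TCP buffers
sysctl -w net.core.rmem_max=8388608          # allow larger receive buffers on the server side
sysctl -w net.core.wmem_max=8388608          # allow larger send buffers on the client side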

