Low-latency video streaming from c++ opencv application in WINDOWS


Question

There are quite a lot of questions on the topic, but most of them involve the use of undesired protocols - HTML5, WebRTC, etc.

Basically, the problem can be formulated as follows: how do I stream my own cv::Mat images over either RTSP or MJPEG [AFAIK it is better for realtime streaming] in Windows? Nearly everything I can find relies on the OS being Linux, and is just not applicable to this project.
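
For context, a common Windows-friendly workaround is to pipe raw frames from the OpenCV application into an ffmpeg child process over stdin. Below is a minimal sketch of that setup, not a definitive implementation; the 640x480/30fps source, camera index 0, and the localhost RTP target are placeholder assumptions, and ffmpeg must be on PATH:

#include <cstdio>
#include <opencv2/opencv.hpp>

int main() {
    // Launch ffmpeg as a child process that reads raw BGR24 frames from
    // stdin ("-i -") and pushes an H.264 RTP stream. Size, rate, and
    // address are illustrative placeholders.
    const char* cmd =
        "ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - "
        "-vcodec libx264 -preset ultrafast -tune zerolatency "
        "-f rtp rtp://127.0.0.1:1234";
    FILE* ff = _popen(cmd, "wb");   // Windows binary-mode pipe
    if (!ff) return 1;

    cv::VideoCapture cap(0);        // any cv::Mat source works here
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::resize(frame, frame, cv::Size(640, 480));  // must match -s above
        std::fwrite(frame.data, 1, frame.total() * frame.elemSize(), ff);
        std::fflush(ff);            // do not let frames sit in the pipe
    }
    _pclose(ff);
    return 0;
}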

FFMPEG's pipelining sort of worked, but the delay was about 10 seconds. I could take it down to 3-4 seconds using some -incomprehensibly-long-parameter-list-that-the-ffmpeg-team-loves-so-much, but that is not enough, because the project under consideration is a surveillance app with an active user controlling the camera, so I need to be as close to realtime as possible.

Another issue is that the solution should not eat up all of the cores, because they are already overloaded with object tracking algorithms.

Thanks in advance for your help!

EDIT: ffmpeg -re -an -f mjpeg -i http://..addr.../video.mjpg -vcodec libx264 -tune zerolatency -f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp - this is the command I used to retranslate the stream directly without any preprocessing, and it yielded about 4 seconds of delay on localhost.
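
For reference, a typical lower-latency variant of that command, built only from standard ffmpeg options (the right combination still depends on the build and the source), looks like: ffmpeg -fflags nobuffer -probesize 32 -analyzeduration 0 -re -an -f mjpeg -i http://..addr.../video.mjpg -vcodec libx264 -preset ultrafast -tune zerolatency -f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp. Here -fflags nobuffer together with a tiny probesize/analyzeduration cuts input-side buffering, while -preset ultrafast keeps x264 from queuing lookahead frames.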

Answer

First you have to find out where your latency is coming from.

There are 4 basic sources of latency (a quick way to measure the end-to-end total is sketched right after this list):

  1. Video capture
  2. Encoding
  3. Transmission
  4. Decoding (player)
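
A low-tech way to get that end-to-end figure is to burn the sender's wall clock into each frame before encoding and compare it with a clock next to the player window. A minimal sketch of the idea (the overlay position, font, and color are arbitrary choices):

#include <chrono>
#include <string>
#include <opencv2/opencv.hpp>

// Draw the sender's wall-clock time (ms since epoch) onto the frame.
// Subtracting the displayed value from the receiver's clock gives the
// per-frame latency of everything downstream of this call.
void stampFrame(cv::Mat& frame) {
    using namespace std::chrono;
    auto ms = duration_cast<milliseconds>(
                  system_clock::now().time_since_epoch()).count();
    cv::putText(frame, std::to_string(ms), cv::Point(10, 30),
                cv::FONT_HERSHEY_SIMPLEX, 0.8, cv::Scalar(0, 255, 0), 2);
}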

Since you are measuring from localhost, we could consider transmission as 0 seconds. If your video resolution and frame rate are not gargantuan, decoding times would also be close to zero.

We should now focus on the first two items: capture and encoding.
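
To split the remaining delay between those two stages, plain std::chrono timers around each call are enough. In the sketch below, grabFrame() and encodeFrame() are hypothetical stand-ins for whatever capture and encode calls the application actually makes:

#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for the application's real capture/encode calls.
void grabFrame()   { /* e.g. cap.read(frame) */ }
void encodeFrame() { /* e.g. hand the frame to the encoder */ }

int main() {
    using clock = std::chrono::steady_clock;

    auto t0 = clock::now();
    grabFrame();        // stage 1: capture
    auto t1 = clock::now();
    encodeFrame();      // stage 2: encode
    auto t2 = clock::now();

    auto ms = [](clock::time_point a, clock::time_point b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("capture: %lld ms, encode: %lld ms\n",
                (long long)ms(t0, t1), (long long)ms(t1, t2));
    return 0;
}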

The "problem" here is that libx264 is a software encoder. So, it uses CPU power AND needs the data in main memory, not in the GPU memory where the image is first created.

So, when FFMPEG captures a frame, the data has to pass through the layers of the OS on its way from video memory to main memory.

Unfortunately, you won't get any results better than 2 or 3 seconds if you use libx264.

I suggest you take a look at the Nvidia Capture solution: https://developer.nvidia.com/capture-sdk

If you are using a capable GPU, you can then capture and encode each frame from the backbuffer or intra-frame buffer directly on the GPU. You can then use ffmpeg to send it as you please.
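
If rebuilding around the Capture SDK is too invasive, a lighter-weight step in the same direction is to keep the ffmpeg pipeline but swap -vcodec libx264 for -c:v h264_nvenc (available only when ffmpeg is built with NVENC support). The frames still cross from video memory to main memory, but the encoding itself moves off the CPU cores that the object tracking algorithms need.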

