Multi-party WebRTC without SFU

Question

Based on this article, when implementing a WebRTC solution without a server (by which I assume it means without an SFU), the bottleneck is that it only works for 4-6 participants.

Is there a solution that can work around this? For example, I want to use Firebase as the only backend, mainly for signaling, with no SFU. What is the general implementation strategy to reach at least 25-50 WebRTC participants?
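For reference, a minimal sketch of what that Firebase-only signaling could look like for the offer side of a single peer pair, using the modular Firestore Web SDK. The "rooms" collection, the offer/answer fields, and the "callerCandidates" subcollection are invented names for illustration, not anything from the question:

```typescript
// A sketch of Firestore-based WebRTC signaling, assuming a browser context.
import { initializeApp } from "firebase/app";
import {
  getFirestore, doc, collection, setDoc, addDoc, onSnapshot,
} from "firebase/firestore";

const app = initializeApp({ /* your Firebase project config */ });
const db = getFirestore(app);

async function signalOffer(roomId: string, pc: RTCPeerConnection) {
  const roomRef = doc(db, "rooms", roomId);

  // Publish local ICE candidates as they are gathered.
  pc.onicecandidate = (e) => {
    if (e.candidate) {
      addDoc(collection(roomRef, "callerCandidates"), e.candidate.toJSON());
    }
  };

  // Write the offer, then wait for the callee to write an answer.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  await setDoc(roomRef, { offer: { type: offer.type, sdp: offer.sdp } });

  onSnapshot(roomRef, async (snap) => {
    const data = snap.data();
    if (data?.answer && !pc.currentRemoteDescription) {
      await pc.setRemoteDescription(new RTCSessionDescription(data.answer));
    }
  });
  // The callee side mirrors this (answer + a "calleeCandidates" subcollection),
  // and a full mesh repeats the exchange once per pair of participants.
}
```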

Update: this GitHub project makes a different claim. It states that "A full mesh is great for up to ~100 connections".

Answer

Your real bottleneck with MESH is that each RTCPeerConnection does its own video encoding in the browser.

The p2p concept naturally includes the requirement that both peers adjust their encoding quality to network conditions. So, when your browser sends streams to peer X (good download speed) and peer Y (bad download speed), the encodings for X and Y will differ: Y will receive a lower framerate and bitrate than X.

Sounds reasonable, right? But, unfortunately, it mandates a separate video encoding for each peer connection.
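As a rough illustration of that constraint (addPeer and capVideoBitrate are illustrative helper names, not a standard API): handing the same captured track to several RTCPeerConnections gives each connection its own RTCRtpSender, and therefore its own encoder, while RTCRtpSender.setParameters() is the per-connection knob that lets each encoding adapt independently:

```typescript
// A sketch, assuming a browser context: one camera track shared across a
// mesh. Every addTrack() call creates a separate RTCRtpSender, and each
// sender runs its own encoder instance -- N peers means N encodings.
function addPeer(
  track: MediaStreamTrack,
  stream: MediaStream,
  peers: RTCPeerConnection[],
): RTCPeerConnection {
  const pc = new RTCPeerConnection();
  pc.addTrack(track, stream); // new sender, new encoder for this peer
  peers.push(pc);
  return pc;
}

// Per-connection bitrate cap: because each sender encodes separately,
// a peer on a bad link can be throttled without affecting the others.
async function capVideoBitrate(pc: RTCPeerConnection, maxBitrate: number) {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;
  const params = sender.getParameters();
  if (!params.encodings.length) params.encodings = [{}];
  params.encodings[0].maxBitrate = maxBitrate; // bits per second
  await sender.setParameters(params);
}
```

With this in place, something like capVideoBitrate(pcForPeerY, 250_000) throttles only peer Y's encoding, which is exactly the per-peer divergence described above.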

If multiple peer connections could re-use the same video encoding, MESH would be much more viable. But Google did not provide that option in the browser, and simulcast requires an SFU, so it does not apply to your case.

So, how many concurrent video encodings can a browser perform on a typical machine for 720p 30 fps video? 5-6, not more. For 640x480 at 15 fps? Maybe 20 encodings.
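To make that concrete, a sketch of capturing at the cheaper settings cited above (the constraint values mirror the 640x480 / 15 fps figures; how many encodings a given machine actually sustains remains hardware-dependent):

```typescript
// Capture at 640x480 / 15 fps so each of the N mesh encoders is cheaper
// than a 720p/30 encode. Values mirror the figures quoted in the answer.
async function getLowCostVideoTrack(): Promise<MediaStreamTrack> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 640 },
      height: { ideal: 480 },
      frameRate: { max: 15 },
    },
    audio: false, // audio encoding is cheap; request it separately if needed
  });
  return stream.getVideoTracks()[0];
}
```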

In my opinion, the encoding layer and the networking layer could have been separated in the WebRTC design, and getUserMedia could even have been extended to a getEncodedUserMedia, so that you could send the same encoded content to multiple peers.

That is the real, practical reason people use an SFU for multi-peer WebRTC.
