Is it possible to broadcast audio along with screen sharing in WebRTC?


Question

Is it possible to broadcast audio along with screen sharing in WebRTC? A simple call to getUserMedia with audio: true fails with a permission-denied error. Is there any workaround that could be used to broadcast audio as well? Will audio be implemented alongside screen sharing?

Thanks.

Answer

See this demo: multiple streams are captured and attached to a single peer connection. AFAIK, audio along with chromeMediaSource:screen is "still" not permitted.

Now you can capture audio+screen with a single getUserMedia request on both Firefox and Chrome.
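For reference, a combined request of that era looked roughly like the constraints object below (mediaSource was a Firefox-specific constraint at the time; this is a sketch, not guaranteed to match current browser APIs):

```javascript
// microphone audio plus screen video in a single getUserMedia request
// (Firefox-era constraint style; a sketch, not the current spec)
var combinedConstraints = {
  audio: true,
  video: { mediaSource: 'screen' }
};
// navigator.mozGetUserMedia(combinedConstraints, onSuccess, onError);
```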

However, Chrome only supports audio+tab, i.e. you can NOT capture the full screen along with audio.

Audio+tab means any Chrome tab along with the microphone.

You can capture both audio and screen streams by making two parallel (unique) getUserMedia requests.

You can then use the addTrack method to add the audio track into the screen stream:

// two parallel (unique) getUserMedia requests: one for microphone
// audio, one for the screen (captureUsingGetUserMedia is a
// placeholder for your own getUserMedia wrapper)
var audioStream = captureUsingGetUserMedia({ audio: true });
var screenStream = captureUsingGetUserMedia({ video: { mandatory: { chromeMediaSource: 'screen' } } });

var audioTrack = audioStream.getAudioTracks()[0];

// add the audio track into the screen stream
screenStream.addTrack(audioTrack);

Now screenStream has both audio and video tracks.
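The merge step above can be sketched as a small helper; it assumes only MediaStream-like objects that expose getAudioTracks and addTrack (the helper name is illustrative, not a browser API):

```javascript
// copy the first audio track of audioStream into screenStream;
// works on any MediaStream-like object with these two methods
function addAudioToScreenStream(audioStream, screenStream) {
  var audioTrack = audioStream.getAudioTracks()[0];
  if (audioTrack) {
    screenStream.addTrack(audioTrack);
  }
  return screenStream;
}
```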

// attach the combined stream and create the SDP offer
nativeRTCPeerConnection.addStream( screenStream );
nativeRTCPeerConnection.createOffer(success, failure, options);
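Note that addStream was later deprecated in favor of per-track addTrack; a minimal sketch of attaching every track of the combined stream, assuming a peer-connection-like object (attachStreamTracks is an illustrative helper, not part of WebRTC):

```javascript
// attach each track of a stream to a peer connection via addTrack,
// the modern replacement for the deprecated addStream
function attachStreamTracks(peerConnection, stream) {
  stream.getTracks().forEach(function (track) {
    peerConnection.addTrack(track, stream);
  });
}
```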
