MediaRecorder switch video tracks


Problem description



I am using the MediaRecorder API to record videos in a web application. The application has an option to switch between the camera and the screen. I am using a canvas to augment the stream being recorded. The logic involves capturing a stream from the camera and redirecting it to a video element. That video is then rendered on a canvas, and the stream from the canvas is passed to MediaRecorder. What I noticed is that switching from screen to video (and vice versa) works fine as long as the user doesn't switch away from or minimize the Chrome window. The canvas rendering uses requestAnimationFrame, which freezes after the tab loses focus.
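Roughly, the setup looks like this (a simplified sketch; all names and dimensions are illustrative, not the application's actual code):

```javascript
// Sketch of the described pipeline: camera -> <video> -> canvas -> MediaRecorder.
async function startCanvasRecording() {
  // 1. capture the camera and feed it into a detached <video> element
  const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
  const videoEl = document.createElement('video');
  videoEl.srcObject = camStream;
  await videoEl.play();

  // 2. paint every frame onto a canvas
  const canvas = document.createElement('canvas');
  canvas.width = 1280;
  canvas.height = 720;
  const ctx = canvas.getContext('2d');
  (function draw() {
    ctx.drawImage(videoEl, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw); // paused by the browser once the tab is hidden
  })();

  // 3. record the canvas stream
  const rec = new MediaRecorder(canvas.captureStream(30));
  rec.start();
  return rec;
}
```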

Is there any way to instruct Chrome not to pause the execution of requestAnimationFrame? Is there an alternate way to switch streams without impacting the MediaRecorder recording?

Update: after reading through the documentation, tabs that play audio or hold an active WebSocket connection are not throttled, neither of which we are doing at the moment. That might serve as a workaround, but I'm hoping for an alternative solution from the community. (setTimeout and setInterval are throttled too aggressively, so I'm not using them; they also hurt rendering quality.)

Update 2: I was able to fix this problem using a Worker. Instead of driving requestAnimationFrame from the main UI thread, the worker invokes the timing API and notifies the main thread via postMessage. Once the UI thread finishes rendering, it sends a message back to the worker. There is also a delta-period calculation to throttle the flood of messages from the worker.
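A hedged sketch of that worker-driven loop (the names, the inline worker source, and the fps target are all illustrative, not the actual implementation):

```javascript
// Pure helper implementing the "delta period" throttle described above:
// only lets a tick through when at least frameMs has elapsed since the last one.
function makeFrameGate(frameMs) {
  let last = -Infinity;
  return (now) => {
    if (now - last < frameMs) return false;
    last = now;
    return true;
  };
}

// Browser wiring (sketch): the worker ticks on setInterval, which background
// tabs throttle far less than main-thread requestAnimationFrame; the main
// thread draws one frame per 'render' message, then acks back with 'done'.
function startWorkerLoop(draw, fps = 30) {
  const src = `
    let busy = false;
    onmessage = () => { busy = false; };               // 'done' ack from the page
    setInterval(() => {
      if (!busy) { busy = true; postMessage('render'); }
    }, ${Math.round(1000 / fps)});
  `;
  const worker = new Worker(URL.createObjectURL(new Blob([src])));
  const gate = makeFrameGate(1000 / fps);
  worker.onmessage = () => {
    if (gate(performance.now())) draw(); // e.g. ctx.drawImage(videoEl, 0, 0)
    worker.postMessage('done');
  };
  return worker;
}
```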

Solution

There is an ongoing proposal to add a .replaceTrack() method to the MediaRecorder API, but for the time being, the spec still reads:

If at any point, a track is added to or removed from stream’s track set, the UA MUST immediately stop gathering data, discard any data that it has gathered [...]

And that's what is implemented.
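In other words, the naive in-place swap cannot work. A minimal illustration of what the quoted clause forbids (a sketch, assuming `stream` is the MediaStream being recorded):

```javascript
// Swapping tracks directly on the recorded stream triggers the spec
// behaviour quoted above: the UA stops gathering data and discards it.
function naiveSwitch(stream, oldTrack, newTrack) {
  stream.removeTrack(oldTrack); // any attached MediaRecorder stops here
  stream.addTrack(newTrack);    // too late, the recording is already lost
}
```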


So we still have to rely on hacks to do this ourselves...

The best one is probably to create a local RTC connection and record the receiving end.

// creates a mixable stream
async function mixableStream( initial_track ) {
  
  const source_stream = new MediaStream( [] );
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
    pc1.onicecandidate = (evt) => pc2.addIceCandidate( evt.candidate );
    pc2.onicecandidate = (evt) => pc1.addIceCandidate( evt.candidate );

  const wait_for_stream = waitForEvent( pc2, 'track')
    .then( evt => new MediaStream( [ evt.track ] ) );

    pc1.addTrack( initial_track, source_stream );
  
  await waitForEvent( pc1, 'negotiationneeded' );
  try {
    await pc1.setLocalDescription( await pc1.createOffer() );
    await pc2.setRemoteDescription( pc1.localDescription );
    await pc2.setLocalDescription( await pc2.createAnswer() );
    await pc1.setRemoteDescription( pc2.localDescription );
  } catch ( err ) {
    console.error( err );
  }
  
  return {
    stream: await wait_for_stream,
    async replaceTrack( new_track ) {
      const sender = pc1.getSenders().find( ( { track } ) => track.kind == new_track.kind );
      return sender && sender.replaceTrack( new_track ) ||
        Promise.reject( "no such track" );
    }
  }  
}


{ // remap unstable FF version
  const proto = HTMLMediaElement.prototype;
  if( !proto.captureStream ) { proto.captureStream = proto.mozCaptureStream; }
}

waitForEvent( document.getElementById( 'starter' ), 'click' )
  .then( (evt) => evt.target.parentNode.remove() )
  .then( (async() => {

  const urls = [
    "2/22/Volcano_Lava_Sample.webm",
    "a/a4/BBH_gravitational_lensing_of_gw150914.webm"
  ].map( (suffix) => "https://upload.wikimedia.org/wikipedia/commons/" + suffix );
  
  const switcher_btn = document.getElementById( 'switcher' );
  const stop_btn =     document.getElementById( 'stopper' );
  const video_out =    document.getElementById( 'out' );
  
  let current = 0;
  
  // see below for 'getVideoTracks'
  const video_tracks = await Promise.all( urls.map( (url, index) =>  getVideoTracks( url ) ) );
  
  const mixable_stream = await mixableStream( video_tracks[ current ].track );

  switcher_btn.onclick = async (evt) => {

    current = +!current;
    await mixable_stream.replaceTrack( video_tracks[ current ].track );
    
  };

  // final recording part below

  // only for demo, so we can see what happens now
  video_out.srcObject = mixable_stream.stream;

  const rec = new MediaRecorder( mixable_stream.stream );
  const chunks = [];

  rec.ondataavailable = (evt) => chunks.push( evt.data );
  rec.onerror = console.log;
  rec.onstop = (evt) => {

    const final_file = new Blob( chunks );
    video_tracks.forEach( (v) => v.stop() ); // each entry is a { track, stop } wrapper
    // only for demo, since we did set its srcObject
    video_out.srcObject = null;
    video_out.src = URL.createObjectURL( final_file );
    switcher_btn.remove();
    stop_btn.remove();

        const anchor = document.createElement( 'a' );
    anchor.download = 'file.webm';
    anchor.textContent = 'download';
        anchor.href = video_out.src;
    document.body.prepend( anchor );
    
  };

  stop_btn.onclick = (evt) => rec.stop();

  rec.start();
      
}))
.catch( console.error )

// some helpers below



// returns a playing <video> element loaded from the given url
function makeVid( url ) {

  const vid = document.createElement('video');
  vid.crossOrigin = 'anonymous';
  vid.loop = true;
  vid.muted = true;
  vid.src = url;
  return vid.play()
    .then( (_) => vid );
  
}

/* Captures the video track of the given url
** @method stop() :: pauses the linked <video>
** @property track :: the video track
*/
async function getVideoTracks( url ) {
  const player = await makeVid( url );
  const track = player.captureStream().getVideoTracks()[ 0 ];
  
  return {
    track,
    stop() { player.pause(); }
  };
}

// Promisifies EventTarget.addEventListener
function waitForEvent( target, type ) {
  return new Promise( (res) => target.addEventListener( type, res, { once: true } ) );
}

video { max-height: 100vh; max-width: 100vw; vertical-align: top; }
.overlay {
  background: #ded;
  position: fixed;
  z-index: 999;
  height: 100vh;
  width: 100vw;
  top: 0;
  left: 0;
  display: flex;
  align-items: center;
  justify-content: center;
}

<div class="overlay">
  <button id="starter">start demo</button>
</div>
<button id="switcher">switch source</button>
<button id="stopper">stop recording</button> 
<video id="out" muted controls autoplay></video>


Otherwise you can still go the canvas way, using a Web Audio timer I made for when the page is blurred, even though this won't work in Firefox, since they internally hook into rAF to push new frames into the recorder...
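A hedged sketch of such a Web Audio based timer (illustrative, not the exact library): an AudioContext keeps running while the tab is blurred, so playing short silent buffers and ticking on their `ended` events is not throttled the way setTimeout or rAF are.

```javascript
// Fires `callback` roughly every intervalMs, even in a blurred tab,
// by looping a silent AudioBuffer and ticking on its `ended` event.
function createAudioTimer(callback, intervalMs) {
  const ctx = new AudioContext();
  const frames = Math.max(1, Math.round(ctx.sampleRate * intervalMs / 1000));
  const silence = ctx.createBuffer(1, frames, ctx.sampleRate); // all zeroes
  let stopped = false;
  (function tick() {
    if (stopped) return;
    const src = ctx.createBufferSource();
    src.buffer = silence;
    src.onended = () => { callback(); tick(); };
    src.connect(ctx.destination);
    src.start();
  })();
  return () => { stopped = true; ctx.close(); }; // call to stop the timer
}
```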
