MediaRecorder switch video tracks


Question

I am using the MediaRecorder API to record video in a web application. The application has an option to switch between the camera and the screen. I am using a canvas to composite the stream for recording: the logic captures the stream from the camera and redirects it to a video element; that video is then drawn onto a canvas, and the canvas's stream is passed to MediaRecorder. What I noticed is that switching from screen to camera (and vice versa) works fine as long as the user doesn't switch away from or minimize the Chrome window. The canvas rendering uses requestAnimationFrame, which freezes once the tab loses focus.
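
For reference, here is a minimal sketch of the pipeline described above (element creation, dimensions, and frame rate are illustrative, not taken from the question):

async function startCanvasRecording() {
  // camera -> <video> -> <canvas> -> canvas.captureStream() -> MediaRecorder
  const camStream = await navigator.mediaDevices.getUserMedia( { video: true } );

  const video = document.createElement( 'video' );
  video.srcObject = camStream;
  video.muted = true;
  await video.play();

  const canvas = document.createElement( 'canvas' );
  canvas.width = 1280; // assumed output size
  canvas.height = 720;
  const ctx = canvas.getContext( '2d' );

  // this is the loop that Chrome throttles once the tab is hidden
  (function draw() {
    ctx.drawImage( video, 0, 0, canvas.width, canvas.height );
    requestAnimationFrame( draw );
  })();

  const recorder = new MediaRecorder( canvas.captureStream( 30 ) );
  recorder.start();
  return recorder;
}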

Is there any way to instruct Chrome not to pause the execution of requestAnimationFrame? Is there any alternative way to switch streams without impacting the MediaRecorder recording?

Update: After reading through the documentation, tabs that play audio or hold an active WebSocket connection are not throttled, and we are doing neither at the moment. This might be a workaround, but I am hoping for an alternative solution from the community. (setTimeout and setInterval are throttled too heavily, so I am not using them; they also impact rendering quality.)
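
If we did go the audio route, the workaround could look like the sketch below, keeping a near-silent oscillator alive so the tab counts as audible (this is an assumption about how the exemption could be used, not something the application does today):

// must usually be created/resumed from a user gesture because of autoplay policies
const audio_ctx = new AudioContext();
const osc = audio_ctx.createOscillator();
const gain = audio_ctx.createGain();
gain.gain.value = 0.0001; // practically inaudible, but the tab keeps producing audio
osc.connect( gain );
gain.connect( audio_ctx.destination );
osc.start();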

Update 2: I was able to fix this problem using a Worker. Instead of driving requestAnimationFrame from the main UI thread, the worker invokes the timing API and notifies the main thread via postMessage. When the UI thread finishes rendering, it sends a message back to the worker. There is also a delta-period calculation to throttle the messages from the worker so they don't become overwhelming.
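
A minimal sketch of that worker-driven loop might look like the following (the file name, frame period, and message shapes are assumptions, and video, canvas, and ctx are the ones from the earlier sketch). The worker owns the timing because worker timers are not throttled the way the page's requestAnimationFrame is:

// main thread: render on ticks coming from the worker instead of requestAnimationFrame
const ticker = new Worker( 'ticker.js' ); // hypothetical file name
ticker.onmessage = () => {
  ctx.drawImage( video, 0, 0, canvas.width, canvas.height );
  ticker.postMessage( 'rendered' ); // ack, so the worker can schedule the next tick
};

// ticker.js: schedules the next tick only after the main thread has rendered,
// with a delta calculation so messages can't pile up
const FRAME_MS = 1000 / 30; // assumed target frame period
let last = 0;
self.onmessage = () => { // 'rendered' ack from the main thread
  const wait = Math.max( 0, FRAME_MS - ( performance.now() - last ) );
  setTimeout( () => {
    last = performance.now();
    self.postMessage( 'tick' );
  }, wait );
};
self.postMessage( 'tick' ); // kick off the loop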

Answer

There is an ongoing proposal to add a .replaceTrack() method to the MediaRecorder API, but for the time being, the specs still read:

If at any point, a track is added to or removed from stream's track set, the UA MUST immediately stop gathering data, discard any data that it has gathered [...]
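
In practice, that means a sketch like the following kills an ongoing recording as soon as the track set changes (stream and other_video_track are placeholders; browsers surface the failure through the error and/or stop events):

const recorder = new MediaRecorder( stream );
recorder.onerror = console.log; // fires once the track set changes
recorder.start();
// later, trying to switch sources by editing the track set...
stream.removeTrack( stream.getVideoTracks()[ 0 ] );
stream.addTrack( other_video_track ); // ...immediately ends data gathering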

And that is how it is implemented.

So we still have to rely on ugly hacks to implement this ourselves...

Here is one such hack, which, for reasons I still don't know, seems to work correctly only in Firefox; it uses a MediaSource as a mixer.

It goes like this:

  • capture your video streams,
  • record them all, with one MediaRecorder per video,
  • catch the dataavailable events of these MediaRecorders and feed a MediaSource with their chunks,
  • capture the stream of a video element that plays this MediaSource,
  • record this mixed stream.

However, this whole setup adds a significant delay (don't be surprised if you have to wait a few seconds before a source switch becomes visible), and it is crazy heavy on the CPU...

{ // remap unstable FF version
  const proto = HTMLMediaElement.prototype;
  if( !proto.captureStream ) { proto.captureStream = proto.mozCaptureStream; }
}

waitForEvent( document.getElementById( 'starter' ), 'click' )
  .then( (evt) => evt.target.parentNode.remove() )
  .then( async () => {

  const urls = [
    "2/22/Volcano_Lava_Sample.webm",
    "/a/a4/BBH_gravitational_lensing_of_gw150914.webm"
  ].map( (suffix) => "https://upload.wikimedia.org/wikipedia/commons/" + suffix );
  
  const switcher_btn = document.getElementById( 'switcher' );
  const stop_btn = document.getElementById( 'stopper' );
  const video_out = document.getElementById( 'out' );
  
  const type = 'video/webm; codecs="vp8"';
  if( !MediaSource.isTypeSupported( type ) ) {
    throw new Error( 'Not Supported' );
  }
  let stopped = false;
  let current = 0;
  switcher_btn.onclick = (evt) => { current = +!current; };
  
  console.log( 'loading videos, please wait' );
  // see below for 'recordVid'
  const recorders = await Promise.all( urls.map( (url) => recordVid( url, type ) ) );
  
  const source = new MediaSource();

  // create an offscreen video so it doesn't get paused when hidden
  const mixed_vid = document.createElement( 'video' );
  mixed_vid.autoplay = true;
  mixed_vid.muted = true;
  mixed_vid.src = URL.createObjectURL( source );
  
  await waitForEvent( source, 'sourceopen' );
  
  const buffer = source.addSourceBuffer( type );
  buffer.mode = "sequence";
  
  // init our requestData loop
  appendBuffer();
  mixed_vid.play();
  await waitForEvent( mixed_vid, 'playing' );
  console.clear();
  
  // final recording part below
  const mixed_stream = mixed_vid.captureStream();
  // only for demo, so we can see what happens now
  video_out.srcObject = mixed_stream;
  
  const rec = new MediaRecorder( mixed_stream );
  const chunks = [];

  rec.ondataavailable = (evt) => chunks.push( evt.data );

  rec.onstop = (evt) => {
    stopped = true;
    const final_file = new Blob( chunks );
    recorders.forEach( (rec) => rec.stop() );
    // only for demo, since we did set its srcObject
    video_out.srcObject = null;
    video_out.src = URL.createObjectURL( final_file );
    switcher_btn.remove();
    stop_btn.remove();
  };

  stop_btn.onclick = (evt) => rec.stop();

  rec.start();
  
  // requestData loop
  async function appendBuffer() {
    if( stopped ) { return; }
    const chunks = await Promise.all( recorders.map( rec => rec.requestData() ) );
    const chunk = chunks[ current ];
    // first iteration is generally empty
    if( !chunk.byteLength ) { setTimeout( appendBuffer, 100 ); return; }
    buffer.appendBuffer( chunk );
    await waitForEvent( buffer, 'update' );
    appendBuffer();
  };
    
})
.catch( console.error )

// some helpers below

// returns a video loaded to given url
function makeVid( url ) {

  const vid = document.createElement('video');
  vid.crossOrigin = true;
  vid.loop = true;
  vid.muted = true;
  vid.src = url;
  return vid.play()
    .then( (_) => vid );
  
}

/* Records videos from given url
** returns an object which exposes two method
** 'requestData()' returns a Promise resolved by the latest available chunk of data
** 'stop()' stops the video element and the recorder
*/
async function recordVid( url, type ) {
  const player = await makeVid( url );
  const stream = videoStream( player.captureStream() );
//  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const recorder = new MediaRecorder( stream, { mimeType: type } );
  const chunks = [];
  recorder.start( );
  
  return {
    requestData() {
      
      recorder.requestData();
      const data_prom = waitForEvent( recorder, "dataavailable" )
        .then( (evt) => evt.data.arrayBuffer() );
      return data_prom;
      
    },
    stop() { recorder.stop(); player.pause(); }
  };
}
// removes the audio tracks from a MediaStream
function videoStream( mixed ) {
  return new MediaStream( mixed.getVideoTracks() );
}
// Promisifies EventTarget.addEventListener
function waitForEvent( target, type ) {
  return new Promise( (res) => target.addEventListener( type, res, { once: true } ) );
}

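/* styles for the demo snippet */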
video { max-height: 100vh; max-width: 100vw; vertical-align: top; }
.overlay {
  background: #ded;
  position: fixed;
  z-index: 999;
  height: 100vh;
  width: 100vw;
  top: 0;
  left: 0;
  display: flex;
  align-items: center;
  justify-content: center;
}

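<!-- markup for the demo snippet -->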
<div class="overlay">
  <button id="starter">start demo</button>
</div>
<button id="switcher">switch source</button>
<button id="stopper">stop recording</button> 
<video id="out" muted controls autoplay></video>

Another such hack is to create a local RTC connection and record the receiving end.

However, while on paper this should work, my Firefox weirdly mixes both streams up into something I would advise epileptic readers to avoid, and Chrome's recorder produces a single-frame video, possibly because the video size does change...

So this currently doesn't seem to work anywhere, but here it is in case browsers fix their bugs before implementing MediaRecorder.replaceTrack.

{ // remap unstable FF version
  const proto = HTMLMediaElement.prototype;
  if( !proto.captureStream ) { proto.captureStream = proto.mozCaptureStream; }
}

waitForEvent( document.getElementById( 'starter' ), 'click' )
  .then( (evt) => evt.target.parentNode.remove() )
  .then( async () => {

  const urls = [
    "2/22/Volcano_Lava_Sample.webm",
    "/a/a4/BBH_gravitational_lensing_of_gw150914.webm"
  ].map( (suffix) => "https://upload.wikimedia.org/wikipedia/commons/" + suffix );
  
  const switcher_btn = document.getElementById( 'switcher' );
  const stop_btn = document.getElementById( 'stopper' );
  const video_out = document.getElementById( 'out' );
  
  let current = 0;
  
  // see below for 'recordVid'
  const video_tracks = await Promise.all( urls.map( (url) => getVideoTracks( url ) ) );
  
  const mixable_stream = await mixableStream( video_tracks[ current ].track );

  switcher_btn.onclick = async (evt) => {

    current = +!current;
    await mixable_stream.replaceTrack( video_tracks[ current ].track );
    
  };

  // final recording part below

  // only for demo, so we can see what happens now
  video_out.srcObject = mixable_stream.stream;

  const rec = new MediaRecorder( mixable_stream.stream );
  const chunks = [];

  rec.ondataavailable = (evt) => chunks.push( evt.data );
  rec.onerror = console.log;
  rec.onstop = (evt) => {

    const final_file = new Blob( chunks );
    video_tracks.forEach( (track) => track.stop() );
    // only for demo, since we did set its srcObject
    video_out.srcObject = null;
    video_out.src = URL.createObjectURL( final_file );
    switcher_btn.remove();
    stop_btn.remove();

    const anchor = document.createElement( 'a' );
    anchor.download = 'file.webm';
    anchor.textContent = 'download';
    anchor.href = video_out.src;
    document.body.prepend( anchor );
    
  };

  stop_btn.onclick = (evt) => rec.stop();

  rec.start();
      
})
.catch( console.error )

// some helpers below


// creates a mixable stream
async function mixableStream( initial_track ) {
  
  const source_stream = new MediaStream( [] );
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
  pc1.onicecandidate = (evt) => pc2.addIceCandidate( evt.candidate );
  pc2.onicecandidate = (evt) => pc1.addIceCandidate( evt.candidate );

  const wait_for_stream = waitForEvent( pc2, 'track')
    .then( evt => new MediaStream( [ evt.track ] ) );

  pc1.addTrack( initial_track, source_stream );
  
  await waitForEvent( pc1, 'negotiationneeded' );
  try {
    await pc1.setLocalDescription(await pc1.createOffer());
    await pc2.setRemoteDescription(pc1.localDescription);
    await pc2.setLocalDescription(await pc2.createAnswer());
    await pc1.setRemoteDescription(pc2.localDescription);
  } catch (e) {
    console.error(e);
  }
  
  return {
    stream: await wait_for_stream,
    async replaceTrack( new_track ) {
      const sender = pc1.getSenders().find( ( { track } ) => track.kind == new_track.kind );
      return sender && sender.replaceTrack( new_track ) ||
        Promise.reject('no such track');
    }
  }  
}

// returns a video loaded to given url
function makeVid( url ) {

  const vid = document.createElement('video');
  vid.crossOrigin = true;
  vid.loop = true;
  vid.muted = true;
  vid.src = url;
  return vid.play()
    .then( (_) => vid );
  
}

/* Records videos from given url
** @method stop() ::pauses the linked <video>
** @property track ::the video track
*/
async function getVideoTracks( url ) {
  const player = await makeVid( url );
  const track = player.captureStream().getVideoTracks()[ 0 ];
  
  return {
    track,
    stop() { player.pause(); }
  };
}
// Promisifies EventTarget.addEventListener
function waitForEvent( target, type ) {
  return new Promise( (res) => target.addEventListener( type, res, { once: true } ) );
}

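/* styles for the demo snippet */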
video { max-height: 100vh; max-width: 100vw; vertical-align: top; }
.overlay {
  background: #ded;
  position: fixed;
  z-index: 999;
  height: 100vh;
  width: 100vw;
  top: 0;
  left: 0;
  display: flex;
  align-items: center;
  justify-content: center;
}

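<!-- markup for the demo snippet -->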
<div class="overlay">
  <button id="starter">start demo</button>
</div>
<button id="switcher">switch source</button>
<button id="stopper">stop recording</button> 
<video id="out" muted controls autoplay></video>

So for now, the best option is probably still to go the canvas way, using the Web Audio timer I made for when the page is blurred, even though this will not work in Firefox.
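
The linked timer is not reproduced here, but a minimal sketch of one common way to build such a Web Audio based timer follows (an assumption about the technique, not the answerer's exact code): schedule a short silent buffer and use its ended event as the tick, since audio scheduling keeps running while the page is blurred.

function createAudioTimer( callback, interval_ms ) {
  const actx = new AudioContext();
  const silence = actx.createBuffer( 1, 1, actx.sampleRate ); // one silent sample
  (function tick() {
    const node = actx.createBufferSource();
    node.buffer = silence;
    node.onended = () => { callback(); tick(); };
    node.connect( actx.destination );
    node.start( actx.currentTime + interval_ms / 1000 );
  })();
  return actx; // call actx.close() to stop the timer
}
// e.g. createAudioTimer( drawFrame, 1000 / 30 ) in place of the rAF loop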
