Is it possible to merge multiple webm blobs/clips into one sequential video clientside?

Question

I already looked at this question -

And tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource -- (without modifications) in hopes of transforming the blobs into arraybuffers and appending those to a sourcebuffer for the MediaSource WebAPI, but even the sample code wasn't working in my Chrome browser, with which it is said to be compatible.

The crux of my problem is that I can't combine multiple blob webm clips into one without incorrect playback after the first time it plays. To go straight to the problem, please scroll to the line after the first two chunks of code; for background, continue reading.

I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.

I am using the MediaRecorder WebAPI to record video on chrome/firefox. (Side question - is there any other way (besides flash) that I can record video/audio via webcam & mic? Because MediaRecorder is not supported on non-Chrome/Firefox user agents).

navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMedia)
    .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
    recording = true;
    theStream = stream;
    vid.srcObject = theStream; // URL.createObjectURL(stream) is deprecated in current browsers
    try {
        recorder = new MediaRecorder(stream);
    } catch (e) {
        console.error('Exception while creating MediaRecorder: ' + e);
        return;
    }

    theRecorder = recorder;
    recorder.ondataavailable = 
        (event) => {
            tempScene.push(event.data);
        };

    theRecorder.start(100);
}

function finishRecording() {
    recording = false;
    theRecorder.stop();
    theStream.getTracks().forEach(track => { track.stop(); });

    while(tempScene[0].size != 1) {
        tempScene.splice(0,1);
    }

    console.log(tempScene);

    scenes.push(tempScene);
    tempScene = [];
}

The function finishRecording gets called and a scene (an array of blobs of mimetype 'video/webm') gets saved to the scenes array. After it gets saved, the user can then record and save more scenes via this process. He can then view a certain scene using the following chunk of code.

function showScene(sceneNum) {
    var sceneBlob = new Blob(scenes[sceneNum], {type: 'video/webm; codecs=vorbis,vp8'});
    vid.src = URL.createObjectURL(sceneBlob);
    vid.play();
}

In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and pointed to by the video's src attribute, so - [blob, blob, blob] => sceneBlob (an object, not an array)

Up until this point, everything works fine and dandy. Here is where the issue starts:

I try to merge all the scenes into one by combining the blob arrays for each scene into one long blob array. The point of this functionality is that the user can order the scenes however he/she deems fit and can choose not to include a scene, so the scenes aren't necessarily in the order they were recorded in:

scene 1: [blob-1, blob-1]
scene 2: [blob-2, blob-2]
final:   [blob-2, blob-2, blob-1, blob-1]

I then make a blob of the final blob array, so - final: [blob, blob, blob, blob] => finalBlob. The code for merging the scene blob arrays is below:

function mergeScenes() {
    scenes[scenes.length] = [];
    for(var i = 0; i < scenes.length - 1; i++) {
        scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
    }
    mergedScenes = scenes[scenes.length - 1];
    console.log(scenes[scenes.length - 1]);
}

This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene. Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.

What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.

I am using a <video> element to display the scenes.

Answer

The file's headers (metadata) are only appended to the first chunk of data you've got.
You can't make a new video file by just pasting the chunks one after the other; they have a structure.
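You can check this point directly: every WebM (Matroska) file opens with the EBML magic bytes 0x1A 0x45 0xDF 0xA3, and only the first chunk delivered by ondataavailable carries them; later chunks are headerless clusters. A small sketch (the function names here are mine, for illustration):

```javascript
// Returns true if a byte buffer starts with the EBML magic number
// (0x1A 0x45 0xDF 0xA3) that opens every WebM/Matroska file.
function hasEbmlHeader(bytes) {
  const magic = [0x1a, 0x45, 0xdf, 0xa3];
  return bytes.length >= 4 && magic.every((b, i) => bytes[i] === b);
}

// In the browser you could inspect a recorded chunk like this (sketch):
async function chunkHasHeader(blob) {
  const buf = new Uint8Array(await blob.arrayBuffer());
  return hasEbmlHeader(buf);
}
```

Running chunkHasHeader over your recorded chunks should show the header only on the first chunk of each recording, which is why a blob built from chunks of several recordings contains several competing headers.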

So how can we work around this?

If I understood your problem correctly, what you need is to merge all the recorded videos as if the recording had only been paused between them. Well, this can be achieved thanks to the MediaRecorder.pause() method.

You can keep the stream open and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording up to that event.
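A minimal sketch of that approach, assuming a single long-lived MediaRecorder (the names sceneMarks, wireRecorder, endScene, and playAll are my assumptions, not from the original demo). Because the recorder is never stopped between scenes, every chunk belongs to one continuous file, so the concatenated blob replays correctly:

```javascript
const chunks = [];      // every chunk of the one continuous recording
const sceneMarks = [];  // value of chunks.length at each pause (end of scene)

// Pure helper: the chunk range belonging to scene i (0-based).
function sceneChunks(allChunks, marks, i) {
  const start = i === 0 ? 0 : marks[i - 1];
  return allChunks.slice(start, marks[i]);
}

function wireRecorder(recorder) {
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) chunks.push(event.data);
  };
}

function endScene(recorder) {
  recorder.pause();               // recorder and stream stay alive
  sceneMarks.push(chunks.length); // remember where this scene ends
}

function playAll(vid) {
  // All chunks form one continuous recording, so this Blob is a single
  // well-formed WebM file and survives replay and download.
  vid.src = URL.createObjectURL(new Blob(chunks, { type: 'video/webm' }));
  vid.play();
}
```

Call endScene between scenes and recorder.resume() to start the next one; sceneChunks lets you pull out an individual scene's chunk range if you still need it.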

Here is an external demo, because stack snippets don't work well with gUM...

And if you ever needed shorter videos spanning each resume-to-pause interval as well, you could simply create a new MediaRecorder for each of these smaller parts while keeping the big one running.
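A sketch of that idea (the function and parameter names are assumptions): a short-lived MediaRecorder per scene yields a standalone clip with its own header, while the long-lived recorder keeps producing the merged video from the same stream:

```javascript
// Records one scene as an independent clip on the same MediaStream.
// Returns a function that stops the clip; onClip receives the finished Blob.
function recordScene(stream, onClip) {
  const parts = [];
  const clipRecorder = new MediaRecorder(stream); // separate, short-lived
  clipRecorder.ondataavailable = (e) => {
    if (e.data.size > 0) parts.push(e.data);
  };
  clipRecorder.onstop = () => onClip(new Blob(parts, { type: 'video/webm' }));
  clipRecorder.start();
  return () => clipRecorder.stop(); // call this to finish the scene's clip
}
```

Each clip produced this way starts with its own WebM header, so it plays on its own, while the merged video still comes from the one uninterrupted recorder.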
