Is it possible to merge multiple webm blobs/clips into one sequential video clientside?


Problem description

I already looked at this question -

And I tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource - (without modifications) in hopes of transforming the blobs into arraybuffers and appending those to a sourcebuffer for the MediaSource WebAPI, but even the sample code didn't work in my Chrome browser, which it is said to be compatible with.

The crux of my problem is that I can't combine multiple blob webm clips into one without incorrect playback after the first time it plays. To go straight to the problem please scroll to the line after the first two chunks of code, for background continue reading.

I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.

I am using the MediaRecorder WebAPI to record video on Chrome/Firefox. (Side question - is there any other way (besides Flash) that I can record video/audio via webcam & mic? Because MediaRecorder is not supported on non-Chrome/Firefox user agents.)

navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMedia)
    .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
    recording = true;
    theStream = stream;
    vid.src = URL.createObjectURL(theStream);
    try {
        recorder = new MediaRecorder(stream);
    } catch (e) {
        console.error('Exception while creating MediaRecorder: ' + e);
        return;
    }

    theRecorder = recorder;
    recorder.ondataavailable = 
        (event) => {
            tempScene.push(event.data);
        };

    theRecorder.start(100);
}

function finishRecording() {
    recording = false;
    theRecorder.stop();
    theStream.getTracks().forEach(track => { track.stop(); });

    while(tempScene[0].size != 1) {
        tempScene.splice(0,1);
    }

    console.log(tempScene);

    scenes.push(tempScene);
    tempScene = [];
}

The function finishRecording gets called and a scene (an array of blobs of mimetype 'video/webm') gets saved to the scenes array. After it gets saved, the user can record and save more scenes via this process. He can then view a certain scene using the following chunk of code:

function showScene(sceneNum) {
    var sceneBlob = new Blob(scenes[sceneNum], {type: 'video/webm; codecs=vorbis,vp8'});
    vid.src = URL.createObjectURL(sceneBlob);
    vid.play();
}

In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and pointed to by the video's src attribute, so - [blob, blob, blob] => sceneBlob (an object, not an array)

Up until this point everything works fine and dandy. Here is where the issue starts

I try to merge all the scenes into one by combining the blob arrays for each scene into one long blob array. The point of this functionality is so that the user can order the scenes however he/she deems fit and so he can choose not to include a scene. So they aren't necessarily in the same order as they were recorded in, so -

scene 1: [blob-1, blob-1]
scene 2: [blob-2, blob-2]
final: [blob-2, blob-2, blob-1, blob-1]

I then make a blob of the final blob array, so - final: [blob, blob, blob, blob] => finalBlob. The code for merging the scene blob arrays is below:

function mergeScenes() {
    scenes[scenes.length] = [];
    for(var i = 0; i < scenes.length - 1; i++) {
        scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
    }
    mergedScenes = scenes[scenes.length - 1];
    console.log(scenes[scenes.length - 1]);
}

This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene. Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.

What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.

I am using a <video> element to display the scenes.

Recommended answer

The file's headers (metadata) should only be appended to the first chunk of data you've got.
You can't make a new video file by just pasting one after the other; video files have a structure.

So how to work around this?

If I understood your problem correctly, what you need is to be able to merge all the recorded videos as if the recording had simply been paused. This can be achieved thanks to the MediaRecorder.pause() method.

You can keep the stream open and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording up until this event.
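A minimal sketch of that pause/resume approach (the function names, the 100 ms timeslice, and the overall wiring are illustrative, not taken from the original post):

```javascript
// One long-lived recorder for the whole session; scenes are pause/resume
// boundaries instead of separate recordings.
let recorder = null;
let chunks = [];                         // every recorded chunk, in order

function startSession(stream) {
  chunks = [];
  recorder = new MediaRecorder(stream);
  // With a single recorder, only the very first chunk carries the webm
  // headers, so the concatenated result stays one well-formed file.
  recorder.ondataavailable = (e) => { if (e.data.size > 0) chunks.push(e.data); };
  recorder.start(100);                   // emit a data chunk every 100 ms
}

function endScene()  { recorder.pause();  }  // scene boundary, stream stays open
function nextScene() { recorder.resume(); }  // continues the SAME webm file

function mergeChunks(parts) {
  // One header, one continuous file: replays correctly every time.
  return new Blob(parts, { type: 'video/webm' });
}

function finishSession() {
  recorder.stop();
  return mergeChunks(chunks);
}
```

Note that because everything comes from one MediaRecorder, there is no per-scene reordering here; the pause points only mark where one scene ends and the next begins.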

Here is an external demo, because stacksnippets don't work well with gUM...

And if you ever needed shorter videos from between each resume and pause event, you could simply create new MediaRecorders for these smaller parts, while keeping the big one running.
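A hypothetical sketch of that second idea: a short-lived recorder per scene running alongside the main one on the same stream, so each scene is also a standalone, correctly-headered clip. The name startSceneRecorder and the injectable Recorder parameter are my own additions, not from the answer.

```javascript
// Start an extra recorder on the already-open stream for just this scene.
// Returns a stop() function that resolves with the standalone scene blob.
function startSceneRecorder(stream, Recorder = MediaRecorder) {
  const sceneChunks = [];
  const sceneRecorder = new Recorder(stream);     // second recorder, same stream
  sceneRecorder.ondataavailable = (e) => { if (e.data.size > 0) sceneChunks.push(e.data); };
  sceneRecorder.start(100);
  return () => new Promise((resolve) => {
    sceneRecorder.onstop = () =>
      resolve(new Blob(sceneChunks, { type: 'video/webm' }));
    sceneRecorder.stop();
  });
}
```

Since each per-scene recorder starts fresh, every scene blob gets its own headers and plays on its own, while the long-running recorder still produces the seamless full-session file.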
