Efficient way of streaming HTML5 canvas content?


Problem description



I'm trying to stream the content of an HTML5 canvas live, using websockets and Node.js.

The content of the HTML5 canvas is just a video.

What I have done so far is:

I convert the canvas to a blob, then get the blob URL and send that URL to my Node.js server using websockets.

I get the blob URL like this:

canvas.toBlob(function(blob) {
   url = window.URL.createObjectURL(blob);
});

The blob URLs are generated per video frame (20 frames per second to be exact) and they look something like this:

blob:null/e3e8888e-98da-41aa-a3c0-8fe3f44frt53

I then get that blob URL back from the server via websockets so I can use it to draw onto another canvas for other users to see.

I did search how to draw onto a canvas from a blob URL, but I couldn't find anything close to what I am trying to do.

So the questions I have are:

  1. Is this the correct way of doing what I am trying to achieve? Any pros and cons would be appreciated.

  2. Is there any other, more efficient way of doing this, or am I on the right path?

Thanks in advance.

EDIT:

I should have mentioned that I cannot use WebRTC in this project and I have to do it all with what I have.

To make it easier for everyone to see where I am right now, this is how I tried to display the blob URLs mentioned above on my canvas using websockets:

websocket.onopen = function(event) {

        websocket.onmessage = function(evt) {
            var val = evt.data;
            console.log("new data "+val);
            var canvas2 = document.querySelector('.canvMotion2');
            var ctx2 = canvas2.getContext('2d');
            var img = new Image();     

            img.onload = function(){
                ctx2.drawImage(img, 0, 0)
            }
            img.src = val;
        };

        // Listen for socket closes
        websocket.onclose = function(event) {

        };

        websocket.onerror = function(evt) {

        };

};

The issue is that when I run that code in Firefox, the canvas is always empty/blank, but I see the blob URLs in my console, which makes me think that what I am doing is wrong.

In Google Chrome, I get a "Not allowed to load local resource: blob:" error.

SECOND EDIT:

This is where I am at the moment.

First option:

I tried to send the whole blob(s) via websockets and I managed that successfully. However, I couldn't read it back on the client side for some strange reason!

When I looked at my Node.js server's console, I could see something like this for each blob that I was sending to the server:

<buffer fd67676 hdsjuhsd8 sjhjs....
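For reference, those <Buffer …> lines are just how Node.js prints binary websocket messages, so the data did arrive at the server. A minimal relay that broadcasts each binary frame to the other connected clients might look like this (a sketch assuming the popular "ws" npm package):

// server.js — minimal broadcast relay (assumes the "ws" npm package)
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Binary frames arrive as Node.js Buffers — hence the <Buffer …> logs.
    // Forward them untouched to every other open client.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data);
      }
    }
  });
});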

Second option:

So the option above failed, and I thought of something else: turning each canvas frame into a base64 (JPEG) image, sending it to the server via websockets, and then displaying/drawing those base64 images onto the canvas on the client side.

I'm sending 24 frames per second to the server.

This worked, BUT the client-side canvas where these base64 images are displayed again is very slow; it's like it's drawing 1 frame per second. This is the issue I have at the moment.
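A minimal sketch of this base64 route follows; the frame rate, the JPEG quality and the '.canvMotion' source selector are illustrative assumptions, and `ws` is an already-open websocket. Drawing only the most recent frame per animation tick, instead of one Image per incoming message, avoids the backlog that can make the receiving canvas crawl:

// Emitter: send one JPEG-encoded frame every 1000/24 ms
const sourceCanvas = document.querySelector('.canvMotion'); // hypothetical source canvas
setInterval(() => {
  // Quality 0.5 keeps each frame small enough to sustain 24 fps
  ws.send(sourceCanvas.toDataURL('image/jpeg', 0.5));
}, 1000 / 24);

// Consumer: keep only the latest frame and draw it once per animation tick
const ctx2 = document.querySelector('.canvMotion2').getContext('2d');
let latestFrame = null;
ws.onmessage = (evt) => { latestFrame = evt.data; };
(function paint() {
  if (latestFrame) {
    const img = new Image();
    img.onload = () => ctx2.drawImage(img, 0, 0);
    img.src = latestFrame;
    latestFrame = null; // stale frames are simply dropped
  }
  requestAnimationFrame(paint);
})();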

Third option:

I also tried to use a video without a canvas. So, using WebRTC, I got the video stream as a single Blob, but I'm not entirely sure how to use that and send it to the client side so people can see it.

IMPORTANT: the system I am working on is not a peer-to-peer connection; it's just one-way streaming that I am trying to achieve.

Solution

The most natural way to stream canvas content: WebRTC

OP made it clear that they can't use it, and it may be the case for many, because:

  1. Browser support is still not that great.
  2. It implies having a media server running (at least ICE+STUN/TURN, and maybe a gateway if you want to stream to more than one peer).

But still, if you can afford it, then all you need to get a MediaStream from your canvas element is:

const canvas_stream = canvas.captureStream(minimumFrameRate);

and then you'd just have to add it to your RTCPeerConnection:

pc.addTrack(canvas_stream.getVideoTracks()[0], canvas_stream);
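For completeness, a minimal sketch of wiring that track into a connection might look like the following; the `signaling` object is a hypothetical stand-in for whatever channel (e.g. a websocket) you use to exchange the offer and ICE candidates, and the STUN server is an illustrative choice:

async function startBroadcast(signaling) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
  });
  const canvas_stream = canvas.captureStream(30);
  pc.addTrack(canvas_stream.getVideoTracks()[0], canvas_stream);

  // Hand ICE candidates and the offer to your own signaling channel
  pc.onicecandidate = (e) => {
    if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
  };
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));
  return pc;
}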

The example below will just display the MediaStream in a <video> element.

let x = 0;
const ctx = canvas.getContext('2d');
draw();
startStream();

function startStream() {
  // grab our MediaStream
  const stream = canvas.captureStream(30);
  // feed the <video>
  vid.srcObject = stream;
  vid.play();
}
function draw() {
  x = (x + 1) % (canvas.width + 50);
  ctx.fillStyle = 'white';
  ctx.fillRect(0,0,canvas.width,canvas.height);
  ctx.fillStyle = 'red';
  ctx.beginPath();
  ctx.arc(x - 25, 75, 25, 0, Math.PI*2);
  ctx.fill();
  requestAnimationFrame(draw);
}

video,canvas{border:1px solid}

<canvas id="canvas"></canvas>
<video id="vid" controls></video>


The most efficient way to stream a live canvas drawing: stream the drawing operations.

Once again, OP said they didn't want this solution because their set-up doesn't match, but it might be helpful for many readers:

Instead of sending the result of the canvas, simply send the drawing commands to your peers, which will then execute them on their side (a minimal sketch follows the caveats below).

But this approach has its own caveats:

  • You will have to write your own encoder/decoder to pass the commands.
  • Some cases might get hard to share (e.g. external media would have to be shared and preloaded the same way on all peers, the worst case being drawing another canvas, where you'd also have to share its own drawing process).
  • You may want to avoid having intensive image processing (e.g. ImageData manipulation) done on all peers.
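As an illustration, such an encoder/decoder can start out as plain JSON messages naming the 2D-context method (or property) and its arguments. A naive sketch, far from a complete protocol, assuming `ws` and a 2D context `ctx` exist on both sides:

// Emitter: describe each operation instead of sending pixels
function sendOp(method, ...args) {
  ws.send(JSON.stringify({ method, args }));
}
sendOp('fillStyle', 'red');       // property assignment (see decoder below)
sendOp('fillRect', 0, 0, 50, 50); // method call

// Consumer: replay the operations on the local 2D context
ws.onmessage = (evt) => {
  const { method, args } = JSON.parse(evt.data);
  if (typeof ctx[method] === 'function') {
    ctx[method](...args);   // e.g. fillRect, arc, fill…
  } else {
    ctx[method] = args[0];  // e.g. fillStyle, lineWidth…
  }
};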

So a third, definitely less performant, way to do it is what OP tried:

Upload frames at a regular interval.

I won't go into details here, but keep in mind that you are sending standalone image files, and hence a whole lot more data than if they had been encoded as a video.
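If the MediaRecorder API is available in your target browsers, one way to get real video encoding without a WebRTC peer connection is to record the canvas's MediaStream and push the encoded chunks over the websocket. A sketch only: codec support varies per browser, and the receiving side would need something like Media Source Extensions to play the chunks back:

// Encode the canvas as WebM chunks instead of standalone images
const stream = canvas.captureStream(24);
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm; codecs=vp8' });
recorder.ondataavailable = (e) => {
  if (e.data.size > 0) ws.send(e.data); // each chunk is a Blob
};
recorder.start(100); // emit a chunk roughly every 100 ms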

Instead, I'll focus on why OP's code didn't work.

First, it may be good to have a small reminder of what a Blob is (the thing provided in the callback of canvas.toBlob(callback)).

A Blob is a special JavaScript object which represents binary data, generally stored either in the browser's memory or at least on the user's disk, accessible by the browser.
This binary data is not directly available to JavaScript, though. To be able to access it, we need to either read this Blob (through a FileReader or a Response object), or create a BlobURI, which is a fake URI allowing most APIs to point at the binary data just as if it were stored on a real server, even though the binary data is still just in the browser's allocated memory.

But this BlobURI, being just a fake, temporary, and domain-restricted path to the browser's memory, cannot be shared with any other cross-domain document or application, let alone another computer.

All this to say: what should have been sent through the WebSocket are the Blobs directly, not the BlobURIs.

You'd create the BlobURIs only on the consumers' side, so that they can load these images from the Blob's binary data that is now in their allocated memory.

Emitter side:

canvas.toBlob(blob=>ws.send(blob));

Consumer side:

ws.onmessage = function(evt) {
  const blob = evt.data;
  const url = URL.createObjectURL(blob);
  img.src = url;
};
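Two practical details for the consumer side: binary websocket messages arrive as Blobs by default in browsers (binaryType defaults to 'blob'), and each object URL should be revoked once its image has loaded, otherwise memory fills up quickly at 20 frames per second. A sketch:

ws.binaryType = 'blob'; // the browser default, made explicit here
ws.onmessage = (evt) => {
  const url = URL.createObjectURL(evt.data);
  const img = new Image();
  img.onload = () => {
    ctx2.drawImage(img, 0, 0);
    URL.revokeObjectURL(url); // free the memory backing this frame
  };
  img.src = url;
};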


But actually, to answer OP's problem even better, there is a final solution, which is probably the best in this scenario:

Share the video stream that is painted on the canvas.
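Since the question states the canvas content is just a video, the emitter can share the video's own source with the consumers and keep playback loosely in sync over the websocket, rather than re-encoding frames at all. A sketch, assuming the source is a URL every consumer can reach; `video` is a hypothetical name for the emitter's source <video> element, and `vid` is the consumer's player:

// Emitter: tell consumers what to play and where playback currently is
ws.send(JSON.stringify({ src: video.currentSrc, time: video.currentTime }));

// Consumer: play the same source directly in a <video> element
ws.onmessage = (evt) => {
  const { src, time } = JSON.parse(evt.data);
  vid.src = src;
  vid.currentTime = time;
  vid.play();
};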
