Decode images in web worker

Problem description

In our WebGL application I'm trying to load and decode texture images in a web worker in order to avoid rendering hiccups in the main thread. Using createImageBitmap in the worker and transferring the image bitmap back to the main thread works well, but in Chrome this will use three or more (maybe depending on the number of cores?) separate workers (ThreadPoolForegroundWorker) which, together with the main thread and my own worker, results in five threads.
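
For reference, a minimal sketch of that pattern (file names, the URL, and the message shape are illustrative placeholders; it assumes an existing gl context and texture):

// worker.js -- fetch and decode in the worker, then transfer the bitmap.
onmessage = async (e) => {
  const res = await fetch(e.data.url, {mode: 'cors'});
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob);
  // ImageBitmap is transferable, so this moves it without a copy.
  postMessage({bitmap}, [bitmap]);
};

// main.js -- upload the transferred bitmap to WebGL.
const worker = new Worker('worker.js');
worker.onmessage = (e) => {
  const bitmap = e.data.bitmap;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
};
worker.postMessage({url: 'texture.png'});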

I'm guessing this causes my remaining rendering disturbances on my quad core, since I can see some inexplicably long tasks in the Performance panel of Chrome's DevTools.

So, can I limit the number of workers used by createImageBitmap somehow? Even if I transfer the images as blobs or array buffers to the main thread and activate createImageBitmap from there, its workers will compete with my own worker and the main thread.

I have tried creating regular Images in the worker instead, to decode them there explicitly, but Image is not defined in the worker context, and neither is document, in case I wanted to create them as elements. And regular Images are not transferable either, so creating them on the main thread and transferring them to the worker doesn't seem feasible.
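
A quick way to confirm what is and isn't available in a worker (a sketch; exact results vary by browser):

// Run inside a worker to check the APIs this question depends on.
console.log(typeof Image);              // 'undefined' -- no DOM in workers
console.log(typeof document);           // 'undefined'
console.log(typeof createImageBitmap);  // 'function' in modern browsers
console.log(typeof OffscreenCanvas);    // 'function' where supported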

Looking forward to any suggestions...

Answer

There's no reason to use createImageBitmap in a worker (well, see bottom). The browser already decodes the image in a separate thread. Doing it in a worker doesn't get you anything. The bigger issue is there's no way for ImageBitmap to know how you are going to use the image when you finally pass it to WebGL. If you ask for a format that's different than what ImageBitmap decoded then WebGL has to convert and/or decode it again and you can't give ImageBitmap enough info to tell it the format you want it to decode in.
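
As a concrete illustration (a sketch, not from the answer; assumes an async context with an existing blob, gl, and tex, and whether any given combination hits a fast path is browser dependent), the unpack state set on the WebGL side can disagree with the options given to createImageBitmap:

// Options chosen at decode time...
const bitmap = await createImageBitmap(blob, {
  premultiplyAlpha: 'none',
  colorSpaceConversion: 'none',
});
// ...can conflict with the unpack state requested at upload time, in which
// case the browser may have to convert the pixels all over again.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);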

On top of that, WebGL in Chrome has to transfer the image data from the render process to the GPU process, which for a large image is a relatively big copy (a 1024x1024 RGBA image is 1024 × 1024 × 4 bytes = 4MB).

A better API, IMO, would have allowed you to tell ImageBitmap what format you want and where you want it (CPU, GPU). That way the browser could prep the image asynchronously and have it require no heavy work when done.
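
For illustration only, such an API might have looked something like this (hypothetical options; createImageBitmap accepts no such keys):

// Hypothetical, NOT a real API: the kind of hints this paragraph wishes
// createImageBitmap accepted.
const bitmap = await createImageBitmap(blob, {
  format: 'RGBA8',  // hypothetical: the exact format WebGL will be asked for
  storage: 'gpu',   // hypothetical: decode straight into GPU memory
});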

In any case, here's a test. If you uncheck "Update Texture", it's still downloading and decoding textures, it's just not calling gl.texImage2D to upload them. In that case I see no jank (not proof that's the issue, but that's where I think it is).

const m4 = twgl.m4;
const gl = document.querySelector('#webgl').getContext('webgl');
const ctx = document.querySelector('#graph').getContext('2d');

let update = true;
document.querySelector('#update').addEventListener('change', function() {
  update = this.checked;
});

const vs = `
attribute vec4 position;
uniform mat4 matrix;
varying vec2 v_texcoord;
void main() {
  gl_Position = matrix * position;
  v_texcoord = position.xy;
}
`

const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, v_texcoord);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const posLoc = gl.getAttribLocation(program, 'position');
const matLoc = gl.getUniformLocation(program, 'matrix');

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 0,
  1, 0,
  0, 1,
  0, 1,
  1, 0,
  1, 1,
]), gl.STATIC_DRAW);

gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(
    gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
    new Uint8Array([0, 0, 255, 255]));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

const m = m4.identity();
let frameCount = 0;
let previousTime = 0;
let imgNdx = 0;
let imgAspect = 1;

const imageUrls = [
  'https://i.imgur.com/KjUybBD.png',
  'https://i.imgur.com/AyOufBk.jpg',
  'https://i.imgur.com/UKBsvV0.jpg',
  'https://i.imgur.com/TSiyiJv.jpg',
];

async function loadNextImage() {
  const url = `${imageUrls[imgNdx]}?cachebust=${performance.now()}`;
  imgNdx = (imgNdx + 1) % imageUrls.length;
  const res = await fetch(url, {mode: 'cors'});
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob, {
    premultiplyAlpha: 'none',
    colorSpaceConversion: 'none',
  });
  if (update) {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
    imgAspect = bitmap.width / bitmap.height;
  }
  setTimeout(loadNextImage, 1000);
}
loadNextImage();

function render(currentTime) {
  const deltaTime = currentTime - previousTime;
  previousTime = currentTime;
  
  {
    const {width, height} = ctx.canvas;
    const x = frameCount % width;
    const y = 1000 / deltaTime / 60 * height / 2;
    ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
    ctx.clearRect(x, 0, 1, height);
    ctx.fillRect(x, y, 1, height);
    ctx.clearRect(0, 0, 30, 15);
    ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
  }

  gl.useProgram(program);
  const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  m4.scaling([1 / dispAspect, 1, 1], m);
  m4.rotateZ(m, currentTime * 0.001, m);
  m4.scale(m, [imgAspect, 1, 1], m);
  m4.translate(m, [-0.5, -0.5, 0], m);
  gl.uniformMatrix4fv(matLoc, false, m);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  
  ++frameCount;
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

canvas { border: 1px solid black; margin: 2px; }
#ui { position: absolute; }

<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<div id="ui"><input type="checkbox" id="update" checked><label for="update">Update Texture</label></div>
<canvas id="webgl"></canvas>
<canvas id="graph"></canvas>

I'm pretty sure the only way you could maybe guarantee no jank is to decode the images yourself in a worker, transfer the pixels to the main thread as an ArrayBuffer, and upload them to WebGL a few rows per frame with gl.texSubImage2D.

const m4 = twgl.m4;
const gl = document.querySelector('#webgl').getContext('webgl');
const ctx = document.querySelector('#graph').getContext('2d');

const vs = `
attribute vec4 position;
uniform mat4 matrix;
varying vec2 v_texcoord;
void main() {
  gl_Position = matrix * position;
  v_texcoord = position.xy;
}
`

const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, v_texcoord);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const posLoc = gl.getAttribLocation(program, 'position');
const matLoc = gl.getUniformLocation(program, 'matrix');

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 0,
  1, 0,
  0, 1,
  0, 1,
  1, 0,
  1, 1,
]), gl.STATIC_DRAW);

gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

function createTexture(gl) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(
      gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
      new Uint8Array([0, 0, 255, 255]));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}

let drawingTex = createTexture(gl);
let loadingTex = createTexture(gl);

const m = m4.identity();
let frameCount = 0;
let previousTime = 0;

const workerScript = `
const ctx = new OffscreenCanvas(1, 1).getContext('2d');
let imgNdx = 0;
let imgAspect = 1;

const imageUrls = [
  'https://i.imgur.com/KjUybBD.png',
  'https://i.imgur.com/AyOufBk.jpg',
  'https://i.imgur.com/UKBsvV0.jpg',
  'https://i.imgur.com/TSiyiJv.jpg',
];

async function loadNextImage() {
  const url = \`\${imageUrls[imgNdx]}?cachebust=\${performance.now()}\`;
  imgNdx = (imgNdx + 1) % imageUrls.length;
  const res = await fetch(url, {mode: 'cors'});
  const blob = await res.blob();
  const bitmap = await createImageBitmap(blob, {
    premultiplyAlpha: 'none',
    colorSpaceConversion: 'none',
  });
  ctx.canvas.width = bitmap.width;
  ctx.canvas.height = bitmap.height;
  ctx.drawImage(bitmap, 0, 0);
  const imgData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
  const data = new Uint8Array(imgData.data);
  postMessage({
    width: imgData.width,
    height: imgData.height,
    data: data.buffer,
  }, [data.buffer]);
}

onmessage = loadNextImage;
`;
const blob = new Blob([workerScript], {type: 'application/javascript'});
const worker = new Worker(URL.createObjectURL(blob));
let imgAspect = 1;
worker.onmessage = async(e) => {
  const {width, height, data} = e.data;
  
  gl.bindTexture(gl.TEXTURE_2D, loadingTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  
  const maxRows = 20;
  for (let y = 0; y < height; y += maxRows) {
    const rows = Math.min(maxRows, height - y);
    gl.bindTexture(gl.TEXTURE_2D, loadingTex);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(data, y * width * 4, rows * width * 4));  
    await waitRAF();
  }
  const temp = loadingTex;
  loadingTex = drawingTex;
  drawingTex = temp;
  imgAspect = width / height;
  await waitMS(1000);
  worker.postMessage('');
};
worker.postMessage('');

function waitRAF() {
  return new Promise(resolve => requestAnimationFrame(resolve));
}

function waitMS(ms = 0) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function render(currentTime) {
  const deltaTime = currentTime - previousTime;
  previousTime = currentTime;
  
  {
    const {width, height} = ctx.canvas;
    const x = frameCount % width;
    const y = 1000 / deltaTime / 60 * height / 2;
    ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
    ctx.clearRect(x, 0, 1, height);
    ctx.fillRect(x, y, 1, height);
    ctx.clearRect(0, 0, 30, 15);
    ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
  }

  gl.useProgram(program);
  const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  m4.scaling([1 / dispAspect, 1, 1], m);
  m4.rotateZ(m, currentTime * 0.001, m);
  m4.scale(m, [imgAspect, 1, 1], m);
  m4.translate(m, [-0.5, -0.5, 0], m);
  gl.bindTexture(gl.TEXTURE_2D, drawingTex);
  gl.uniformMatrix4fv(matLoc, false, m);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  
  ++frameCount;
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

canvas { border: 1px solid black; margin: 2px; }

<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas id="webgl"></canvas>
<canvas id="graph"></canvas>

Note: I don't know that this will work either. There are several places that are scary and browser-implementation defined:

  1. What are the performance implications of resizing a canvas? The code is resizing the OffscreenCanvas in the worker. That could be a heavy operation with GPU repercussions.

  2. What's the performance of drawing a bitmap into a canvas? Again, potentially big GPU perf issues, as the browser has to transfer the image to the GPU in order to draw it into a GPU-backed 2D canvas.

  3. What's the performance of getImageData? Yet again, the browser potentially has to freeze the GPU to read GPU memory to get the image data out.

  4. There's a possible perf hit resizing the texture.

  5. Only Chrome currently supports OffscreenCanvas.

Issues 1, 2, 3, and 5 could all be solved by decoding the jpg/png image yourself, though it really sucks that the browser has the code to decode the image and you just can't access that decoding code in any useful way.

As for 4, if it's an issue, it could be solved by allocating a texture at the largest image size and then copying smaller images into a rectangular area of it, assuming that's actually an issue.
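
A minimal sketch of that idea (MAX_W and MAX_H are assumed limits, not values from the answer): allocate once at the maximum size, upload each image into a sub-rectangle, and scale the texture coordinates when drawing.

// Allocate one texture at the maximum expected size, once.
const MAX_W = 2048;  // assumption: the largest width you expect
const MAX_H = 2048;  // assumption: the largest height you expect
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, MAX_W, MAX_H, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

// Per image (width <= MAX_W, height <= MAX_H): no reallocation, just copy
// into a rectangular region and scale the texcoords by
// (width / MAX_W, height / MAX_H) when drawing.
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, width, height,
                 gl.RGBA, gl.UNSIGNED_BYTE, pixels);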

const m4 = twgl.m4;
const gl = document.querySelector('#webgl').getContext('webgl');
const ctx = document.querySelector('#graph').getContext('2d');

const vs = `
attribute vec4 position;
uniform mat4 matrix;
varying vec2 v_texcoord;
void main() {
  gl_Position = matrix * position;
  v_texcoord = position.xy;
}
`

const fs = `
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D tex;
void main() {
  gl_FragColor = texture2D(tex, v_texcoord);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const posLoc = gl.getAttribLocation(program, 'position');
const matLoc = gl.getUniformLocation(program, 'matrix');

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 0,
  1, 0,
  0, 1,
  0, 1,
  1, 0,
  1, 1,
]), gl.STATIC_DRAW);

gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

function createTexture(gl) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(
      gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
      new Uint8Array([0, 0, 255, 255]));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}

let drawingTex = createTexture(gl);
let loadingTex = createTexture(gl);

const m = m4.identity();
let frameCount = 0;
let previousTime = 0;

const workerScript = `
importScripts(
    // from https://github.com/eugeneware/jpeg-js
    'https://greggman.github.io/doodles/js/JPG-decoder.js',
    // from https://github.com/photopea/UPNG.js
    'https://greggman.github.io/doodles/js/UPNG.js',
);

let imgNdx = 0;
let imgAspect = 1;

const imageUrls = [
  'https://i.imgur.com/KjUybBD.png',
  'https://i.imgur.com/AyOufBk.jpg',
  'https://i.imgur.com/UKBsvV0.jpg',
  'https://i.imgur.com/TSiyiJv.jpg',
];

function decodePNG(arraybuffer) {
  return UPNG.decode(arraybuffer)
}

function decodeJPG(arrayBuffer) {
  return decode(new Uint8Array(arrayBuffer), true);
}

const decoders = {
  'image/png': decodePNG,
  'image/jpeg': decodeJPG,
  'image/jpg': decodeJPG,
};

async function loadNextImage() {
  const url = \`\${imageUrls[imgNdx]}?cachebust=\${performance.now()}\`;
  imgNdx = (imgNdx + 1) % imageUrls.length;
  const res = await fetch(url, {mode: 'cors'});
  const arrayBuffer = await res.arrayBuffer();
  const type = res.headers.get('Content-Type');
  let decoder = decoders[type];
  if (!decoder) {
    console.error('unknown image type:', type);
    return;  // don't call an undefined decoder
  }
  const imgData = decoder(arrayBuffer);
  postMessage({
    width: imgData.width,
    height: imgData.height,
    arrayBuffer: imgData.data.buffer,
  }, [imgData.data.buffer]);
}

onmessage = loadNextImage;
`;
const blob = new Blob([workerScript], {type: 'application/javascript'});
const worker = new Worker(URL.createObjectURL(blob));
let imgAspect = 1;
worker.onmessage = async(e) => {
  const {width, height, arrayBuffer} = e.data;
  
  gl.bindTexture(gl.TEXTURE_2D, loadingTex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  
  const maxRows = 20;
  for (let y = 0; y < height; y += maxRows) {
    const rows = Math.min(maxRows, height - y);
    gl.bindTexture(gl.TEXTURE_2D, loadingTex);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(arrayBuffer, y * width * 4, rows * width * 4));  
    await waitRAF();
  }
  const temp = loadingTex;
  loadingTex = drawingTex;
  drawingTex = temp;
  imgAspect = width / height;
  await waitMS(1000);
  worker.postMessage('');
};
worker.postMessage('');

function waitRAF() {
  return new Promise(resolve => requestAnimationFrame(resolve));
}

function waitMS(ms = 0) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

function render(currentTime) {
  const deltaTime = currentTime - previousTime;
  previousTime = currentTime;
  
  {
    const {width, height} = ctx.canvas;
    const x = frameCount % width;
    const y = 1000 / deltaTime / 60 * height / 2;
    ctx.fillStyle = frameCount % (width * 2) < width ? 'red' : 'blue';
    ctx.clearRect(x, 0, 1, height);
    ctx.fillRect(x, y, 1, height);
    ctx.clearRect(0, 0, 30, 15);
    ctx.fillText((1000 / deltaTime).toFixed(1), 2, 10);
  }

  gl.useProgram(program);
  const dispAspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
  m4.scaling([1 / dispAspect, 1, 1], m);
  m4.rotateZ(m, currentTime * 0.001, m);
  m4.scale(m, [imgAspect, 1, 1], m);
  m4.translate(m, [-0.5, -0.5, 0], m);
  gl.bindTexture(gl.TEXTURE_2D, drawingTex);
  gl.uniformMatrix4fv(matLoc, false, m);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  
  ++frameCount;
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

canvas { border: 1px solid black; margin: 2px; }

<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas id="webgl"></canvas>
<canvas id="graph"></canvas>

Note: the jpeg decoder is slow. If you find or make a faster one, please post a comment.

I just want to say that ImageBitmap should be fast enough and that some of my comments above about it not having enough info might not be exactly right.

My current understanding is that the entire point of ImageBitmap was to make uploads fast. It's supposed to work by you giving it a blob, and asynchronously it loads that image into the GPU. When you call texImage2D with it, the browser can "blit" (render with the GPU) that image into your texture. Given that, I have no idea why there is jank in the first example, but I see jank every 6 or so images.

On the other hand, while uploading the image to the GPU was the point of ImageBitmap, browsers are not required to upload to the GPU. ImageBitmap is still supposed to work even if the user doesn't have a GPU. The point being it's up to the browser to decide how to implement the feature and whether it's fast or slow or jank free is entirely up to the browser.
