Performantly render tens of thousands of spheres of variable size/color/position in Three.js?


Question

This question is picking up from my last question where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956

To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example on this article)

I've been trying to figure this answer out. My questions are:

What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:

https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html

And I see that for each sphere you would have a geometry which would apply the position and the size (scale?). Would the underlying geometry be a SphereBufferGeometry of unit radius, then? But, how do you apply color?

Also, I read about the custom shader method, and it makes some vague sense. But, it seems more complex. Would the performance be any better than the above?

Answer

Based on your previous question...

First off, instancing is a way to tell three.js to draw the same geometry multiple times but change one or more things for each "instance". IIRC the only thing three.js supports out-of-the-box is setting a different matrix (position, orientation, scale) for each instance. Past that, like having different colors for example, you have to write custom shaders.

Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing. That means it ends up being much faster. You can think of it like this: if you want 3 hamburgers you could ask someone to make you 1. When they finished you could ask them to make another. When they finished you could ask them to make a 3rd. That would be much slower than just asking them to make 3 hamburgers at the start. That's not a perfect analogy, but it does point out how asking for multiple things one at a time is less efficient than asking for multiple things all at once.

Merging meshes is yet another solution. Following the imperfect analogy above, merging meshes is like making one big one-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on it is marginally faster than doing the same to 3 small burgers.

As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads or a merged set of quads and tried to rotate them, they would turn and not face the camera like Points do. If you used sphere geometry then you'd have the issue that instead of only computing 6 vertices per quad with a circle drawn on it, you'd be computing 100s or 1000s of vertices per sphere, which would be slower than 6 vertices per quad.

So again it requires a custom shader to keep the points facing the camera.

To do it with instancing, the short version is: you decide which vertex data are repeated for each instance. For example, for a textured quad we need 6 vertex positions and 6 uvs. For these you make normal BufferAttributes.

Then you decide which vertex data are unique to each instance. In your case: the size, the color, and the center of the point. For each of these we make an InstancedBufferAttribute.

We add all of those attributes to an InstancedBufferGeometry and as the last argument we tell it how many instances.
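The shared-vs-per-instance split can be sketched in plain JavaScript before any three.js objects are involved (`buildInstanceArrays` is a hypothetical helper; in three.js the per-instance arrays would then become `InstancedBufferAttribute`s while the quad's positions/uvs stay ordinary `Float32BufferAttribute`s):

```javascript
// Build the per-instance data: one rgb color, one xyz center, and one
// scalar size per point. The quad's 6 positions/uvs are NOT here because
// they are shared by every instance.
function buildInstanceArrays(numInstances, randomFn) {
  const colors  = new Float32Array(numInstances * 3); // rgb per instance
  const centers = new Float32Array(numInstances * 3); // xyz per instance
  const sizes   = new Float32Array(numInstances);     // scalar per instance
  for (let i = 0; i < numInstances; i++) {
    for (let c = 0; c < 3; c++) {
      colors[i * 3 + c]  = randomFn();       // normalized 0..1 channel
      centers[i * 3 + c] = randomFn() * 20;  // position in world units
    }
    sizes[i] = 1 + randomFn() * 4;           // keep sizes > 0 so quads are visible
  }
  return { colors, centers, sizes };
}
```

Note the array lengths: `numInstances * 3` for vec3 attributes and `numInstances` for the scalar, versus `6 * 3` and `6 * 2` for the shared quad positions and uvs.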

At draw time you can think of it like this:

  • for each instance
    • set size to the next value in the size attribute
    • set color to the next value in the color attribute
    • set center to the next value in the center attribute
    • call the vertex shader 6 times, with position and uv set to the nth value in their attributes

    In this way you get the same geometry (the positions and uvs) used multiple times but each time a few values (size, color, center) change.

    body {
      margin: 0;
    }
    #c {
      width: 100vw;
      height: 100vh;
      display: block;
    }
    #info {
      position: absolute;
      right: 0;
      bottom: 0;
      color: red;
      background: black;
    }

    <canvas id="c"></canvas>
    <div id="info"></div>
    <script type="module">
    // Three.js - Picking - RayCaster w/Transparency
    // from https://threejsfundamentals.org/threejs/threejs-picking-gpu.html
    
    import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";
    
    function main() {
      const infoElem = document.querySelector("#info");
      const canvas = document.querySelector("#c");
      const renderer = new THREE.WebGLRenderer({ canvas });
    
      const fov = 60;
      const aspect = 2; // the canvas default
      const near = 0.1;
      const far = 200;
      const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
      camera.position.z = 30;
    
      const scene = new THREE.Scene();
      scene.background = new THREE.Color(0);
      const pickingScene = new THREE.Scene();
      pickingScene.background = new THREE.Color(0);
    
      // put the camera on a pole (parent it to an object)
      // so we can spin the pole to move the camera around the scene
      const cameraPole = new THREE.Object3D();
      scene.add(cameraPole);
      cameraPole.add(camera);
    
      function randomNormalizedColor() {
        return Math.random();
      }
    
      function getRandomInt(n) {
        return Math.floor(Math.random() * n);
      }
    
      function getCanvasRelativePosition(e) {
        const rect = canvas.getBoundingClientRect();
        return {
          x: e.clientX - rect.left,
          y: e.clientY - rect.top
        };
      }
    
      const textureLoader = new THREE.TextureLoader();
      const particleTexture =
        "https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";
    
      const vertexShader = `
        attribute float size;
        attribute vec3 customColor;
        attribute vec3 center;
    
        varying vec3 vColor;
        varying vec2 vUv;
    
        void main() {
            vColor = customColor;
            vUv = uv;
            vec3 viewOffset = position * size ;
            vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
            gl_Position = projectionMatrix * mvPosition;
        }
    `;
    
      const fragmentShader = `
        uniform sampler2D texture;
        varying vec3 vColor;
        varying vec2 vUv;
    
        void main() {
            vec4 tColor = texture2D(texture, vUv);
            if (tColor.a < 0.5) discard;
            gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
        }
    `;
    
      const pickFragmentShader = `
        uniform sampler2D texture;
        varying vec3 vColor;
        varying vec2 vUv;
    
        void main() {
          vec4 tColor = texture2D(texture, vUv);
          if (tColor.a < 0.25) discard;
          gl_FragColor = vec4(vColor.rgb, 1.0);
        }
    `;
    
      const materialSettings = {
        uniforms: {
          texture: {
            type: "t",
            value: textureLoader.load(particleTexture)
          }
        },
        vertexShader: vertexShader,
        fragmentShader: fragmentShader,
        blending: THREE.NormalBlending,
        depthTest: true,
        transparent: false
      };
    
      const createParticleMaterial = () => {
        const material = new THREE.ShaderMaterial(materialSettings);
        return material;
      };
    
      const createPickingMaterial = () => {
        const material = new THREE.ShaderMaterial({
          ...materialSettings,
          fragmentShader: pickFragmentShader,
          blending: THREE.NormalBlending
        });
        return material;
      };
    
      const geometry = new THREE.InstancedBufferGeometry();
      const pickingGeometry = new THREE.InstancedBufferGeometry();
      const colors = [];
      const sizes = [];
      const pickingColors = [];
      const pickingColor = new THREE.Color();
      const centers = [];
      const numSpheres = 30;
    
      const positions = [
        -0.5, -0.5,
         0.5, -0.5,
        -0.5,  0.5,
        -0.5,  0.5,
         0.5, -0.5,
         0.5,  0.5,
      ];
    
      const uvs = [
         0, 0,
         1, 0,
         0, 1,
         0, 1,
         1, 0,
         1, 1,
      ];
    
      for (let i = 0; i < numSpheres; i++) {
        colors[3 * i] = randomNormalizedColor();
        colors[3 * i + 1] = randomNormalizedColor();
        colors[3 * i + 2] = randomNormalizedColor();
    
        const rgbPickingColor = pickingColor.setHex(i + 1);
        pickingColors[3 * i] = rgbPickingColor.r;
        pickingColors[3 * i + 1] = rgbPickingColor.g;
        pickingColors[3 * i + 2] = rgbPickingColor.b;
    
        sizes[i] = getRandomInt(5);
    
        centers[3 * i] = getRandomInt(20);
        centers[3 * i + 1] = getRandomInt(20);
        centers[3 * i + 2] = getRandomInt(20);
      }
    
      geometry.setAttribute(
        "position",
        new THREE.Float32BufferAttribute(positions, 2)
      );
      geometry.setAttribute(
        "uv",
        new THREE.Float32BufferAttribute(uvs, 2)
      );
      geometry.setAttribute(
        "customColor",
        new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
      );
      geometry.setAttribute(
        "center",
        new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
      );
      geometry.setAttribute(
        "size",
        new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));
    
      const material = createParticleMaterial();
      const points = new THREE.InstancedMesh(geometry, material, numSpheres);
    
      // setup geometry and material for GPU picking
      pickingGeometry.setAttribute(
        "position",
        new THREE.Float32BufferAttribute(positions, 2)
      );
      pickingGeometry.setAttribute(
        "uv",
        new THREE.Float32BufferAttribute(uvs, 2)
      );
      pickingGeometry.setAttribute(
        "customColor",
        new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
      );
      pickingGeometry.setAttribute(
        "center",
        new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
      );
      pickingGeometry.setAttribute(
        "size",
        new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
      );
    
      const pickingMaterial = createPickingMaterial();
      const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);
    
      scene.add(points);
      pickingScene.add(pickingPoints);
    
      function resizeRendererToDisplaySize(renderer) {
        const canvas = renderer.domElement;
        const width = canvas.clientWidth;
        const height = canvas.clientHeight;
        const needResize = canvas.width !== width || canvas.height !== height;
        if (needResize) {
          renderer.setSize(width, height, false);
        }
        return needResize;
      }
    
      class GPUPickHelper {
        constructor() {
          // create a 1x1 pixel render target
          this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
          this.pixelBuffer = new Uint8Array(4);
        }
        pick(cssPosition, pickingScene, camera) {
          const { pickingTexture, pixelBuffer } = this;
    
          // set the view offset to represent just a single pixel under the mouse
          const pixelRatio = renderer.getPixelRatio();
          camera.setViewOffset(
            renderer.getContext().drawingBufferWidth, // full width
        renderer.getContext().drawingBufferHeight, // full height
            (cssPosition.x * pixelRatio) | 0, // rect x
            (cssPosition.y * pixelRatio) | 0, // rect y
            1, // rect width
            1 // rect height
          );
          // render the scene
          renderer.setRenderTarget(pickingTexture);
          renderer.render(pickingScene, camera);
          renderer.setRenderTarget(null);
          // clear the view offset so rendering returns to normal
          camera.clearViewOffset();
          //read the pixel
          renderer.readRenderTargetPixels(
            pickingTexture,
            0, // x
            0, // y
            1, // width
            1, // height
            pixelBuffer
          );
    
          const id =
            (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
    
          infoElem.textContent = `You clicked sphere number ${id}`;
    
          return id;
        }
      }
    
      const pickHelper = new GPUPickHelper();
    
      function render(time) {
        time *= 0.001; // convert to seconds;
    
        if (resizeRendererToDisplaySize(renderer)) {
          const canvas = renderer.domElement;
          camera.aspect = canvas.clientWidth / canvas.clientHeight;
          camera.updateProjectionMatrix();
        }
    
        cameraPole.rotation.y = time * 0.1;
    
        renderer.render(scene, camera);
    
        requestAnimationFrame(render);
      }
      requestAnimationFrame(render);
    
      function onClick(e) {
        const pickPosition = getCanvasRelativePosition(e);
        const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
      }
    
      function onTouch(e) {
        const touch = e.touches[0];
        const pickPosition = getCanvasRelativePosition(touch);
        const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
      }
    
      window.addEventListener("mousedown", onClick);
      window.addEventListener("touchstart", onTouch);
    }
    
    main();
    </script>
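A note on the picking scheme in the listing: sphere index + 1 is encoded as an RGB color (via `THREE.Color.setHex`, which stores normalized 0..1 floats), and `GPUPickHelper` decodes the integer back from the 0..255 pixel bytes it reads off the render target. The encode/decode pair, in isolation (integer-byte form, matching the decode side):

```javascript
// GPU picking encodes an integer id as an RGB triple (8 bits per
// channel) and decodes it back from the pixel read off the render
// target. Id 0 is reserved for "background / nothing picked", which
// is why the listing uses i + 1.
function idToRGB(id) {
  return [(id >> 16) & 255, (id >> 8) & 255, id & 255];
}

function rgbToId([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}
```

This gives 2^24 - 1 distinct pickable ids, far more than the tens of thousands of spheres in question.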

