WebGL2 -- How to store and retrieve 3D texture data needed by a 3D grid of vertices to calculate new vertex positions

Question

A 3D physics simulation needs access to neighboring vertices' positions and attributes in a shader to calculate a vertex's new position. The 2D version works, but I'm having trouble porting the solution to 3D. Flip-flopping two 3D textures seems right: feed in sets of x, y, and z coordinates for one texture and get back vec4s containing the position-velocity-acceleration data of neighboring points, which is used to calculate a new position and velocity for each vertex.

The 2D version uses one draw call with a framebuffer to save all the generated gl_FragColors to a sampler2D, and I want to use a framebuffer to do the same with a sampler3D. But it looks like with a framebuffer in 3D I need to write one or more layers of a second 3D texture at a time until all layers have been saved, and I'm confused about how to map the vertex grid to the relative x, y, z coordinates of the texture and how to save the results to the layers individually.

In the 2D version the gl_FragColor written to the framebuffer maps directly to the 2D x-y coordinate system of the canvas, with each pixel being a vertex, but I don't understand how to make sure a gl_FragColor containing position-velocity data for a 3D vertex is written to the texture so that it keeps mapping correctly to the 3D vertices.

This works for 2D in a fragment shader:

vec2 onePixel = vec2(1.0, 1.0)/u_textureSize;
vec4 currentState = texture2D(u_image, v_texCoord);
float fTotal = 0.0;
// a step of 2 visits only the four diagonal neighbors,
// so an i == 0 && j == 0 guard is unnecessary here
for (int i=-1; i<=1; i+=2){
    for (int j=-1; j<=1; j+=2){
        vec2 neighborCoord = v_texCoord + vec2(onePixel.x*float(i), onePixel.y*float(j));

        vec4 neighborState;
        // neighbors outside the texture are treated as at rest
        if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0){
            neighborState = vec4(0.0,0.0,0.0,1.0);
        } else {
            neighborState = texture2D(u_image, neighborCoord);
        }

        float deltaP = neighborState.r - currentState.r;
        float deltaV = neighborState.g - currentState.g;

        fTotal += u_kSpring*deltaP + u_dSpring*deltaV;
    }
}

float acceleration = fTotal/u_mass;
float velocity = acceleration*u_dt + currentState.g;
float position = velocity*u_dt + currentState.r;
gl_FragColor = vec4(position,velocity,acceleration,1);
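
For context, the 2D version's update pass is presumably driven by something like the following sketch (texA/texB, fbA/fbB, and the quad setup are illustrative names, not from the code above):

// Ping-pong between two 2D textures, each attached to its own framebuffer.
// Assumes the compute program and a full-canvas quad are already bound, and
// each texture is COLOR_ATTACHMENT0 of its corresponding framebuffer.
let src = { tex: texA, fb: fbA };
let dst = { tex: texB, fb: fbB };

function step() {
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fb); // write the new state here
  gl.bindTexture(gl.TEXTURE_2D, src.tex);     // read the previous state from here
  gl.drawArrays(gl.TRIANGLES, 0, 6);          // one pass updates every pixel/vertex
  [src, dst] = [dst, src];                    // flip-flop for the next iteration
}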

This is what I have attempted in 3D in a fragment shader:

#version 300 es

vec3 onePixel = vec3(1.0, 1.0, 1.0)/u_textureSize;
vec4 currentState = texture(u_image, v_texCoord);
float fTotal = 0.0;
int counter = 0;  // indexes the up-to-26 springs to this vertex's neighbors
for (int i=-1; i<=1; i++){
    for (int j=-1; j<=1; j++){
        for (int k=-1; k<=1; k++){
           if (i == 0 && j == 0 && k == 0) continue;
           vec3 neighborCoord = v_texCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
           vec4 neighborState;

           if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.z < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0 || neighborCoord.z >= 1.0){
               neighborState = vec4(0.0,0.0,0.0,1.0);
           } else {
               neighborState = texture(u_image, neighborCoord);
           }
           float deltaP =  neighborState.r - currentState.r;  // Distance from neighbor
           float springDeltaLength =  (deltaP - u_springOrigLength[counter]);

           // Add the force on our point of interest from the current neighbor point.
           // We'll be adding up to 26 of these together.
           fTotal += u_kSpring[counter]*springDeltaLength;
           counter++;
        }
    }
}

float acceleration = fTotal/u_mass;
float velocity = acceleration*u_dt + currentState.g;
float position = velocity*u_dt + currentState.r;
gl_FragColor = vec4(position,velocity,acceleration,1);  // note: gl_FragColor doesn't exist in GLSL ES 3.00 -- see the answer below

After I wrote that, I kept reading and found that a framebuffer can't write to all layers of a sampler3D at the same time. I need to somehow process 1-4 layers at a time, and I'm unsure both how to do that and how to make sure each gl_FragColor goes to the right pixel on the right layer.

I found this answer on SO: Render to 3D texture webgl2. It demonstrates writing to multiple layers of a framebuffer at a time, but I'm not seeing how to equate that with a fragment shader that, from one draw call, automatically runs 1,000,000 times (100 x 100 x 100, length x width x height), each time populating the right pixel in a sampler3D with the position-velocity-acceleration data, which I can then flip-flop to use for the next iteration.

I have no results yet. I'm hoping to make a first sampler3D programmatically, use it to generate new vertex data which is saved in a second sampler3D, and then switch textures and repeat.

Answer

WebGL is destination based. That means it does one operation for each result it wants to write to the destination. The only kinds of destinations you can set are points (squares of pixels), lines, and triangles in a 2D plane, so writing to a 3D texture requires handling each plane separately. At best you might be able to do N planes at a time, where N is 4 to 8, by setting up multiple attachments to a framebuffer, up to the maximum allowed number of attachments.
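
If you don't want to assume a number, both limits can be queried with standard WebGL2 parameters:

const maxAttachments = gl.getParameter(gl.MAX_COLOR_ATTACHMENTS); // at least 4 in WebGL2
const maxDrawBuffers = gl.getParameter(gl.MAX_DRAW_BUFFERS);      // at least 4 in WebGL2
const layersPerPass = Math.min(maxAttachments, maxDrawBuffers);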

So I'm assuming you understand how to render to 100 layers one at a time. At init time, either make 100 framebuffers and attach a different layer to each one, or at render time update a single framebuffer with a different attachment. Knowing how much validation happens per framebuffer, I'd choose making 100 framebuffers.

So:

const framebuffers = [];
for (let layer = 0; layer < numLayers; ++layer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 
    0, layer);
  framebuffers.push(fb);
}

Now at render time, render to each layer:

framebuffers.forEach((fb, layer) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // pass in the layer number to the shader it can use for calculations
  gl.uniform1f(layerLocation, layer);
  ....
  gl.drawXXX(...);
});
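
Inside that shader, the layer number can be turned into the z texture coordinate used to sample the source texture. A minimal sketch, with u_layer and u_numLayers as illustrative uniform names (not from the code above):

uniform float u_layer;      // set per draw via gl.uniform1f(layerLocation, layer)
uniform float u_numLayers;  // e.g. 100.0

// sample at the center of this layer's slab of texels
vec3 texCoord = vec3(v_texCoord, (u_layer + 0.5) / u_numLayers);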

WebGL1 does not support 3D textures so we know you're using WebGL2 since you mentioned using sampler3D.

In WebGL2 you generally use #version 300 es at the top of your shaders to signify you want to use the more modern GLSL ES 3.00.
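
One gotcha worth knowing: WebGL2 requires #version 300 es to be the very first line of the shader source, so a template literal that opens with a newline will fail to compile:

const fsBad = `
#version 300 es
...`;                        // broken: a newline precedes the directive

const fsGood = `#version 300 es
...`;                        // ok: the directive is the first line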

Drawing to multiple layers requires first figuring out how many layers you want to render to at once. WebGL2 supports a minimum of 4 color attachments, so we can just assume 4 layers. To do that you'd attach 4 layers to each framebuffer:

const layersPerFramebuffer = 4;
const framebuffers = [];
for (let baseLayer = 0; baseLayer < numLayers; baseLayer += layersPerFramebuffer) {
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  for (let layer = 0; layer < layersPerFramebuffer; ++layer) {
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + layer, texture, 0, baseLayer + layer);
  }
  framebuffers.push(fb);
}
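
It's worth verifying each framebuffer after attaching its layers; a minimal check:

const status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
if (status !== gl.FRAMEBUFFER_COMPLETE) {
  console.error('framebuffer incomplete: 0x' + status.toString(16));
}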

GLSL ES 3.00 shaders do not use gl_FragColor; they use a user-defined output, so we declare an array output:

out vec4 ourOutput[4];

and then use that just like you were previously using gl_FragColor, except with an index added. Below we process 4 layers. We pass in only a vec2 for v_texCoord and compute the 3rd coordinate from baseLayerTexCoord, something we pass in with each draw call:

in vec2 v_texCoord;
uniform float baseLayerTexCoord;

vec4 results[4];
vec3 onePixel = vec3(1.0, 1.0, 1.0)/u_textureSize;
const int numLayers = 4;
for (int layer = 0; layer < numLayers; ++layer) {
    vec3 baseTexCoord = vec3(v_texCoord, baseLayerTexCoord + onePixel.z * float(layer));
    vec4 currentState = texture(u_image, baseTexCoord);
    float fTotal = 0.0;
    int counter = 0;  // indexes the up-to-26 springs, as in the question
    for (int i=-1; i<=1; i++){
        for (int j=-1; j<=1; j++){
            for (int k=-1; k<=1; k++){
               if (i == 0 && j == 0 && k == 0) continue;
               vec3 neighborCoord = baseTexCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
               vec4 neighborState;

               if (neighborCoord.x < 0.0 || neighborCoord.y < 0.0 || neighborCoord.z < 0.0 || neighborCoord.x >= 1.0 || neighborCoord.y >= 1.0 || neighborCoord.z >= 1.0){
                   neighborState = vec4(0.0,0.0,0.0,1.0);
               } else {
                   neighborState = texture(u_image, neighborCoord);
               }
               float deltaP =  neighborState.r - currentState.r;  // Distance from neighbor
               float springDeltaLength =  (deltaP - u_springOrigLength[counter]);

               // Add the force on our point of interest from the current neighbor point.
               // We'll be adding up to 26 of these together.
               fTotal += u_kSpring[counter]*springDeltaLength;
               counter++;
            }
        }
    }

    float acceleration = fTotal/u_mass;
    float velocity = acceleration*u_dt + currentState.g;
    float position = velocity*u_dt + currentState.r;
    results[layer] = vec4(position,velocity,acceleration,1);
}
ourOutput[0] = results[0];
ourOutput[1] = results[1];
ourOutput[2] = results[2];
ourOutput[3] = results[3];

The last thing to do is call gl.drawBuffers to tell WebGL2 where to store the outputs. Draw-buffer settings are part of each framebuffer's state in WebGL2, so they're set while the corresponding framebuffer is bound. Since we're doing 4 layers at a time we'd use:

framebuffers.forEach((fb, ndx) => {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // drawBuffers state lives on the bound framebuffer
  gl.drawBuffers([
    gl.COLOR_ATTACHMENT0,
    gl.COLOR_ATTACHMENT1,
    gl.COLOR_ATTACHMENT2,
    gl.COLOR_ATTACHMENT3,
  ]);
  gl.uniform1f(baseLayerTexCoordLocation, (ndx * layersPerFramebuffer + 0.5) / numLayers);
  ....
  gl.drawXXX(...);
});

Example:

function main() {
  const gl = document.querySelector('canvas').getContext('webgl2');
  if (!gl) {
    return alert('need webgl2');
  }
  const ext = gl.getExtension('EXT_color_buffer_float');
  if (!ext) {
    return alert('need EXT_color_buffer_float');
  }
  
  const vs = `#version 300 es
  in vec4 position;
  out vec2 v_texCoord;
  void main() {
    gl_Position = position;
    // position will be a quad -1 to +1 so we
    // can use that for our texcoords
    v_texCoord = position.xy * 0.5 + 0.5;
  }
  `;
  
  const fs = `#version 300 es
precision highp float;
in vec2 v_texCoord;
uniform float baseLayerTexCoord;
uniform highp sampler3D u_image;
uniform mat3 u_kernel[3];

out vec4 ourOutput[4];

void main() {
  vec3 texSize = vec3(textureSize(u_image, 0));
  vec3 onePixel = vec3(1.0, 1.0, 1.0)/texSize;
  const int numLayers = 4;
  vec4 results[4];
  for (int layer = 0; layer < numLayers; ++layer) {
      vec3 baseTexCoord = vec3(v_texCoord, baseLayerTexCoord + onePixel.z * float(layer));
      vec4 color = vec4(0);
      for (int i=-1; i<=1; i++){
          for (int j=-1; j<=1; j++){
              for (int k=-1; k<=1; k++){
                 vec3 neighborCoord = baseTexCoord + vec3(onePixel.x*float(i), onePixel.y*float(j), onePixel.z*float(k));
                 color += u_kernel[k + 1][j + 1][i + 1] * texture(u_image, neighborCoord);
              }
          }
      }

      results[layer] = color;
  }
  ourOutput[0] = results[0];
  ourOutput[1] = results[1];
  ourOutput[2] = results[2];
  ourOutput[3] = results[3];
}
  `;
  const vs2 = `#version 300 es
  uniform vec4 position;
  uniform float size;
  void main() {
    gl_Position = position;
    gl_PointSize = size;
  }
  `;
  const fs2 = `#version 300 es
  precision highp float;
  uniform highp sampler3D u_image;
  uniform float slice;
  out vec4 outColor;
  void main() {
    outColor = texture(u_image, vec3(gl_PointCoord.xy, slice));
  }
  `;
  
  const computeProgramInfo = twgl.createProgramInfo(gl, [vs, fs]);
  const drawProgramInfo = twgl.createProgramInfo(gl, [vs2, fs2]);
  
  const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
    position: {
      numComponents: 2,
      data: [
        -1, -1,
         1, -1,
        -1,  1,
        -1,  1,
         1, -1,
         1,  1,
      ],
    },
  });

  function create3DTexture(gl, size) {
    const tex = gl.createTexture();
    const data = new Float32Array(size * size * size * 4);
    for (let i = 0; i < data.length; i += 4) {
      data[i + 0] = i % 100 / 100;
      data[i + 1] = i % 10000 / 10000;
      data[i + 2] = i % 100000 / 100000;
      data[i + 3] = 1;
    }
    gl.bindTexture(gl.TEXTURE_3D, tex);
    gl.texImage3D(gl.TEXTURE_3D, 0, gl.RGBA32F, size, size, size, 0, gl.RGBA, gl.FLOAT, data);

    gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    return tex;
  }

  const size = 100;
  let inTex = create3DTexture(gl, size);
  let outTex = create3DTexture(gl, size);
  const numLayers = size;
  const layersPerFramebuffer = 4;
  
  function makeFramebufferSet(gl, tex) {
    const framebuffers = [];
    for (let baseLayer = 0; baseLayer < numLayers; baseLayer += layersPerFramebuffer) {
      const fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      for (let layer = 0; layer < layersPerFramebuffer; ++layer) {
        gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + layer, tex, 0, baseLayer + layer);
      }
      framebuffers.push(fb);
    }
    return framebuffers;
  };
  
  let inFramebuffers = makeFramebufferSet(gl, inTex);
  let outFramebuffers = makeFramebufferSet(gl, outTex);

  function render() {
    gl.viewport(0, 0, size, size);
    gl.useProgram(computeProgramInfo.program);
    twgl.setBuffersAndAttributes(gl, computeProgramInfo, bufferInfo);

    outFramebuffers.forEach((fb, ndx) => {
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      gl.drawBuffers([
        gl.COLOR_ATTACHMENT0,
        gl.COLOR_ATTACHMENT1,
        gl.COLOR_ATTACHMENT2,
        gl.COLOR_ATTACHMENT3,
      ]);

      const baseLayerTexCoord = (ndx * layersPerFramebuffer + 0.5) / numLayers;
      twgl.setUniforms(computeProgramInfo, {
        baseLayerTexCoord,
        u_kernel: [
          0, 0, 0,
          0, 0, 0,
          0, 0, 0,

          0, 0, 1,
          0, 0, 0,
          0, 0, 0,

          0, 0, 0,
          0, 0, 0,
          0, 0, 0,
        ],
        u_image: inTex,      
      });

      gl.drawArrays(gl.TRIANGLES, 0, 6);
    });

    {
      const t = inFramebuffers;
      inFramebuffers = outFramebuffers;
      outFramebuffers = t;
    }

    {
      const t = inTex;
      inTex = outTex;
      outTex = t;
    }

    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.drawBuffers([gl.BACK]);
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

    gl.useProgram(drawProgramInfo.program);

    const slices = 10.0;
    const sliceSize = 25.0;
    for (let slice = 0; slice < slices; ++slice) {
      const sliceZTexCoord = (slice / slices * size + 0.5) / size;
      twgl.setUniforms(drawProgramInfo, {
        position: [
          ((slice * (sliceSize + 1) + sliceSize * .5) / gl.canvas.width * 2) - 1,
          0,
          0,
          1,
        ],
        slice: sliceZTexCoord,
        size: sliceSize,
      });
      gl.drawArrays(gl.POINTS, 0, 1);
    }
    
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
}

main();


function glEnumToString(gl, v) {
  const hits = [];
  for (const key in gl) {
    if (gl[key] === v) {
      hits.push(key);
    }
  }
  return hits.length ? hits.join(' | ') : `0x${v.toString(16)}`;
}

<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>

Some other things to note: in GLSL ES 3.00 you don't need to pass in a texture size, since you can query the texture size with the textureSize function. It returns an ivec2 or ivec3 depending on the type of texture.
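
For example, matching the u_image sampler3D used above:

ivec3 texSize = textureSize(u_image, 0);  // sampler3D at mip level 0 -> ivec3
vec3 onePixel = 1.0 / vec3(texSize);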

You can also use texelFetch instead of texture. texelFetch takes an integer texel coordinate and a mip level, so for example vec4 color = texelFetch(some3DTexture, ivec3(12, 23, 45), 0); gets the texel at x = 12, y = 23, z = 45 from mip level 0. That means you don't need the onePixel math you have in your code if you find it easier to work with pixels instead of normalized texture coordinates.
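
Applied to the neighbor loop above, a texelFetch version might look like this sketch (i, j, and k are the loop variables from the earlier code, and layer is assumed to be the absolute z index of the texel being written):

ivec3 texSize = textureSize(u_image, 0);
ivec3 center = ivec3(gl_FragCoord.xy, layer);  // this fragment's texel in the layer being written
ivec3 coord = center + ivec3(i, j, k);         // the neighbor's texel
vec4 neighborState;
if (any(lessThan(coord, ivec3(0))) || any(greaterThanEqual(coord, texSize))) {
    neighborState = vec4(0.0, 0.0, 0.0, 1.0);  // off the grid: treat as at rest
} else {
    neighborState = texelFetch(u_image, coord, 0);
}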
