How can I properly write this shader function in JS?


Problem description



What I want to happen:

For testing a game art style I thought of, I want to render a 3D world in pixel-art form. So for example, take a scene like this (but rendered with a certain coloring / style so as to look good once pixelated):

[screenshot: the full-resolution 3D render]

And make it look something like this:

[screenshot: the same scene pixelated]

By playing with different ways of styling the 3D source, I think the pixelated output could look nice. Of course, to get this effect one could just size the image down to ~80p and upscale it to 1080p with nearest-neighbor resampling. But it's more efficient to render straight to an 80p canvas to begin with and just do the upscaling.
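For reference, the straightforward version of that upscale with the 2D canvas API looks something like this (a sketch; smallCanvas and displayCanvas are illustrative names for the ~80p render and the 1080p display canvas):

var ctx = displayCanvas.getContext('2d');
ctx.imageSmoothingEnabled = false; // nearest-neighbor sampling on the upscale
ctx.drawImage(smallCanvas,
  0, 0, smallCanvas.width, smallCanvas.height,      // full low-res source
  0, 0, displayCanvas.width, displayCanvas.height); // stretched to full size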

Resizing a bitmap with nearest-neighbor sampling is not a typical use for a shader, but its performance is better than any other way I've found to do such a conversion in real time.

My code:

My buffer for the bitmap is stored in row-major order, as r1, g1, b1, a1, r2, g2, b2, a2..., and I'm using gpu.js, which essentially converts this JS function into a shader. My goal is to take one bitmap and return one at a larger scale with nearest-neighbor scaling, so each pixel becomes a 2x2 square, or 3x3, and so on. Assume inputBuffer is a scaled-down fraction of the size of the output, which is determined by the setOutput method.

var pixelateMatrix = gpu.createKernel(function(inputBuffer, width, height, scale) {
  var y = Math.floor((this.thread.x / (width[0] * 4)) / scale[0]); // source row for this output index
  var x = Math.floor((this.thread.x % (width[0] * 4)) / scale[0]); // intended source column, from the byte offset within the row
  var remainder = this.thread.x % 4; // channel offset: 0 = r, 1 = g, 2 = b, 3 = a
  return inputBuffer[(x * y) + remainder];
}).setOutput([width * height * 4]);

JSFiddle

Keep in mind the kernel iterates over a new buffer at the full output size, so I have to find the correct coordinates in the smaller inputBuffer based on the current index into the output buffer (the index is exposed by the lib as this.thread.x).
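Spelled out, the mapping I'm aiming for looks like this (a sketch as it would run inside the kernel, with plain scalars rather than the one-element arrays gpu.js receives; smallWidth is an assumed name for the source buffer's pixel width, i.e. width / scale):

var i = this.thread.x;                // index into the full-size output buffer
var channel = i % 4;                  // 0 = r, 1 = g, 2 = b, 3 = a
var outX = Math.floor(i / 4) % width; // output pixel coordinates
var outY = Math.floor(i / (4 * width));
var srcX = Math.floor(outX / scale);  // nearest source pixel
var srcY = Math.floor(outY / scale);
return inputBuffer[((srcY * smallWidth) + srcX) * 4 + channel];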

What's happening instead:

This, instead of making a nearest-neighbor upscale, is making a nice little rainbow (above is the small normal render, below is the result of the shader, and to the right you can see some debug logging with stats about the input and output buffers):

[screenshot: the rainbow output with debug stats]

What am I doing wrong?

Note: I asked a related question here, Is there a simpler (and still performant) way to upscale a canvas render with nearest neighbor resampling?

Solution

Final Answer

Tarun's answer helped me get to my final solution, so his bounty was well deserved. But I also learned about a gpu.js feature (graphical output paired with context sharing, so the buffer is output directly to the render target) that allows roughly 30x faster rendering, bringing the total time for shading and rendering the output down from 30ms+ to ~1ms. And that's without a further optimization I now know is possible (sending the array buffer to the GPU even faster); I just didn't have any motivation to get the shading / rendering time below 1ms.

var canvas = document.createElement('canvas');
canvas.width = width;
canvas.height = height;
document.body.appendChild(canvas);
// Share the canvas and its WebGL context with gpu.js so the kernel
// draws straight into the render target instead of returning a buffer.
var gl = canvas.getContext('webgl');
var gpu = new GPU({
  canvas,
  gl
});
var pixelateMatrix = gpu.createKernel(function(inputBuffer, width, scale, size, purity, div) {
    // Map this output pixel back to its nearest-neighbor source pixel.
    var subX = Math.floor(this.thread.x / scale[0]);
    var subY = Math.floor(this.thread.y / scale[0]);
    // Byte offset of that source pixel in the row-major RGBA buffer.
    var subIndex = ((subX * 4) + (subY * width[0] * 4));
    var rIndex = subIndex;
    var gIndex = subIndex + 1;
    var bIndex = subIndex + 2;
    // Blend each channel with its horizontal neighbors: the center sample
    // is weighted by purity and the sum is normalized by div.
    var r = ((inputBuffer[rIndex] * purity[0]) + inputBuffer[rIndex - 4] + inputBuffer[rIndex + 4]) / (div[0]);
    var g = ((inputBuffer[gIndex] * purity[0]) + inputBuffer[gIndex - 4] + inputBuffer[gIndex + 4]) / (div[0]);
    var b = ((inputBuffer[bIndex] * purity[0]) + inputBuffer[bIndex - 4] + inputBuffer[bIndex + 4]) / (div[0]);
    // Graphical mode: write the pixel color directly (normalized to 0..1).
    this.color(r / 255, g / 255, b / 255);
  }).setOutput([width, height]).setGraphical(true);

inputBuffer is simply the buffer retrieved via three.js's readRenderTargetPixels method.

renderer.render(scene, camera, rt); // render the scene at low resolution into the render target
renderer.readRenderTargetPixels(rt, 0, 0, smallWidth, smallHeight, frameBuffer); // copy the target's pixels into frameBuffer
pixelateMatrix(frameBuffer, [smallWidth], [scale], [size], [purity], [div]); // shade and draw straight to the kernel's canvas
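Since the whole point is real-time conversion, those three calls run once per frame. A minimal driver loop would look like this (a sketch; it assumes frameBuffer is preallocated as a Uint8Array of smallWidth * smallHeight * 4):

var frameBuffer = new Uint8Array(smallWidth * smallHeight * 4);
function tick() {
  renderer.render(scene, camera, rt);
  renderer.readRenderTargetPixels(rt, 0, 0, smallWidth, smallHeight, frameBuffer);
  pixelateMatrix(frameBuffer, [smallWidth], [scale], [size], [purity], [div]);
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);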

Side Note

Can we just marvel for a moment at how much power WebGL brings to the browser? That's 8.2944 million multi-operation tasks (1920 x 1080 pixels x 4 channels = 8,294,400) carried out in just ~1ms, which works out to a maximum of roughly 64 billion total math ops per second for my shader by my count. That's insanity. Can that even be right? Is my math wrong on that? I see Nvidia's self-driving AI performs 24 trillion ops/s, so I guess these numbers on my 1060 are within the realm of possibility. It's just incredible though.

GPU.js does a fantastic job of optimizing matrix operations to run on the GPU without the need to learn shader code, and the creator is extremely active on the project, usually responding to issues within hours. I highly recommend you give the lib a try. It's especially awesome for machine learning throughput.
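As a taste of that, the kind of kernel gpu.js is built around is a plain matrix multiply like the following sketch (the 512 x 512 size is illustrative):

var multiply = gpu.createKernel(function(a, b) {
  var sum = 0;
  // Dot product of row this.thread.y of a with column this.thread.x of b.
  for (var i = 0; i < 512; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}).setOutput([512, 512]);
var c = multiply(a, b); // a and b are 512x512 arrays of numbers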
