Help me solve this bug with my ray tracer

Problem Description

I'm not going to post any code for this question because it would require way too much context, but I shall explain conceptually what I'm doing.

I'm building a simple ray tracer that uses affine transformations. What I mean is that I'm intersecting all rays from camera coordinates with generic shapes. The shapes all have associated affine transformations, and the rays are first multiplied by the inverses of these transformations before intersecting with scene objects.
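For concreteness, here is a minimal sketch (not the asker's actual code) of what that ray transformation step typically looks like; the `Ray` class and helper names are assumptions:

```python
import numpy as np

class Ray:
    """A simple ray with a 3D origin and (unnormalized) direction."""
    def __init__(self, origin, direction):
        self.origin = np.asarray(origin, dtype=float)
        self.direction = np.asarray(direction, dtype=float)

def transform_ray(ray, matrix):
    """Apply a 4x4 affine matrix to a ray: points use w=1, directions w=0."""
    o = matrix @ np.append(ray.origin, 1.0)     # translated, rotated, scaled
    d = matrix @ np.append(ray.direction, 0.0)  # rotated and scaled only
    return Ray(o[:3], d[:3])

def to_object_space(ray, object_to_world):
    """Object-space ray = inverse of the object transform applied to the world ray."""
    return transform_ray(ray, np.linalg.inv(object_to_world))
```

One detail worth keeping in mind with this scheme: if the object-space direction is not re-normalized, the same t value measures the hit along both the world-space and object-space rays; re-normalizing after a scale breaks that correspondence.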

So for example, say I wanted a sphere of radius 3 positioned at (10,10,10). I create the sphere and give it a transformation matrix representing this placement.
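As a sketch, assuming column vectors and 4x4 homogeneous matrices, that matrix could be built like this (scale first, then translate):

```python
import numpy as np

def scale(s):
    """Uniform scale as a 4x4 homogeneous matrix."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

def translate(tx, ty, tz):
    """Translation as a 4x4 homogeneous matrix."""
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

# Matrices apply right-to-left to column vectors: scale the unit sphere
# by 3, then move it to (10, 10, 10).
sphere_to_world = translate(10, 10, 10) @ scale(3)
```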

I create a ray in camera coordinates. I multiply the ray by the inverse of the sphere's transformation matrix and intersect it with the generic sphere (r = 1 at (0,0,0)). I take the distance along this generic ray at the intersection point, and using it I find the generic normal and the point along the original ray; I save these into a Transformation object (along with the distance t and the actual transformation).
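The generic intersection itself is the standard quadratic for a unit sphere at the origin; a sketch, assuming the object-space ray from above:

```python
import numpy as np

def intersect_unit_sphere(origin, direction):
    """Smallest non-negative t with |origin + t*direction| = 1, or None."""
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(origin, direction)
    c = np.dot(origin, origin) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                       # the ray misses the sphere
    sqrt_disc = np.sqrt(disc)
    t = (-b - sqrt_disc) / (2.0 * a)      # try the nearer root first
    if t < 0.0:
        t = (-b + sqrt_disc) / (2.0 * a)  # ray origin may be inside the sphere
    return t if t >= 0.0 else None
```

The object-space hit point `origin + t*direction` doubles as the generic normal, since the normal of a unit sphere at a surface point p is p itself.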

When it comes time to figure out the colour of this intersection point, I take the transformation's inverse transpose and multiply it by the generic normal to find the world-space normal. The point of intersection is just the point along the original non-transformed ray, if I use the t value from the intersection of the inverse-transformed ray.
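A sketch of that normal step, assuming `object_to_world` is the sphere's transform; only the upper-left 3x3 block matters for direction-like quantities such as normals:

```python
import numpy as np

def transform_normal(normal, object_to_world):
    """World-space normal via the inverse transpose of the 3x3 linear part."""
    n = np.linalg.inv(object_to_world[:3, :3]).T @ normal
    return n / np.linalg.norm(n)
```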

The problem is, when I do things this way, the transformations have weird effects. The main effect is that transformations seem to drag lights from the scene along with them. If I render a sequence of images and apply a slightly larger rotation to the sphere in each frame, the sphere seems to drag the lights in the scene around with it. Here's an example.

I honestly cannot figure out what I'm doing wrong here, but I'm tearing my hair out. I can't think of any good reason whatsoever for this to be happening. Any help would be hugely appreciated.

Solution

You have made the decision to do intersections in object coordinates rather than world coordinates. IMHO that is an error (unless you're doing lots of instancing). However, given that, you should compute the point of intersection in object space as well as the normal in object space. These need to be converted back to world coordinates using the object's transformation - NOT its inverse. That is how the object gets to world space, and how everything in object space gets to world space. Offhand I'm not certain how to transform the t parameter, so I'd go with transforming the intersection point initially until you get correct results.
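A minimal sketch of that suggestion, with assumed names: compute the hit point on the object-space ray, then map it back with the forward (object-to-world) matrix rather than reusing t on the world ray:

```python
import numpy as np

def world_hit_point(obj_origin, obj_direction, t, object_to_world):
    """Hit point in object space, mapped to world space with the forward transform."""
    p_obj = obj_origin + t * obj_direction           # hit on the object-space ray
    p_hom = object_to_world @ np.append(p_obj, 1.0)  # points transform with w = 1
    return p_hom[:3]
```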
