Ray-Tracing Camera
Question
I am currently working on ray-tracing techniques and I think I've done a pretty good job so far; however, I haven't covered the camera yet.

Until now, I have used a plane fragment as the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value; it can be changed]. In addition to that, I mostly place the camera at e(0, 0, 1000) and use perspective projection. I send rays from point e through the pixels, and after calculating a pixel's color I write it to the corresponding pixel of the image.
Here is an image I created. Hopefully you can guess where the eye and the view plane are by looking at it.
My question starts here. It's time to move my camera around, but I don't know how to map 2D view-plane coordinates to the canonical coordinates. Is there a transformation matrix for that? The method I have in mind requires knowing the 3D coordinates of the pixels on the view plane. I am not sure it's the right method to use. So what do you suggest?
P.S.: I hope I could express myself clearly, because trying to explain this without a drawing is a real challenge for me.
Answer
There are a variety of ways to do it. Here's what I do:
- Choose a point to represent the camera location (camera_position).
- Choose a vector that indicates the direction the camera is looking (camera_direction). (If you know a point the camera is looking at, you can compute this direction vector by subtracting camera_position from that point.) You probably want to normalize camera_direction, in which case it's also the normal vector of the image plane.
- Choose another normalized vector that's (approximately) "up" from the camera's point of view (camera_up).
- camera_right = Cross(camera_direction, camera_up)
- camera_up = Cross(camera_right, camera_direction) (This corrects for any slop in the choice of "up".)
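The steps above can be sketched in runnable form. Here is a minimal Python version using plain tuples; the look-at point and the "up" hint are my own assumed inputs, and `camera_basis` is a hypothetical helper name, not from the original answer:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    x, y, z = v
    n = math.sqrt(x*x + y*y + z*z)
    return (x/n, y/n, z/n)

def cross(a, b):
    # Right-handed cross product of two 3-vectors.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def camera_basis(camera_position, look_at, up_hint=(0.0, 1.0, 0.0)):
    # Direction from the eye toward the point it is looking at.
    camera_direction = normalize(tuple(l - p for l, p in
                                       zip(look_at, camera_position)))
    camera_right = normalize(cross(camera_direction, up_hint))
    # Recompute "up" so the three vectors are mutually perpendicular,
    # correcting for any slop in the up hint.
    camera_up = cross(camera_right, camera_direction)
    return camera_direction, camera_right, camera_up
```

For the camera in the question, `camera_basis((0.0, 0.0, 1000.0), (0.0, 0.0, 0.0))` yields direction (0, 0, -1), right (1, 0, 0), and up (0, 1, 0).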
Visualize the "center" of the image plane at camera_position + camera_direction. The up and right vectors lie in the image plane.
You can choose a rectangular section of the image plane to correspond to your screen. The ratio of the width or height of this rectangular section to the length of camera_direction determines the field of view. To zoom in, you can increase the length of camera_direction or decrease the width and height of the section. Do the opposite to zoom out.
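To make that relationship concrete: a section of width w viewed from distance |camera_direction| = d gives a horizontal field of view of 2·atan(w / 2d). A small helper (the function name is my own, not from the answer):

```python
import math

def horizontal_fov_degrees(section_width, direction_length):
    # Half the section width against the eye-to-plane distance is the
    # tangent of the frustum's half-angle; double it for the full FOV.
    half_angle = math.atan((section_width / 2.0) / direction_length)
    return math.degrees(2.0 * half_angle)
```

For example, a section of width 2 at distance 1 gives a 90-degree field of view, and lengthening camera_direction (zooming in) shrinks the angle.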
So given a pixel position (i, j), you want the (x, y, z) of that pixel on the image plane. From that you can subtract camera_position to get a ray vector (which then needs to be normalized).
Ray ComputeCameraRay(int i, int j) {
    const float width = 512.0;   // pixels across
    const float height = 512.0;  // pixels high

    double normalized_i = (i / width) - 0.5;
    double normalized_j = (j / height) - 0.5;

    Vector3 image_point = normalized_i * camera_right +
                          normalized_j * camera_up +
                          camera_position + camera_direction;

    Vector3 ray_direction = image_point - camera_position;

    return Ray(camera_position, ray_direction);
}
This is meant to be illustrative, so it is not optimized.
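Since Vector3 and Ray are left undefined in the snippet above, here is the same computation as a self-contained Python sketch; the tuple math and explicit parameters are my substitutes for the pseudocode's globals and vector class:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    x, y, z = v
    n = math.sqrt(x*x + y*y + z*z)
    return (x/n, y/n, z/n)

def compute_camera_ray(i, j, width, height,
                       camera_position, camera_direction,
                       camera_right, camera_up):
    # Map the pixel index to [-0.5, 0.5] across the view-plane section.
    ni = (i / width) - 0.5
    nj = (j / height) - 0.5
    # Point on the image plane, whose center sits at
    # camera_position + camera_direction.
    image_point = tuple(p + d + ni*r + nj*u
                        for p, d, r, u in zip(camera_position,
                                              camera_direction,
                                              camera_right,
                                              camera_up))
    ray_direction = normalize(tuple(ip - p for ip, p in
                                    zip(image_point, camera_position)))
    return camera_position, ray_direction
```

For the center pixel (256, 256) of a 512x512 image with the camera at the origin looking down -z, this returns a ray straight along (0, 0, -1), as expected.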