Raytracer - Computing Eye Rays


Question

I'm writing a ray tracer (mostly for fun) and whilst I've written one in the past, and spent a decent amount of time searching, no tutorials seem to shed light on the way to calculate the eye rays in a perspective projection, without using matrices.

I believe the last time I did it was by (potentially) inefficiently rotating the eye vectors x/y degrees from the camera direction vector using a Quaternion class. This was in C++, and I'm doing this one in C#, though that's not so important.

Pseudocode (assuming V * Q = transform operation)

yDiv = fovy / height
xDiv = fovx / width

for x = 0 to width
    for y = 0 to height

        xAng = (x - width / 2) * xDiv
        yAng = (y - height / 2) * yDiv
        Q1 = up vector, xAng
        Q2 = camera right vector, yAng
        Q3 = mult(Q1, Q2)

        pixelRay = transform(Q3, camera direction)
        raytrace pixelRay

    next
next
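The per-pixel quaternion rotation sketched above can be written out concretely. This is a minimal illustration in Python rather than the original C#; `quat_from_axis_angle`, `quat_mul`, and `rotate` are illustrative helpers that apply the standard q·v·q⁻¹ rotation of a vector by a unit quaternion:

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about a unit `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(v, q):
    """Rotate vector v by unit quaternion q via q * v * q^-1 (v as a pure quaternion)."""
    qc = (q[0], -q[1], -q[2], -q[3])
    w = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])), qc)
    return (w[1], w[2], w[3])

# Mirrors the pseudocode: Q3 = mult(Q1, Q2); pixelRay = transform(Q3, camera direction)
up, right, forward = (0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, -1.0)
q1 = quat_from_axis_angle(up, math.radians(10))     # xAng about the up vector
q2 = quat_from_axis_angle(right, math.radians(5))   # yAng about the right vector
pixel_ray = rotate(forward, quat_mul(q1, q2))
```

Note that because each pixel steps by an equal *angle*, the resulting directions sample a sphere around the camera rather than a flat image plane, which is exactly the artifact described below.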

I think the actual problem with this is that it's simulating a spherical screen surface, not a flat screen surface.

Mind you, whilst I know how and why to use cross products, dot products, matrices and such, my actual 3D mathematics problem solving skills aren't fantastic.

Given:

  • camera position, direction and up vectors
  • the field of view
  • screen pixel and/or subsampling divisions

What is the actual method to produce an eye ray for x/y pixel coordinates for a raytracer?

To clarify: I know exactly what I'm trying to calculate; I'm just not great at coming up with the 3D math to compute it, and no ray tracer code I've found seems to include the code I need to compute the eye ray for an individual pixel.

Answer

INPUT: camera_position_vec, direction_vec, up_vec, screen_distance

right_vec = direction_vec x up_vec
for y from 0 to 1600:
    for x from 0 to 2560:
        # location of point in 3d space on screen rectangle
        P_3d = camera_position_vec + screen_distance*direction_vec
               + (y-800)*-up_vec
               + (x-1280)*right_vec

        ray = Ray(camera_position_vec, P_3d)
        yield "the eye-ray for pixel (x, y) is `ray`"

(`x` denotes the cross product.)

edit: The answer assumes that direction_vec is normalized, as it should be. right_vec appears in the picture (seemingly where the left should be), but right_vec is not necessary and, if included, should always point in the same direction as -(up_vec x direction_vec). Furthermore, the picture implies that the x-coordinate increases as one goes right and the y-coordinate increases as one goes down; the signs have been changed slightly to reflect that. A zoom may be performed either by multiplying the x- and y-terms in the equation or, more efficiently, by scaling the vectors and using scaled_up_vec and scaled_right_vec. A zoom is, however, equivalent (since aperture doesn't matter; this is a perfect pinhole camera) to changing the field of view (FoV), which is a much nicer quantity to deal with than an arbitrary "zoom". For information about how to implement FoV, see my comment below.
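Putting the recipe and the FoV note together, the flat-screen eye-ray generation can be sketched as runnable code. This is a minimal sketch, not the answerer's exact implementation: it assumes a vertical field of view, square pixels, and an image plane at unit distance, and `eye_ray` plus its helpers are illustrative names:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def eye_ray(x, y, width, height, cam_pos, direction, up, fov_y_deg):
    """Ray through pixel (x, y) on a flat image plane one unit in front of the camera."""
    direction = normalize(direction)
    right = normalize(cross(direction, up))   # camera right vector
    true_up = cross(right, direction)         # re-orthogonalized up vector
    # Half-extent of the image plane at distance 1 follows from the FoV.
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * (width / height)        # keep pixels square
    # Map the pixel centre to [-1, 1] plane coordinates (screen y grows downward).
    u = (2.0 * (x + 0.5) / width - 1.0) * half_w
    v = (1.0 - 2.0 * (y + 0.5) / height) * half_h
    ray_dir = tuple(d + u * r + v * t
                    for d, r, t in zip(direction, right, true_up))
    return cam_pos, normalize(ray_dir)
```

Because every pixel is offset linearly along `right` and `true_up` on the same plane (rather than by equal angles), this produces the flat-screen projection the question asks for.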
