Why is my FPS camera rolling, once and for all?


Problem description


If I ignore the sordid details of quaternion algebra, I think I understand the maths behind rotation and translation transformations, but I still fail to understand what I am doing wrong.

Why is my camera rolling!! :)


And to be a bit more specific, how should I compute the camera View matrix from its orientation (rotation matrix)?


I am writing a minimalistic 3d engine in Python with a scene Node class that handles the mechanics of rotation and translation of 3d objects. It has methods that expose the Rotation and Translation matrices as well as the Model matrix.


There is also a CameraNode class, a subclass of Node, that also exposes the View and Projection matrices (projection is not the problem, so we can ignore it).


In order to correctly apply the transformations I multiply the matrices as follows:

P × V × M × v


i.e. first the Model, then the View and finally the Projection.


Where M is computed by first applying the rotation and then the translation:

M = T × R
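
As a minimal standalone sketch of that ordering (assuming plain NumPy 4×4 arrays; P, V, T, R and the vertex below are placeholders, not the engine's own objects):

import numpy as np

# Placeholder matrices standing in for Projection, View, Translation and Rotation.
P = np.eye(4, dtype=np.float32)
V = np.eye(4, dtype=np.float32)
T = np.eye(4, dtype=np.float32)
R = np.eye(4, dtype=np.float32)

M = T @ R                                              # Model: rotate first, then translate
v = np.array([1.0, 2.0, 3.0, 1.0], dtype=np.float32)  # homogeneous vertex

v_clip = P @ V @ M @ v                                 # Model, then View, then Projection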

The code is as follows:

import numpy as np
import quaternion as qua  # assuming `qua` is the numpy-quaternion package


class Node():
    # ...

    def model_mat(self):
        # scale, then rotate, then translate (the rightmost factor is applied first)
        return self.translation_mat() @ self.rotation_mat() @ self.scaling_mat()

    def translation_mat(self):
        translation = np.eye(4, dtype=np.float32)
        translation[:-1, -1] = self.position  # self.position is an ndarray
        return translation

    def rotation_mat(self):
        rotation = np.eye(4, dtype=np.float32)
        rotation[:-1, :-1] = qua.as_rotation_matrix(self.orientation)  # self.orientation is a quaternion object
        return rotation


View Matrix

I am computing the View matrix based on the camera position and orientation, as follows:

class CameraNode(Node):
    # ...

    def view_mat(self):
        trans = self.translation_mat()
        rot = self.rotation_mat()
        trans[:-1, 3] = -trans[:-1, 3]  # <-- cheap inverse of the translation: negate the offset
        rot = rot.T                     # <-- cheap inverse of the rotation: transpose (orthonormal)
        self.view = rot @ trans
        return self.view


Please correct me if I am wrong. Since we can only move and rotate the world geometry (as opposed to moving/rotating the camera), I have to multiply the matrices in the reverse order and also apply the opposite transformations (effectively the inverse of each transformation matrix). In other words, moving the camera away from an object can also be seen as moving the object away from the camera.
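
A small standalone sanity check of that idea (my sketch, not part of the engine): the "efficient" inverse used in view_mat above, i.e. transposing the rotation and negating the translation while reversing the order, matches a brute-force np.linalg.inv of the camera's model matrix:

import numpy as np

# An arbitrary camera pose: a rotation about the y axis and a translation.
angle = 0.5
R = np.eye(4, dtype=np.float32)
R[:3, :3] = [[np.cos(angle),  0.0, np.sin(angle)],
             [0.0,            1.0, 0.0],
             [-np.sin(angle), 0.0, np.cos(angle)]]
T = np.eye(4, dtype=np.float32)
T[:3, 3] = [1.0, 2.0, 3.0]

model = T @ R                  # camera's model matrix (rotate, then translate)

T_inv = T.copy()
T_inv[:3, 3] = -T_inv[:3, 3]   # negate the translation
view = R.T @ T_inv             # transpose the rotation, reverse the order

print(np.allclose(view, np.linalg.inv(model), atol=1e-5))  # True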


Now, here's how I convert keyboard input into camera rotation. When I press the right/left/up/down arrow keys I am calling the following methods with some pitch/yaw angle:

def rotate_in_xx(self, pitch):
    rot = qua.from_rotation_vector((pitch, 0.0, 0.0))
    self.orientation *= rot  # post-multiply: pitch about the camera's local x axis

def rotate_in_yy(self, yaw):
    rot = qua.from_rotation_vector((0.0, yaw, 0.0))
    self.orientation *= rot  # post-multiply: yaw about the camera's local y axis


Behaves wrong but rotation matrix is correct

And this is what I get:


Now, confusingly, if I change the above methods to:

class CameraNode(Node):

    def view_mat(self):
        view = np.eye(4)
        trans = self.translation_mat()
        rot = self.rotation_mat()
        trans[:-1, 3] = -trans[:-1, 3]
        # rot = rot.T                     # <-- COMMENTED OUT
        self.view = rot @ trans
        return self.view

    def rotate_in_xx(self, pitch):
        rot = qua.from_rotation_vector((pitch, 0.0, 0.0))
        self.orientation = rot * self.orientation  # <-- CHANGE


I can make the camera behave correctly as an FPS camera, but the rotation matrix does not seem right.


Please could someone shed some light? Thanks in advance.

Recommended answer

In my last answer to your issue, I told you why it's not a good idea to reuse your view matrix, because pitching and yawing don't commute. You're using quaternions now, but again, pitch and yaw quaternions don't commute. Just store the pitch value and the yaw value, and recalculate the orientation from pitch and yaw whenever you need it.

def rotate_in_xx(self, pitch):
    self.pitch += pitch

def rotate_in_yy(self, yaw):
    self.yaw += yaw

def get_orientation(self):
    # Rebuild the orientation from the accumulated angles each time:
    # yaw about the world y axis, then pitch about the now-local x axis.
    pitch_rotation = qua.from_rotation_vector((self.pitch, 0.0, 0.0))
    yaw_rotation = qua.from_rotation_vector((0.0, self.yaw, 0.0))
    return yaw_rotation * pitch_rotation
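
As a quick standalone check of that non-commutativity claim (assuming the numpy-quaternion package imported as qua, as in the question), the two products really are different orientations:

import quaternion as qua

pitch = qua.from_rotation_vector((0.3, 0.0, 0.0))  # rotation about the x axis
yaw = qua.from_rotation_vector((0.0, 0.5, 0.0))    # rotation about the y axis

# pitch-then-yaw and yaw-then-pitch give different orientations,
# so the order in which increments are folded into self.orientation matters.
print(pitch * yaw)
print(yaw * pitch)
print(pitch * yaw == yaw * pitch)  # False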


A note on how in your last screenshot the camera rotation matrix and object rotation matrix aren't identical: The object rotation and translation matrices (together the model matrix) describe the transformation from object coordinates to world coordinates, while the view matrix describes the transformation from world coordinates to camera coordinates.


So in order for the tripod to be displayed axis-aligned relative to your viewport, the view rotation matrix must be the inverse of the model rotation matrix.
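
A minimal sketch of that relationship (again assuming numpy-quaternion; the orientation value is illustrative only): building the view rotation from the conjugate of the camera orientation gives exactly the transpose, i.e. the inverse, of the model rotation:

import numpy as np
import quaternion as qua

orientation = qua.from_rotation_vector((0.2, 0.7, 0.0))    # some camera orientation

R_model = qua.as_rotation_matrix(orientation)              # camera/object -> world
R_view = qua.as_rotation_matrix(orientation.conjugate())   # world -> camera

# The view rotation undoes the model rotation, which is why the tripod
# ends up axis-aligned in the viewport.
print(np.allclose(R_view, R_model.T))            # True
print(np.allclose(R_view @ R_model, np.eye(3)))  # True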
