Rotating a pinhole camera in 3D

Problem description

I am trying to rotate a pinhole camera in 3D space. I have previously raytraced a room. As good practice, I first did the maths and then tried to program the maths in C++.

// Camera position
vec3 cameraPos(0, 0, -19);


// Rotate camera
float yaw;
vec3 c1(cos(yaw), 0, sin(yaw));
vec3 c2(0, 1, 0);
vec3 c3(-sin(yaw), 0, cos(yaw));
glm::mat3 R(c1, c2, c3);

What I have done to rotate the camera is this:

if (keystate[SDLK_LEFT])
{
    //cameraPos.x -= translation;
    if (yaw > 0)
    {   
        yaw = 0.01;
    }
    cout << yaw << endl;
    cameraPos = R * cameraPos;
    cout << "LEFT" << endl;
}
if (keystate[SDLK_RIGHT])
{
    //cameraPos.x += translation;
    if (yaw > 0)
    {
        yaw = -0.01;
    }
    cout << yaw << endl;
    cameraPos = R * cameraPos;
    cout << "RIGHT" << endl;
}

I have multiplied the rotation matrix R with the camera position vector. What happens now is that the room moves only to the left no matter what key I press.

The tutorial I am following says:

If the camera is rotated by the matrix R then vectors representing the right (x-axis), down (y-axis) and forward (z-axis) directions can be retrieved as:

vec3 right(R[0][0],R[0][1],R[0][2]);
vec3 down(R[1][0],R[1][1],R[2][2]);
vec3 right(R[2][0],R[2][1],R[2][2]);

To model a rotating camera you need to use these directions both when you move the camera and when you cast rays.

I don't understand how I am supposed to use the above information.

Any help or references appreciated.

Solution

You don't seem to be updating your R matrix after changing the yaw. This means that every time you do cameraPos = R * cameraPos you rotate the cameraPos vector in the same direction.

A more proper way to do this would be to keep cameraPos separate, rebuild R every frame, and store the rotated camera position in another vector.

Something like this:

// Camera position
vec3 cameraPos(0, 0, -19);
vec3 trueCameraPos;
float yaw = 0.0f;

if (keystate[SDLK_LEFT])
{
    yaw += 0.01f;   // turn left: accumulate yaw
    cout << yaw << endl;
    cout << "LEFT" << endl;
}
if (keystate[SDLK_RIGHT])
{
    yaw -= 0.01f;   // turn right: accumulate yaw the other way
    cout << yaw << endl;
    cout << "RIGHT" << endl;
}

// Rebuild the rotation matrix from the current yaw every frame
vec3 c1(cos(yaw), 0, sin(yaw));
vec3 c2(0, 1, 0);
vec3 c3(-sin(yaw), 0, cos(yaw));
glm::mat3 R(c1, c2, c3);

// Rotate the original camera position by the current total yaw
trueCameraPos = R * cameraPos;

As for the camera definition, the camera needs three vectors to define its orientation. If you rotate the camera, the orientation rotates too; otherwise you would just move the camera and it would always look in the same direction.

The quoted definition is incorrect, since there should be three mutually perpendicular vectors, usually up (or down), right and forward. As written there are two right vectors (down is simply the opposite of up), so the last one should be the forward vector.

These vectors define the directions used in the raytracer. Forward is where the rays are traced to, up and right define displacement directions in the image plane for each image pixel. You are most likely using these already in your tracing code.
