iOS: Questions about camera information within GLKMatrix4MakeLookAt result


Problem description


The iOS 5 documentation reveals that GLKMatrix4MakeLookAt operates the same as gluLookAt.

The definition is provided here:

static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
                                                  float centerX, float centerY, float centerZ,
                                                  float upX, float upY, float upZ)
{
    GLKVector3 ev = { eyeX, eyeY, eyeZ };
    GLKVector3 cv = { centerX, centerY, centerZ };
    GLKVector3 uv = { upX, upY, upZ };
    GLKVector3 n = GLKVector3Normalize(GLKVector3Add(ev, GLKVector3Negate(cv)));
    GLKVector3 u = GLKVector3Normalize(GLKVector3CrossProduct(uv, n));
    GLKVector3 v = GLKVector3CrossProduct(n, u);

    GLKMatrix4 m = { u.v[0], v.v[0], n.v[0], 0.0f,
                     u.v[1], v.v[1], n.v[1], 0.0f,
                     u.v[2], v.v[2], n.v[2], 0.0f,
                     GLKVector3DotProduct(GLKVector3Negate(u), ev),
                     GLKVector3DotProduct(GLKVector3Negate(v), ev),
                     GLKVector3DotProduct(GLKVector3Negate(n), ev),
                     1.0f };

    return m;
}

I'm trying to extract camera information from this:

1. Read the camera position
    GLKVector3 cPos = GLKVector3Make(mx.m30, mx.m31, mx.m32);
2. Read the camera right vector as `u` in the above
    GLKVector3 cRight = GLKVector3Make(mx.m00, mx.m10, mx.m20);
3. Read the camera up vector as `v` in the above
    GLKVector3 cUp = GLKVector3Make(mx.m01, mx.m11, mx.m21);
4. Read the camera look-at vector as `n` in the above
    GLKVector3 cLookAt = GLKVector3Make(mx.m02, mx.m12, mx.m22);

There are two questions:


  1. The look-at vector seems negated in their definition, since they compute (eye - center) rather than (center - eye). Indeed, when I call GLKMatrix4MakeLookAt with a camera position of (0,0,-10) and a center of (0,0,1), my extracted look-at is (0,0,-1), i.e. the negative of what I expect. So should I negate what I extract?

  2. The camera position I extract is the result of the view transformation matrix premultiplying the view rotation matrix, hence the dot products in their definition. I believe this is incorrect - can anyone suggest how else I should calculate the position?


Many thanks for your time.

Solution

Per its documentation, gluLookAt calculates centre - eye, uses that for some intermediate steps, then negates it for placement into the resulting matrix. So if you want centre - eye back, taking the negative is explicitly correct.

You'll also notice that the result returned is equivalent to a multMatrix with the rotational part of the result, followed by a glTranslate by -eye. Since classic OpenGL matrix operations post-multiply, that means gluLookAt is defined to post-multiply the rotational part by the translational part. So the implementation is correct: it is the same as first moving the camera, then rotating it.

So if you define R = (the rotational part of the transformation) and T = (the translational part), the result is R·T. If you want to extract T you could premultiply by the inverse of R and then read the result out of the final column, since matrix multiplication is associative.

As a bonus, because R is orthonormal, the inverse is just the transpose.
