Field of view + Aspect Ratio + View Matrix from Projection Matrix (HMD OST Calibration)


Question


I'm currently working on an Augmented Reality application. The targeted device being an Optical See-through HMD, I need to calibrate its display to achieve a correct registration of virtual objects. I used that implementation of SPAAM for Android to do it, and the results are precise enough for my purpose.

My problem is that the calibration application outputs a 4x4 projection matrix which I could have used directly with OpenGL, for example. But the Augmented Reality framework I use only accepts optical calibration parameters in the format Field of View (some parameter) + Aspect Ratio (some parameter) + 4x4 View Matrix.

Here is what I have:

Correct calibration result under the wrong format:

 6.191399, 0.114267, -0.142429, -0.142144
-0.100027, 11.791289, 0.05604,   0.055928
 0.217304,-0.486923, -0.990243, -0.988265
 0.728104, 0.005347, -0.197072,  0.003122

You can take a look at the code that generates this result here.

What I understand is that the Single Point Active Alignment Method produces a 3x4 matrix; the program then multiplies this matrix by an orthogonal projection matrix to get the result above. Here are the parameters used to produce the orthogonal matrix:

near: 0.1, far: 100.0, right: 960, left: 0, top: 540, bottom: 0
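For reference, here is a minimal sketch (in C++ with GLM) of how such a composition could look. The 3x4 SPAAM matrix G, the inserted depth row and the sign conventions are assumptions on my side; the actual construction is in the linked code:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Expand the 3x4 SPAAM matrix G (row-major, rows g0..g2) to 4x4 by
// inserting a depth row, then multiply by the orthographic matrix
// built from the parameters quoted above. GLM stores m[column][row].
glm::mat4 projectionFromSpaam(const float G[3][4], float n, float f)
{
    glm::mat4 F(0.0f);
    for (int col = 0; col < 4; ++col) {
        F[col][0] = G[0][col]; // image x row
        F[col][1] = G[1][col]; // image y row
        F[col][3] = G[2][col]; // homogeneous w row
    }
    // Assumed depth row; the exact signs depend on the coordinate
    // conventions of the calibration.
    F[2][2] = n + f;
    F[3][2] = n * f;

    glm::mat4 ortho = glm::ortho(0.0f, 960.0f, 0.0f, 540.0f, n, f);
    return ortho * F;
}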

Bad calibration result under the right format:

Param 1 : 12.465418
Param 2 : 1.535465

 0.995903,   -0.046072,   0.077501,  0.000000   
 0.050040,    0.994671,  -0.047959,  0.000000
-0.075318,    0.051640,   0.992901,  0.000000
 114.639359, -14.115030, -24.993097, 1.000000

I don't have any information on how these results are obtained.

I read these parameters from binary files, and I don't know whether the matrices are stored in row-major or column-major order, so the two matrices may have to be transposed.
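A quick way to rule the storage order out is to try both the matrix as read and its transpose; a trivial helper (plain C++, nothing framework-specific):

// Transpose a 4x4 matrix read from the binary file, in case it was
// stored row-major instead of column-major.
void transpose4x4(const float in[4][4], float out[4][4])
{
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c][r] = in[r][c];
}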

My question is: is it possible, and if so, how can I get these three parameters from the first matrix (the projection matrix) I have?

Solution

Is it possible, and if so, how can I get these three parameters from the projection matrix I have?

The projection matrix and the view matrix describe completely different transformations. While the projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport, the view matrix describes the direction and position from which the scene is viewed. The view matrix is defined by the camera position, the direction to the target of view, and the up vector of the camera.
(see Transform the modelMatrix)

This means it is not possible to recover the view matrix from the projection matrix. The view matrix is defined by the camera pose instead.
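For illustration, a view matrix is typically built from the camera pose, e.g. with GLM's lookAt; the eye, target and up values below are placeholders:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 eye   (0.0f, 0.0f, 5.0f); // camera position (placeholder)
glm::vec3 target(0.0f, 0.0f, 0.0f); // point the camera looks at
glm::vec3 up    (0.0f, 1.0f, 0.0f); // up vector of the camera

// The view matrix depends only on this pose, never on the projection.
glm::mat4 view = glm::lookAt(eye, target, up);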


If the projection is perspective, then it will be possible to get the field of view angle and the aspect ratio from the projection matrix.

The Perspective Projection Matrix looks like this:

r = right, l = left, b = bottom, t = top, n = near, f = far

2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1    
0              0              -2*f*n/(f-n)    0

it follows:

aspect = w / h
tanFov = tan( fov_y * 0.5 );

p[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
p[1][1] = 2*n/(t-b) = 1.0 / tanFov

The field of view angle along the Y-axis in degrees:

fov = 2.0*atan( 1.0/prjMatrix[1][1] ) * 180.0 / PI;

The aspect ratio:

aspect = prjMatrix[1][1] / prjMatrix[0][0];
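Put together, a minimal sketch of the extraction; note that both elements used lie on the matrix diagonal, so the row- versus column-major question does not affect this part:

#include <cmath>

const double PI = 3.14159265358979323846;

// Recover the vertical field of view (degrees) and the aspect ratio
// from a perspective projection matrix. Both p[0][0] and p[1][1] are
// diagonal elements, so the storage order does not matter here.
void fovAndAspect(const double p[4][4], double &fovDeg, double &aspect)
{
    fovDeg = 2.0 * std::atan(1.0 / p[1][1]) * 180.0 / PI;
    aspect = p[1][1] / p[0][0];
}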

See further the answers to the following questions:
How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
How to recover view space position given view space depth value and ndc xy
