Questions about transform matrices in VR
Problem description

I have a question about matrix transformations in the OpenVR API.
m_compositor->WaitGetPoses(m_rTrackedDevicePose, vr::k_unMaxTrackedDeviceCount, nullptr, 0);
In the demo that OpenVR provides:
const Matrix4 & matDeviceToTracking = m_rmat4DevicePose[ unTrackedDevice ];
Matrix4 matMVP = GetCurrentViewProjectionMatrix( nEye ) * matDeviceToTracking;
glUniformMatrix4fv( m_nRenderModelMatrixLocation, 1, GL_FALSE, matMVP.get() );
where GetCurrentViewProjectionMatrix is computed as:
Matrix4 CMainApplication::GetCurrentViewProjectionMatrix( vr::Hmd_Eye nEye )
{
	Matrix4 matMVP;
	if( nEye == vr::Eye_Left )
	{
		matMVP = m_mat4ProjectionLeft * m_mat4eyePosLeft * m_mat4HMDPose;
	}
	else if( nEye == vr::Eye_Right )
	{
		matMVP = m_mat4ProjectionRight * m_mat4eyePosRight * m_mat4HMDPose;
	}
	return matMVP;
}
The questions are:

1. From which space to which space does matDeviceToTracking transform?

2. If I already have a model-view matrix and can already rotate the view with the HMD, how can I render the render model correctly? I tried projection * modelview * m_rmat4DevicePose[ unTrackedDevice ], but it has no effect.
Answer
1.
In the sample code, matDeviceToTracking is a reference to m_rmat4DevicePose[unTrackedDevice], which is copied from TrackedDevicePose_t::mDeviceToAbsoluteTracking. It is a model matrix mapping from the device's model space to the world (absolute tracking) space.
There is one pitfall, though. If you include the UpdateHMDMatrixPose() function from the sample, it inverts m_rmat4DevicePose[vr::k_unTrackedDeviceIndex_Hmd] while updating the value of m_mat4HMDPose, leaving m_rmat4DevicePose[0] mapping from world space to the HMD's view space, exactly the other way around from the other matrices in the array.
2.
If you already have the model-view matrix, you only need to multiply the projection matrix by it to obtain the MVP matrix. For rendering into the HMD, use m_mat4ProjectionLeft * m_mat4eyePosLeft * modelview and m_mat4ProjectionRight * m_mat4eyePosRight * modelview for the left and right eye, respectively. For rendering on a monitor, you can generate your own frustum and multiply it by your model-view matrix. The following website is a good reference on how to create a projection matrix:
http://www.songho.ca/opengl/gl_projectionmatrix.html