CoordinateMapper.GetDepthFrameToCameraSpaceTable and camera intrinsics


Question

With CoordinateMapper.GetDepthFrameToCameraSpaceTable I get a table with one entry per depth pixel to use in the calculation. The basis for that calculation is the camera-intrinsic parameters. I have a different set of camera parameters from my own camera calibration. My questions: how can I use these parameters, e.g. with the CoordinateMapper.GetDepthFrameToCameraSpaceTable table, or

how can I calculate the camera-space points directly with the new CameraIntrinsics values?
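For context, the lookup table is commonly described as holding unit-depth ray factors per depth pixel, so that multiplying an entry by the measured depth yields the camera-space point. A minimal sketch of that usage, assuming that interpretation (the function and entry format here are illustrative, not the Kinect SDK API):

```python
# Hypothetical sketch: each table entry is assumed to hold unit-depth ray
# factors (X/Z, Y/Z) for one depth pixel. Multiplying both factors by the
# measured depth (in meters) gives the camera-space point (X, Y, Z).

def depth_to_camera_space(table_entry, depth_m):
    """table_entry: (x_factor, y_factor) for one depth pixel."""
    x_factor, y_factor = table_entry
    return (x_factor * depth_m, y_factor * depth_m, depth_m)

# Example: a pixel whose ray factors are (0.1, -0.05), measured at 2.0 m.
point = depth_to_camera_space((0.1, -0.05), 2.0)
print(point)
```

If this interpretation holds, swapping in your own calibration would mean regenerating such a table from your intrinsics rather than reusing the SDK's.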

Many thanks in advance for your help.

Kind regards, Thilo

Answer

Depth space and camera space share the same coordinate system, so you don't use this table at all. The only thing required to go from depth to camera space is to invert the projection. The coordinate mapper will do this for you, but if you want to do it yourself, create the inverse projection matrix. The information needed is in the depth frame description (height/width/focal length).
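Inverting the projection with your own calibrated intrinsics can be sketched as follows, assuming a standard pinhole model (fx, fy are focal lengths in pixels, cx, cy the principal point; these names are assumptions, not the Kinect SDK's CameraIntrinsics fields):

```python
# Minimal sketch of inverse pinhole projection: map a depth pixel (u, v)
# with a depth measurement in meters to a 3D camera-space point, using
# calibrated intrinsics instead of the SDK's built-in table.

def unproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth_m meters into camera space."""
    x = (u - cx) / fx * depth_m  # horizontal offset from optical axis, scaled by depth
    y = (v - cy) / fy * depth_m  # vertical offset from optical axis, scaled by depth
    return (x, y, depth_m)      # Z is the measured depth itself

# Example: the principal-point pixel always unprojects onto the optical axis.
p = unproject(256, 212, 1.0, 366.0, 366.0, 256.0, 212.0)
print(p)
```

Note that sign conventions (e.g. whether the image y-axis points up or down relative to camera space) depend on your calibration and the SDK's coordinate convention, so you may need to negate an axis to match CoordinateMapper's output.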

