Kinect Fusion Multi-static Cameras (matrix transform)


Problem Description



Hi, we use 3 Kinects to reconstruct a 3D object with the Kinect Fusion Explore Multi-static Cameras sample. The matrix transforms (rotation/translation) for pairwise calibration of the 3 cameras were obtained with an existing calibration library, and each camera's transform is expressed relative to a global coordinate system. How can we align and integrate the depth images (or point clouds) from the 3 sensors using these matrix transforms (rotation/translation)?

By the way, what is the role of the camera pose and the distance in the UI controls, and can the 3D object be reconstructed (i.e., the depth images aligned and integrated) from these camera parameters?

Can someone point me in a direction? Thanks!
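The alignment step being asked about can be sketched in a few lines: each sensor's points are mapped into the shared global frame as p_global = R·p + t before any integration happens. This is a minimal illustration in plain Python (the Kinect SDK sample itself is C#/WPF); the matrices and points below are made-up placeholders, not real calibration output.

```python
# Hedged sketch: applying one camera's rigid transform (R, t) from pairwise
# calibration to bring its point cloud into the global coordinate system.
# Values are illustrative placeholders, not real calibration results.

def transform_point(R, t, p):
    """Map a 3D point p from a camera's local frame into the global frame:
    p_global = R @ p + t, written out with plain Python lists."""
    return [
        R[0][0]*p[0] + R[0][1]*p[1] + R[0][2]*p[2] + t[0],
        R[1][0]*p[0] + R[1][1]*p[1] + R[1][2]*p[2] + t[1],
        R[2][0]*p[0] + R[2][1]*p[1] + R[2][2]*p[2] + t[2],
    ]

def transform_cloud(R, t, points):
    """Transform every point of one sensor's cloud into the global frame.
    Once all 3 Kinects' clouds are expressed in this one coordinate system,
    they can be integrated into a single Fusion reconstruction volume."""
    return [transform_point(R, t, p) for p in points]

# Identity rotation, 1 m shift along X: a camera mounted 1 m to the side.
R_cam2 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t_cam2 = [1.0, 0.0, 0.0]
cloud_cam2 = [[0.0, 0.0, 2.0], [0.1, -0.2, 1.8]]
print(transform_cloud(R_cam2, t_cam2, cloud_cam2))
# → [[1.0, 0.0, 2.0], [1.1, -0.2, 1.8]]
```

The same transform would be applied per camera, with each camera's own (R, t) from the calibration library, before the integration stage.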

Solution

Have you seen this thread? Basically, when running the sample, you enter your coordinates using the sliders. As long as your world coordinates are correct, the integration into the Fusion world space should align for you. This is all based on a single fixed point.

http://social.msdn.microsoft.com/Forums/en-US/d91c3993-b8b7-454d-921a-7f626abfecc2/kinect-fusion-explore-multi-static-cameraswpf?forum=kinectsdknuiapi
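One gap between the calibration output and the slider-based UI the answer describes: a calibration library hands back a 3x3 rotation matrix, while pose sliders take individual rotation angles. The sketch below shows one way to recover angles from the matrix, assuming an R = Rz·Ry·Rx angle convention; the sample's actual convention may differ, so verify against its source before using these values.

```python
import math

def rotation_to_euler_xyz(R):
    """Recover (rx, ry, rz) in degrees from a rotation matrix composed as
    R = Rz @ Ry @ Rx. Assumes no gimbal lock, i.e. abs(R[2][0]) < 1."""
    ry = math.asin(-R[2][0])            # R[2][0] = -sin(ry)
    rx = math.atan2(R[2][1], R[2][2])   # R[2][1] = cos(ry)*sin(rx), R[2][2] = cos(ry)*cos(rx)
    rz = math.atan2(R[1][0], R[0][0])   # R[1][0] = sin(rz)*cos(ry), R[0][0] = cos(rz)*cos(ry)
    return tuple(math.degrees(a) for a in (rx, ry, rz))

# Demo: a pure 30-degree rotation about Y should come back as ry ≈ 30.
a = math.radians(30.0)
R_demo = [[math.cos(a), 0.0, math.sin(a)],
          [0.0, 1.0, 0.0],
          [-math.sin(a), 0.0, math.cos(a)]]
print(rotation_to_euler_xyz(R_demo))  # roughly (0.0, 30.0, 0.0)
```

The recovered angles, together with the translation vector, would then be entered through the sample's pose sliders for each static camera.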


