Math behind back-projecting from depth frame to point cloud
Problem Description
I have been producing point clouds and meshes using the Kinect Fusion API, but I was wondering: what is the math behind back-projecting the depth frame points/depths into a 3D volume? Previously I simply plotted the points at (pixel_x, pixel_y, depth), but I know this is wrong...
It is the same math as ray-casting a 2D point into a 3D world. A good explanation is here:
http://www.mvps.org/directx/articles/rayproj.htm
Which is essentially the inverse of this:
http://stackoverflow.com/questions/5024758/math-how-to-convert-a-3d-world-to-2d-screen-coordinate
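To make this concrete, here is a minimal sketch of the back-projection itself, assuming a simple pinhole camera model. The forward projection maps a camera-space point (X, Y, Z) to a pixel via u = fx*X/Z + cx and v = fy*Y/Z + cy, so inverting it gives X = (u - cx)*Z/fx and Y = (v - cy)*Z/fy, with Z read directly from the depth frame. The intrinsics below (FX, FY, CX, CY) are assumed Kinect-v1-like values rather than calibrated constants, and the function name back_project is made up for illustration:

```python
import numpy as np

# Assumed intrinsics for a Kinect-v1-style depth camera (640x480).
# These are placeholder values -- calibrate or query your device for real ones.
FX, FY = 585.0, 585.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed: image center)

def back_project(depth_mm):
    """Back-project an (H, W) depth frame in millimetres to an (N, 3) point cloud in metres."""
    h, w = depth_mm.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0   # mm -> m
    # Inverse pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                  # drop invalid (zero-depth) pixels

# Example: a synthetic 640x480 frame at a constant 1.5 m.
cloud = back_project(np.full((480, 640), 1500, dtype=np.uint16))
print(cloud.shape)  # (307200, 3)
```

Note that this yields points in camera space; to integrate them into a world-aligned volume the way Kinect Fusion does, you would additionally transform each point by the tracked camera pose (the extrinsics) for that frame.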