Kinect: From Color Space to World Coordinates
Question
I am tracking a ball using the RGB data from the Kinect. After that, I look up the corresponding depth data. Both of these work splendidly. Now I want the actual x, y, z world coordinates (i.e. skeleton space) instead of the x_screen, y_screen and depth values. Unfortunately, the methods given by the Kinect SDK (http://msdn.microsoft.com/en-us/library/hh973078.aspx) don't help me. Basically, I need a function "NuiImageGetSkeletonCoordinatesFromColorPixel", but it does not exist. All the functions basically go in the opposite direction.
I know this can probably be done with OpenNI, but I cannot use it for other reasons.
Is there a function that does this for me, or do I have to do the conversion myself? If I have to do it myself, how would I do this? I sketched up a little diagram (http://i.imgur.com/ROBJW8Q.png) - do you think this would work?
Answer
Check the CameraIntrinsics structure:
typedef struct _CameraIntrinsics
{
    float FocalLengthX;
    float FocalLengthY;
    float PrincipalPointX;
    float PrincipalPointY;
    float RadialDistortionSecondOrder;
    float RadialDistortionFourthOrder;
    float RadialDistortionSixthOrder;
} CameraIntrinsics;
You can get it from ICoordinateMapper::GetDepthCameraIntrinsics.
Then, for every pixel (u, v, d) in depth space, you can get the coordinate in world space by doing this:
x = (u - principalPointX) / focalLengthX * d;
y = (v - principalPointY) / focalLengthY * d;
z = d;
For a color space pixel, you first need to find its associated depth space pixel, for which you should use ICoordinateMapper::MapCameraPointToDepthSpace. Since not every color pixel has an associated depth pixel (1920x1080 vs. 512x424), you cannot get a full-HD color point cloud.