Does Kinect Infrared View Have an offset with the Kinect Depth View


Problem description

I am working on a Kinect project using the infrared view and the depth view. In the infrared view, using the CVBlob library, I am able to extract some 2D points of interest. I want to find the depth of these 2D points. So I thought I could use the depth view directly, something like this:

coordinates3D[0] = coordinates2D[0];
coordinates3D[1] = coordinates2D[1];
coordinates3D[2] = ((USHORT*)LockedRect.pBits)[(int)coordinates2D[1] * Width + (int)coordinates2D[0]] >> 3;
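
For reference, the same lookup written out with the Kinect SDK 1.x types (DepthAtPixel is just an illustrative name; it assumes the depth stream carries player-index data in the low 3 bits of every USHORT, which is what the >> 3 strips off, and that Pitch == width * sizeof(USHORT)):

// Sketch only; requires <Windows.h> and <NuiApi.h> from the Kinect for Windows SDK 1.x.
USHORT DepthAtPixel(const NUI_LOCKED_RECT &lockedRect, int x, int y, int width)
{
    const USHORT *depth = (const USHORT*)lockedRect.pBits; // cast the buffer first, then index it
    USHORT packed = depth[y * width + x];                  // 13-bit depth + 3-bit player index
    return packed >> 3;                                    // depth in millimeters
}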

I don't think this is the right formula to get the depth. I am able to visualize the 2D points of interest in the depth view. If I get a point (x, y) in the infrared view, I draw it as a red point in the depth view at (x, y). I noticed that the red points are not where I expect them to be (on an object); there is a systematic error in their locations.

I was of the opinion that the depth and infrared views have a one-to-one correspondence, unlike the correspondence between the color and depth views. Is this indeed true, or is there an offset between the IR and depth views? If there is an offset, can I somehow get the right depth value?

Recommended answer

The depth and color streams are not taken from the same point, so they do not correspond to each other perfectly. Also, their FOV (field of view) is different.

  1. Cameras

  • IR/depth FOV 58.5° x 45.6°
  • color FOV 62.0° x 48.6°
  • distance between the cameras 25 mm
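
As a rough sanity check (my own back-of-the-envelope pinhole numbers, not part of the answer): the 25 mm baseline and the FOVs above translate into a shift of roughly a dozen pixels at close range in a 640-pixel-wide image, which is the order of magnitude the correction below compensates for.

// Back-of-the-envelope only (ideal pinhole model, 640x480 images); requires <math.h>.
double ExpectedShiftPx(double z_m)                          // z_m = distance to the point in meters
{
    const double deg      = 0.01745329252;                  // degrees -> radians
    const double f_color  = 320.0 / tan(0.5 * 62.0 * deg);  // ~533 px focal length from the 62.0 deg FOV
    const double baseline = 0.025;                          // 25 mm between the cameras
    return f_color * baseline / z_m;                        // ~13 px at 1 m, ~7 px at 2 m
}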

My correction for the 640x480 resolution of both streams:

if (valid depth)                    // the depth sample at (x,y) is valid
 {
 ax=(((x+10-xs2)*241)>>8)+xs2;      // map depth-image x to color-image x
 ay=(((y+30-ys2)*240)>>8)+ys2;      // map depth-image y to color-image y
 }

  • x,y are the input coordinates in the depth image
  • ax,ay are the output coordinates in the color image
  • xs,ys = 640,480
  • xs2,ys2 = 320,240

As you can see, my Kinect also has a y-offset, which is weird (it is even bigger than the x-offset). My conversion works well at ranges up to 2 m; I did not measure it further, but it should work beyond that as well.
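
Wrapped up as a function, the correction above could look like the sketch below; MapDepthToColorPixel and the bounds check are mine, and the +10/+30 offsets and the 241/240 scale factors are just the values I measured on my unit, so treat them as assumptions for other sensors:

// Sketch: map a depth-image pixel (x,y) to the matching color-image pixel (ax,ay).
const int xs  = 640, ys  = 480;   // stream resolution
const int xs2 = 320, ys2 = 240;   // image center (half resolution)

bool MapDepthToColorPixel(int x, int y, int &ax, int &ay)
{
    ax = (((x + 10 - xs2) * 241) >> 8) + xs2;                // shift by +10, rescale by 241/256 around the center
    ay = (((y + 30 - ys2) * 240) >> 8) + ys2;                // shift by +30, rescale by 240/256 around the center
    return (ax >= 0) && (ax < xs) && (ay >= 0) && (ay < ys); // true if the result lies inside the color image
}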

Do not forget to correct the spatial coordinates from the depth value and the depth-image coordinates:

pz=0.8+(float(rawdepth-6576)*0.00012115165336374002280501710376283); // raw depth -> distance in meters
px=-sin(58.5*deg*float(x-xs2)/float(xs))*pz;                          // horizontal FOV is 58.5 deg
py=+sin(45.6*deg*float(y-ys2)/float(ys))*pz;                          // vertical FOV is 45.6 deg
pz=-pz;                                                               // flip Z for my camera coordinate system

  • where px,py,pz are the point coordinates in [m] in space, relative to the Kinect

I use a coordinate system for the camera with the opposite Z direction, hence the negation of the sign.
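
Put together as a single function, the raw-depth to 3D conversion above looks roughly like this (a sketch only: DepthPixelTo3D is my name for it, it reuses xs,ys,xs2,ys2 = 640,480,320,240 from above, and the constants are the calibration of my 1414):

// Sketch: depth-image pixel (x,y) + raw depth -> 3D point in meters.
// Requires <math.h>; uses xs,ys,xs2,ys2 (640,480,320,240) defined above.
void DepthPixelTo3D(int x, int y, int rawdepth, float &px, float &py, float &pz)
{
    const float deg = 0.01745329252f;                                             // degrees -> radians
    pz = 0.8 + (float(rawdepth - 6576) * 0.00012115165336374002280501710376283); // raw -> meters
    px = -sinf(58.5f * deg * float(x - xs2) / float(xs)) * pz;                    // horizontal FOV 58.5 deg
    py = +sinf(45.6f * deg * float(y - ys2) / float(ys)) * pz;                    // vertical FOV 45.6 deg
    pz = -pz;                                                                     // flip Z for my camera frame
}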

PS. I have the old model 1414, so newer models probably have different calibration parameters.
