Kinect Depth and Image Frames Alignment


Problem Description

I am playing around with the new Kinect SDK v1.0.3.190. (Other related questions on Stack Overflow are about previous Kinect SDKs.) I get depth and color streams from the Kinect. As the depth and RGB streams are captured with different sensors, there is a misalignment between the two frames, as can be seen below.

Only RGB

Only Depth

Depth & RGB

I need to align them, and there is a function named MapDepthToColorImagePoint exactly for this purpose. However, it doesn't seem to work. Below is an equally blended (depth and mapped color) result, created with the following code:

Parallel.For(0, this.depthFrameData.Length, i =>
{
    int depthVal = this.depthFrameData[i] >> 3;
    ColorImagePoint point = this.kinectSensor.MapDepthToColorImagePoint(
        DepthImageFormat.Resolution640x480Fps30, i / 640, i % 640,
        (short)depthVal, ColorImageFormat.RgbResolution640x480Fps30);
    int baseIndex = Math.Max(0, Math.Min(this.videoBitmapData.Length - 4, (point.Y * 640 + point.X) * 4));

    this.mappedBitmapData[baseIndex] = this.videoBitmapData[baseIndex];
    this.mappedBitmapData[baseIndex + 1] = this.videoBitmapData[baseIndex + 1];
    this.mappedBitmapData[baseIndex + 2] = this.videoBitmapData[baseIndex + 2];
});



where

depthFrameData -> raw depth data (short array)

videoBitmapData -> raw image data (byte array)

mappedBitmapData -> expected result data (byte array)
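For reference, the index arithmetic used in the snippet above can be sketched as follows (a hypothetical Python illustration, assuming a 640×480 frame and a 4-bytes-per-pixel color buffer; the function names are mine, not the SDK's):

```python
# Index arithmetic for a 640x480 frame, as used in the snippet above.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 4  # e.g. BGRA

def linear_to_xy(i):
    """Convert a linear depth-array index to (x, y) pixel coordinates."""
    return i % WIDTH, i // WIDTH

def xy_to_byte_index(x, y):
    """Convert (x, y) to the first byte of that pixel in the color buffer."""
    return (y * WIDTH + x) * BYTES_PER_PIXEL

# Example: index 641 is the second pixel of the second row.
x, y = linear_to_xy(641)
print(x, y)                    # 1 1
print(xy_to_byte_index(x, y))  # 2564
```

Note that for a 640-wide frame, `i % 640` is the column and `i / 640` the row; which of the two the mapping call expects as its first coordinate depends on the MapDepthToColorImagePoint signature, so it is worth checking that the arguments are not transposed.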



The order of the parameters, resolution, and array sizes are correct (double-checked).

The result of the code is:

The misalignment persists! What is even worse, the result image after using MapDepthToColorImagePoint is exactly the same as the original image.

I would appreciate it if someone could help me find my mistake, or at least explain to me what MapDepthToColorImagePoint is for (assuming I misunderstood its functionality).

Recommended Answer

This will always happen slightly, because the two sensors are mounted in slightly different places.

Try it: look at some object with both eyes, then try using only your left eye, then only your right eye. Things look slightly different because your two eyes are not in exactly the same place.

However: it is possible to correct a lot of the issues with some API code.

I'm using Kinect for Windows 1.5, so the APIs are slightly different from 1.0.

short[] depth = new short[320 * 240];
// fill depth with the kinect data
ColorImagePoint[] colorPoints = new ColorImagePoint[320 * 240];
// convert mappings
kinect.MapDepthFrameToColorFrame(DepthImageFormat.Resolution320x240Fps30,
        depth, ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
// now do something with it
for (int i = 0; i < 320 * 240; i++)
{
    if (we_want_to_display(depth[i]))
    {
        draw_on_image_at(colorPoints[i].X, colorPoints[i].Y);
    }
}
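The shape of that loop can be sketched in plain Python with a stand-in for the SDK call (hypothetical: `map_depth_to_color` below just models a fixed offset; the real coordinates come from the calibration data MapDepthFrameToColorFrame produces):

```python
# Sketch of the batch remap pattern, assuming a 320x240 depth frame
# and a 640x480 color frame. map_depth_to_color is a hypothetical
# stand-in for the SDK's MapDepthFrameToColorFrame output.
DEPTH_W, DEPTH_H = 320, 240
COLOR_W, COLOR_H = 640, 480

def map_depth_to_color(x, y):
    # Real mapping depends on sensor calibration; a fixed shift stands in here.
    return 2 * x + 2, 2 * y + 2

def remap(depth, min_mm=800, max_mm=4000):
    """Collect color-space coordinates of the depth pixels worth drawing."""
    points = []
    for i, d in enumerate(depth):
        if min_mm <= d <= max_mm:            # "we_want_to_display"
            cx, cy = map_depth_to_color(i % DEPTH_W, i // DEPTH_W)
            # Mapped coordinates can fall outside the color frame; clamp them.
            cx = min(max(cx, 0), COLOR_W - 1)
            cy = min(max(cy, 0), COLOR_H - 1)
            points.append((cx, cy))          # "draw_on_image_at"
    return points

depth = [0] * (DEPTH_W * DEPTH_H)
depth[0] = 1000          # one valid depth pixel at (0, 0)
print(remap(depth))      # [(2, 2)]
```

The clamping step matters in practice: near the frame edges the projected coordinates can land outside the color image, so unchecked use of `colorPoints[i].X` / `.Y` as array indices can throw.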

That's the basics. If you look at the green-screen example in the Kinect Developer Toolkit 1.5, it shows a good use for this.
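The green-screen idea can be sketched like this (a hypothetical Python illustration, not the Toolkit's code: pixels whose depth falls inside a "player" range keep their color, everything else is replaced by a background color):

```python
# Minimal green-screen sketch: keep foreground pixels (selected by depth),
# replace the rest with a background color. The depth range and frame
# contents below are illustrative assumptions.
BACKGROUND = (0, 255, 0)  # green

def green_screen(color, depth, near=800, far=2000):
    """Return a new frame: color where depth is in [near, far], else BACKGROUND."""
    return [px if near <= d <= far else BACKGROUND
            for px, d in zip(color, depth)]

color = [(i, i, i) for i in range(8)]               # fake 4x2 gray ramp
depth = [1000, 9000, 1500, 0, 9000, 900, 9000, 0]   # fake depths in mm
out = green_screen(color, depth)
print(out[0], out[1])   # (0, 0, 0) (0, 255, 0)
```

In the real pipeline the depth pixels would first be projected into color-image coordinates (as in the loop above) before this keep-or-replace test is applied, since the two frames are not aligned pixel-for-pixel.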
