Why Kinect color and depth won't align correctly?

Problem description

I've been working on this problem for quite some time and am at the end of my creativity, so hopefully someone else can help point me in the right direction. I've been working with the Kinect and attempting to capture data into MATLAB. Fortunately there are quite a few ways of doing so (I'm currently using http://www.mathworks.com/matlabcentral/fileexchange/30242-kinect-matlab). When I attempted to project the captured data to 3D, my traditional methods gave poor reconstruction results.

To cut a long story short, I ended up writing a Kinect SDK wrapper for MATLAB that performs the reconstruction and the alignment. The reconstruction works like a dream, but...

I am having tons of trouble with the alignment as you can see here:

Please don't look too closely at the model :(.

As you can see, the alignment is incorrect. I'm not sure why that's the case. I've read plenty of forums where others have had more success than I with the same methods.

My current pipeline uses Kinect Matlab (via OpenNI) to capture data, the Kinect SDK to reconstruct, and the Kinect SDK again to align (via NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution). I suspected it was perhaps due to OpenNI, but I have had little success in creating MEX function calls that capture using the Kinect SDK instead.

If anyone can point me in a direction I should delve more deeply into, it would be much appreciated.

Edit:

Figured I should post some code. This is the code I use for alignment:

    /* The matlab mex function */
    void mexFunction( int nlhs, mxArray *plhs[], int nrhs, 
            const mxArray *prhs[] ){

        if( nrhs < 2 )
        {
            printf( "No depth input or color image specified!\n" );
            mexErrMsgTxt( "Input Error" );
        }

        int width = 640, height = 480;

        // get input depth data

        unsigned short *pDepthRow = ( unsigned short* ) mxGetData( prhs[0] );
        unsigned char *pColorRow = ( unsigned char* ) mxGetData( prhs[1] );

        // compute the warping

        INuiSensor *sensor = CreateFirstConnected();
        long colorCoords[ 640*480*2 ];
        sensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
                NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480, 
                640*480, pDepthRow, 640*480*2, colorCoords );
        sensor->NuiShutdown();
        sensor->Release();

        // create matlab output; it's a column ordered matrix ;_;

        int Jdimsc[3];
        Jdimsc[0]=height;
        Jdimsc[1]=width;
        Jdimsc[2]=3;

        plhs[0] = mxCreateNumericArray( 3, Jdimsc, mxUINT8_CLASS, mxREAL );
        unsigned char *Iout = ( unsigned char* )mxGetData( plhs[0] );

        for( int x = 0; x < width; x++ )
            for( int y = 0; y < height; y++ ){

                int idx = ( y*width + x )*2;
                long c_x = colorCoords[ idx + 0 ];
                long c_y = colorCoords[ idx + 1 ];

                bool correct = ( c_x >= 0 && c_x < width 
                        && c_y >= 0 && c_y < height );
                c_x = correct ? c_x : x;
                c_y = correct ? c_y : y;

                Iout[ 0*height*width + x*height + y ] =
                        pColorRow[ 0*height*width + c_x*height + c_y ];
                Iout[ 1*height*width + x*height + y ] =
                        pColorRow[ 1*height*width + c_x*height + c_y ];
                Iout[ 2*height*width + x*height + y ] =
                        pColorRow[ 2*height*width + c_x*height + c_y ];

            }

    }
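(An aside on the "column ordered" indexing the loop depends on: the mapping from a MATLAB height×width×3 array element at row y, column x, channel k to the linear offset `k*height*width + x*height + y` used above can be sanity-checked in isolation. A minimal, Kinect-free sketch:)

```cpp
#include <cassert>
#include <cstddef>

// Linear offset of element (row y, column x, channel k) in a MATLAB-style
// column-major height x width x 3 array -- the same formula the MEX loop
// above uses for both Iout and pColorRow.
std::size_t matlabIndex(std::size_t y, std::size_t x, std::size_t k,
                        std::size_t height, std::size_t width) {
    return k * height * width + x * height + y;
}
```

Rows within a column are contiguous, columns sit `height` elements apart, and channels sit `height*width` apart, which is exactly the order of the buffer returned by mxGetData for a 3-D array.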

Solution

This is a well-known problem for stereo vision systems. I had the same problem a while back. The original question I posted can be found here. What I was trying to do was quite similar to this. However, after a lot of research, I came to the conclusion that an already-captured dataset cannot easily be aligned.

On the other hand, while recording the dataset you can easily use a function call to align both the RGB and depth data. This method is available in both OpenNI and the Kinect SDK (the functionality is the same, though the name of the function call differs in each).

It looks like you are using the Kinect SDK to capture the dataset; to align the data with the Kinect SDK you can use MapDepthFrameToColorFrame.
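Whichever SDK produces the per-pixel coordinate table (MapDepthFrameToColorFrame here, or the function in the question), the remap step itself is just a gather with a bounds check, mirroring the question's loop. A minimal, self-contained sketch of that step for plain row-major single-channel buffers (toy sizes, no SDK calls; in practice the coordinate table would come from the SDK):

```cpp
#include <cassert>
#include <vector>

// Remap a single-channel row-major image using a per-pixel coordinate table:
// table[2*i] and table[2*i+1] give the source (x, y) for destination pixel i.
// Out-of-range entries fall back to the destination pixel itself, as in the
// MEX loop in the question.
std::vector<unsigned char> remap(const std::vector<unsigned char>& src,
                                 const std::vector<long>& table,
                                 int width, int height) {
    std::vector<unsigned char> dst(width * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            long cx = table[2 * i], cy = table[2 * i + 1];
            bool ok = cx >= 0 && cx < width && cy >= 0 && cy < height;
            if (!ok) { cx = x; cy = y; }  // invalid mapping: keep own pixel
            dst[i] = src[cy * width + cx];
        }
    return dst;
}
```

Note this assumes row-major buffers on both sides; mixing this with MATLAB's column-major layout (as the question's code must) is exactly where the indexing gets easy to get wrong.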

Since you have also mentioned using OpenNI, have a look at AlternativeViewPointCapability.

I have no experience with the Kinect SDK; however, with OpenNI v1.5 this whole problem was solved by making the following function call before registering the recorder node:

depth.GetAlternativeViewPointCap().SetViewPoint(image);

where image is the image generator node and depth is the depth generator node. This was with the older SDK, which has since been replaced by the OpenNI 2.0 SDK, so if you are using the latest SDK the function call might be different, but the overall procedure should be similar.

I am also adding some example images:

Without the above alignment function call, the depth edges were not aligned with the RGB image.

When the function call is used, the depth edges become perfectly aligned (there are some infrared shadow regions that show spurious edges, but those are just invalid depth regions).
