Why kinect color and depth won't align correctly?


Problem description



I've been working on this problem for quite some time and am at the end of my creativity, so hopefully someone else can help point me in the right direction. I've been working with the Kinect and attempting to capture data to MATLAB. Fortunately, there are quite a few ways of doing so (I'm currently using http://www.mathworks.com/matlabcentral/fileexchange/30242-kinect-matlab). When I attempted to project the captured data to 3D, my traditional methods gave poor reconstruction results.

To cut a long story short, I ended up writing a Kinect SDK wrapper for matlab that performs the reconstruction and the alignment. The reconstruction works like a dream, but...

I am having tons of trouble with the alignment as you can see here:

Please don't look too closely at the model :(.

As you can see, the alignment is incorrect. I'm not sure why that's the case. I've read plenty of forums where others have had more success than I with the same methods.

My current pipeline is using Kinect Matlab (using Openni) to capture data, reconstructing using the Kinect SDK, then aligning using the Kinect SDK (by NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution). I suspected it was perhaps due to Openni, but I have had little success in creating mex function calls to capture using the Kinect SDK.

If anyone can point me in a direction I should delve more deeply into, it would be much appreciated.

Edit:

Figure I should post some code. This is the code I use for alignment:

    /* The matlab mex function */
    #include "mex.h"
    #include <windows.h>
    #include <NuiApi.h>

    void mexFunction( int nlhs, mxArray *plhs[], int nrhs,
            const mxArray *prhs[] ){

        if( nrhs < 2 )
        {
            mexErrMsgTxt( "No depth input or color image specified!" );
        }

        const int width = 640, height = 480;

        // get input depth data (uint16) and color data (uint8)

        unsigned short *pDepthRow = ( unsigned short* ) mxGetData( prhs[0] );
        unsigned char *pColorRow = ( unsigned char* ) mxGetData( prhs[1] );

        // compute the warping; CreateFirstConnected() is a helper
        // (defined elsewhere) that opens the first attached Kinect

        INuiSensor *sensor = CreateFirstConnected();
        if( sensor == NULL )
            mexErrMsgTxt( "Could not connect to a Kinect sensor" );

        // heap-allocate: 640*480*2 longs (~2.4 MB) is too large for the stack
        long *colorCoords = new long[ width*height*2 ];
        sensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
                NUI_IMAGE_RESOLUTION_640x480, NUI_IMAGE_RESOLUTION_640x480,
                width*height, pDepthRow, width*height*2, colorCoords );
        sensor->NuiShutdown();
        sensor->Release();

        // create matlab output; it's a column ordered matrix ;_;

        mwSize Jdimsc[3];
        Jdimsc[0] = height;
        Jdimsc[1] = width;
        Jdimsc[2] = 3;

        plhs[0] = mxCreateNumericArray( 3, Jdimsc, mxUINT8_CLASS, mxREAL );
        unsigned char *Iout = ( unsigned char* )mxGetData( plhs[0] );

        for( int x = 0; x < width; x++ )
            for( int y = 0; y < height; y++ ){

                int idx = ( y*width + x )*2;
                long c_x = colorCoords[ idx + 0 ];
                long c_y = colorCoords[ idx + 1 ];

                // fall back to the source pixel if the mapping is out of range
                bool correct = ( c_x >= 0 && c_x < width
                        && c_y >= 0 && c_y < height );
                c_x = correct ? c_x : x;
                c_y = correct ? c_y : y;

                // MATLAB is column-major: channel k of pixel (y, x) lives
                // at k*height*width + x*height + y
                Iout[ 0*height*width + x*height + y ] =
                        pColorRow[ 0*height*width + c_x*height + c_y ];
                Iout[ 1*height*width + x*height + y ] =
                        pColorRow[ 1*height*width + c_x*height + c_y ];
                Iout[ 2*height*width + x*height + y ] =
                        pColorRow[ 2*height*width + c_x*height + c_y ];

            }

        delete[] colorCoords;
    }

Solution

This is a well-known problem for stereo vision systems. I had the same problem a while back; the original question I posted can be found here. What I was trying to do was kind of similar to this. However, after a lot of research I came to the conclusion that a captured dataset cannot easily be aligned.

On the other hand, while recording the dataset you can easily use a function call to align both the RGB and depth data. This method is available in both OpenNI and the Kinect SDK (the functionality is the same, while the names of the function calls differ for each).

It looks like you are using the Kinect SDK to capture the dataset. To align the data with the Kinect SDK, you can use MapDepthFrameToColorFrame.
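For reference, a minimal sketch of how that mapping is typically invoked through the INuiCoordinateMapper interface of Kinect SDK 1.6+. This requires an attached sensor, so it is not runnable standalone, and the buffer setup here is an assumption that depends on how your frames are acquired:

```cpp
#include <windows.h>
#include <NuiApi.h>

// Sketch only: assumes `sensor` is an opened INuiSensor, `depthPixels`
// holds one 640x480 depth frame, and `colorPoints` has room for
// 640*480 output points.
void AlignWithCoordinateMapper( INuiSensor *sensor,
        NUI_DEPTH_IMAGE_PIXEL *depthPixels,
        NUI_COLOR_IMAGE_POINT *colorPoints )
{
    const DWORD n = 640 * 480;
    INuiCoordinateMapper *mapper = NULL;
    sensor->NuiGetCoordinateMapper( &mapper );

    mapper->MapDepthFrameToColorFrame(
            NUI_IMAGE_RESOLUTION_640x480, n, depthPixels,
            NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
            n, colorPoints );
    // colorPoints[i] now holds the color-image (x, y) for depth pixel i.

    mapper->Release();
}
```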

Since you have also mentioned using OpenNI, have a look at AlternativeViewPointCapability.

I have no experience with the Kinect SDK; however, with OpenNI v1.5 this whole problem was solved by making the following function call before registering the recorder node:

    depth.GetAlternativeViewPointCap().SetViewPoint(image);

where image is the image generator node and depth is the depth generator node. This was with the older SDK, which has since been replaced by the OpenNI 2.0 SDK. So if you are using the latest SDK, the function call might be different, but the overall procedure should be similar.
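For completeness, the surrounding OpenNI 1.x setup looks roughly like this. It is a sketch that assumes an already-initialized context with both generator nodes created, and it needs the OpenNI 1.x runtime plus a sensor, so it is not runnable standalone:

```cpp
#include <XnCppWrapper.h>

// Sketch: enable depth-to-color registration in OpenNI 1.x.
// Assumes the caller created both nodes from an initialized xn::Context.
void EnableRegistration( xn::DepthGenerator &depth, xn::ImageGenerator &image )
{
    // Not every driver exposes the alternative-viewpoint capability,
    // so check before calling SetViewPoint.
    if( depth.IsCapabilitySupported( XN_CAPABILITY_ALTERNATIVE_VIEW_POINT ) )
    {
        // Reproject the depth map into the RGB camera's viewpoint;
        // do this before registering the recorder node.
        depth.GetAlternativeViewPointCap().SetViewPoint( image );
    }
}
```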

I am also adding some example images:

Without using the above alignment function call, the depth edges on the RGB image were not aligned:

When using the function call, the depth edges get perfectly aligned (there are some infrared shadow regions which show some edges, but those are just invalid depth regions):
