3D Mapping depth to RGB (Kinect OpenNI Depthmap to OpenCV RGB Cam)

Problem Description

I am trying to map my OpenNI (1.5.4.0) Kinect for Windows depth map to an OpenCV RGB image.

I have a 640x480 depth map (in mm) and am trying to do the mapping like Burrus: http://burrus.name/index.php/Research/KinectCalibration

I skipped the distortion part, but otherwise I think I did everything:

//with the depth camera intrinsics, each pixel (x_d,y_d) of the depth camera can be
//projected to metric 3D space, with fx_ir, fy_ir, cx_ir and cy_ir the intrinsics of the IR (depth) camera.

P3D.at<Vec3f>(y,x)[0] = (x - cx_ir) * depth/fx_ir;
P3D.at<Vec3f>(y,x)[1] = (y - cy_ir) * depth/fy_ir;
P3D.at<Vec3f>(y,x)[2] = depth;


//P3D' = R.P3D + T:
RTMat = (Mat_<float>(4,4) << 0.999388,   -0.00796202, -0.0480646,  -3.96963,
                             0.00612322,  0.9993536,   0.0337474, -22.8512,
                             0.0244427,  -0.03635059,  0.999173,  -15.6307,
                             0,           0,           0,           1);

perspectiveTransform(P3D, P3DS, RTMat);

//reproject each 3D point on the color image and get its color:  
depth = P3DS.at<Vec3f>(y,x)[2];
x_rgb = (P3DS.at<Vec3f>(y,x)[0] * fx_rgb / depth) + cx_rgb;
y_rgb = (P3DS.at<Vec3f>(y,x)[1] * fy_rgb / depth) + cy_rgb;
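
For reference, assembled into a full per-pixel loop, the mapping above would look roughly like this (just a sketch: rawDepth as the 16-bit depth map and rgbImage as the color frame are assumed names; the intrinsics and RTMat come from the calibration):

//Sketch: project every depth pixel to 3D, transform into the RGB camera
//frame, then reproject into the RGB image.
Mat P3D(480, 640, CV_32FC3), P3DS;
for( int y = 0; y < 480; y++ ){
    for( int x = 0; x < 640; x++ ){
        float depth = (float)rawDepth.at<unsigned short>(y,x); //mm
        P3D.at<Vec3f>(y,x) = Vec3f( (x - cx_ir) * depth/fx_ir,
                                    (y - cy_ir) * depth/fy_ir,
                                    depth );
    }
}

perspectiveTransform(P3D, P3DS, RTMat); //P3D' = R.P3D + T

for( int y = 0; y < 480; y++ ){
    for( int x = 0; x < 640; x++ ){
        const Vec3f& p = P3DS.at<Vec3f>(y,x);
        if( p[2] <= 0 ) continue; //no depth reading at this pixel
        int x_rgb = cvRound( p[0] * fx_rgb / p[2] + cx_rgb );
        int y_rgb = cvRound( p[1] * fy_rgb / p[2] + cy_rgb );
        if( x_rgb >= 0 && x_rgb < 640 && y_rgb >= 0 && y_rgb < 480 ){
            //rgbImage.at<Vec3b>(y_rgb, x_rgb) is the color for this depth pixel
        }
    }
}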

But with my estimated calibration values for the Kinect's RGB and IR cameras, the result is off in every direction, and it cannot be fixed by changing only the extrinsic T parameters.

I have a few suspicions:

  • Does OpenNI already map the IR depth map to the RGB camera of the Kinect?
  • Should I use depth in meters, or convert the pixel values to mm? (I tried multiplying by pixel_size * 0.001 but got the same results.)

I really hope someone can help me. Thanks in advance.

Solution

AFAIK OpenNI does its own registration (factory setting), and you can toggle that registration as well. If you've built OpenCV with OpenNI support, it's as simple as this:

capture.set(CV_CAP_PROP_OPENNI_REGISTRATION,1);

As explained here, and there's a minimal OpenNI/OpenCV example here. So a minimal working sample would look like this:

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int main(){
    VideoCapture capture;
    capture.open(CV_CAP_OPENNI);
    //registration
    if(capture.get( CV_CAP_PROP_OPENNI_REGISTRATION ) == 0) capture.set(CV_CAP_PROP_OPENNI_REGISTRATION,1);

    if( !capture.isOpened() ){
        cout << "Can not open a capture object." << endl;
        return -1;
    }
    cout << "ready" << endl;

    for(;;){
        Mat depthMap,depthShow;
        if( !capture.grab() ){
            cout << "Can not grab images." << endl;
            return -1;
        }else{
            if( capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP ) ){
                const float scaleFactor = 0.05f; //map 16-bit depth in mm (up to ~5m) into the 8-bit display range
                depthMap.convertTo( depthShow, CV_8UC1, scaleFactor );
                imshow("depth",depthShow);
            }
        }
        if( waitKey( 30 ) == 27 )    break;//esc to exit
    }

}

If you don't have OpenCV built with OpenNI support, you should be able to use GetAlternativeViewPointCap()
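
With the native OpenNI 1.x C++ API, that would look roughly like the sketch below (untested; the context and generator setup is reduced to the bare minimum and error handling is mostly trimmed):

#include <XnCppWrapper.h>
#include <iostream>

int main(){
    xn::Context context;
    if( context.Init() != XN_STATUS_OK ) return -1;

    xn::DepthGenerator depth;
    xn::ImageGenerator image;
    if( depth.Create(context) != XN_STATUS_OK ) return -1;
    if( image.Create(context) != XN_STATUS_OK ) return -1;

    //if supported, render the depth map from the RGB camera's viewpoint,
    //i.e. register depth to color in the driver
    if( depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT) ){
        XnStatus rc = depth.GetAlternativeViewPointCap().SetViewPoint(image);
        if( rc != XN_STATUS_OK )
            std::cout << "SetViewPoint failed: " << xnGetStatusString(rc) << std::endl;
    }

    context.StartGeneratingAll();
    //...grab registered depth + RGB frames here...
    context.Release();
    return 0;
}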


