How to Display a 3D image when we have Depth and rgb Mat's in OpenCV (captured from Kinect)


Problem description

We captured a 3D image using a Kinect with the OpenNI library and got the RGB and depth images in the form of OpenCV Mats using this code.

#include <OpenNI.h>
#include <opencv2/opencv.hpp>

using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" ); 
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );
    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Video stream settings ok" );

    // If the depth/color synchronisation is not necessary, start is faster :
    //device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronized with a reception in the order of our choice :
    device.setDepthColorSyncEnabled( true );
    device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream* stream[2] = { &depth, &color }; // no heap allocation needed
    puts( "Kinect initialization completed" );


    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Header-only Mats: the data pointer is set each frame from the OpenNI
        // buffers, so no pixels are copied.
        cv::Mat colorcv( cv::Size( 640, 480 ), CV_8UC3, NULL );
        cv::Mat depthcv( cv::Size( 640, 480 ), CV_16UC1, NULL );
        cv::namedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cv::namedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );

                    if ( depthFrame.isValid() )
                    {
                        depthcv.data = (uchar*) depthFrame.getData();
                        cv::imshow( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );

                    if ( colorFrame.isValid() )
                    {
                        colorcv.data = (uchar*) colorFrame.getData();
                        cv::cvtColor( colorcv, colorcv, CV_RGB2BGR ); // OpenNI delivers RGB; imshow expects BGR
                        cv::imshow( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cv::waitKey( 1 );
        }

        cv::destroyWindow( "RGB" );
        cv::destroyWindow( "Depth" );
    }
    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    device.close();
    OpenNI::shutdown();
}

We added some code to the above, got the RGB and depth Mats from it, and then processed the RGB using OpenCV.

Now we need to display that image in 3D.

We are using:-

1) Windows 8 x64

2) Visual Studio 2012 x64

3) OpenCV 2.4.10

4) OpenNI 2.2.0.33

5) Kinect1

6) Kinect SDK 1.8.0

Questions:-

1) Can we directly display this image using OpenCV, or do we need any external libraries?

2) If we need external libraries, which one is better for this simple task: OpenGL, PCL, or any other?

3) Does PCL support Visual Studio 2012 and OpenNI2? And since PCL comes packed with another version of OpenNI, do these two versions conflict?

Recommended answer

To improve antarctician's answer: to display the image in 3D you need to create your point cloud first. The RGB and depth images give you the data needed to create an organized, colored point cloud. To do so, you need to calculate the x, y, z values for each point. The z value comes from the depth pixel, but x and y must be calculated.

To do that, you can do something like this:

// limitx_min/max, limity_min/max, limitz_min/max are member variables that crop
// the cloud; bad_point is typically std::numeric_limits<float>::quiet_NaN().
void Viewer::get_pcl(cv::Mat& color_mat, cv::Mat& depth_mat, pcl::PointCloud<pcl::PointXYZRGBA>& cloud ){
    float x, y, z;

    for (int j = 0; j < depth_mat.rows; j++){
        for (int i = 0; i < depth_mat.cols; i++){
            // the RGB data is packed into the union
            PCD_BGRA pcd_BGRA;
            pcd_BGRA.B = color_mat.at<cv::Vec3b>(j,i)[0];
            pcd_BGRA.G = color_mat.at<cv::Vec3b>(j,i)[1];
            pcd_BGRA.R = color_mat.at<cv::Vec3b>(j,i)[2];
            pcd_BGRA.A = 0;

            pcl::PointXYZRGBA vertex;
            int depth_value = (int) depth_mat.at<unsigned short>(j,i);
            // find the world coordinates
            openni::CoordinateConverter::convertDepthToWorld( depth, i, j, (openni::DepthPixel) depth_mat.at<unsigned short>(j,i), &x, &y, &z );

            // the point is created with depth and color data
            if ( limitx_min <= i && limitx_max >= i && limity_min <= j && limity_max >= j && depth_value != 0 && depth_value <= limitz_max && depth_value >= limitz_min ){
                vertex.x = (float) x;
                vertex.y = (float) y;
                vertex.z = (float) depth_value;
            } else {
                // the data is outside the boundaries
                vertex.x = bad_point;
                vertex.y = bad_point;
                vertex.z = bad_point;
            }
            vertex.rgb = pcd_BGRA.RGB_float;

            // the point is pushed back into the cloud
            cloud.points.push_back( vertex );
        }
    }
}

Where PCD_BGRA is:

union PCD_BGRA
{
    struct
    {
        uchar B; // LSB on little-endian machines
        uchar G;
        uchar R;
        uchar A; // MSB
    };
    float RGB_float;
    uint  RGB_uint;
};

Of course, this is for the case where you want to use PCL, but the x, y, z calculation is more or less the same either way. It relies on openni::CoordinateConverter::convertDepthToWorld to find the position of the point in 3D. You can also do it manually:

 const float invfocalLength = 1.f / 525.f;
 const float centerX = 319.5f;
 const float centerY = 239.5f;
 const float factor = 1.f / 1000.f;

 float dist = factor * (float)(*depthdata);
 p.x = (x-centerX) * dist * invfocalLength;
 p.y = (y-centerY) * dist * invfocalLength;
 p.z = dist;

Where centerX, centerY, and the focal length are the intrinsic calibration of the camera (these values are for the Kinect). The factor depends on whether you need the distance in meters or millimeters; that choice is up to your program.

For your questions:

  1. Yes, you can display it using the latest OpenCV with the viz class, or with another external library that suits your needs.
  2. OpenGL is nice, but PCL (or OpenCV) is easier to use if you do not already know either of them (for displaying point clouds, that is).
  3. I haven't used it on Windows, but in theory it can be used with Visual Studio 2012. As far as I know, the version PCL comes packed with is OpenNI 1, and it won't affect OpenNI2...
