How to Display a 3D image when we have Depth and rgb Mat's in OpenCV (captured from Kinect)
Problem description
We captured a 3D image using the Kinect with the OpenNI library, and got the rgb and depth images as OpenCV Mat's with this code:
int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" );
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );
    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Video stream settings ok" );
    // If the depth/color synchronisation is not necessary, start is faster :
    //device.setDepthColorSyncEnabled( false );
    // Otherwise, the streams can be synchronized with a reception in the order of our choice :
    device.setDepthColorSyncEnabled( true );
    device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );
    VideoStream** stream = new VideoStream*[2];
    stream[0] = &depth;
    stream[1] = &color;
    puts( "Kinect initialization completed" );
    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        cv::Mat colorcv( cv::Size( 640, 480 ), CV_8UC3, NULL );
        cv::Mat depthcv( cv::Size( 640, 480 ), CV_16UC1, NULL );
        cv::namedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cv::namedWindow( "Depth", CV_WINDOW_AUTOSIZE );
        int changedIndex;
        while ( device.isValid() )
        {
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );
                    if ( depthFrame.isValid() )
                    {
                        depthcv.data = (uchar*) depthFrame.getData();
                        cv::imshow( "Depth", depthcv );
                    }
                    break;
                case 1:
                    color.readFrame( &colorFrame );
                    if ( colorFrame.isValid() )
                    {
                        colorcv.data = (uchar*) colorFrame.getData();
                        cv::cvtColor( colorcv, colorcv, CV_BGR2RGB );
                        cv::imshow( "RGB", colorcv );
                    }
                    break;
                default:
                    puts( "Error retrieving a stream" );
            }
            cv::waitKey( 1 );
        }
        cv::destroyWindow( "RGB" );
        cv::destroyWindow( "Depth" );
    }
    delete[] stream;
    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    device.close();
    OpenNI::shutdown();
    return 0;
}
We added some code to the above, got the RGB and depth Mat's from it, and then processed the RGB with OpenCV.
Now we need to display that image in 3D.
We are using :-
1) Windows 8 x64
2) Visual Studio 2012 x64
3) OpenCV 2.4.10
4) OpenNI 2.2.0.33
5) Kinect1
6) Kinect SDK 1.8.0
Questions :-
1) Can we directly display this image using OpenCV, or do we need an external library?
2) If we need an external library, which is better for this simple task: OpenGL, PCL, or something else?
3) Does PCL support Visual Studio 2012 and OpenNI2? Since PCL comes packed with another version of OpenNI, will the two versions conflict?
Solution: To improve the answer of antarctician: to display the image in 3D you first need to create your point cloud... The RGB and depth images give you the data necessary to create an organized, colored point cloud. To do so, you need to calculate the x, y, z values for each point. The z value comes from the depth pixel, but x and y must be calculated.
To do this, you can do something like the following:
void Viewer::get_pcl( cv::Mat& color_mat, cv::Mat& depth_mat, pcl::PointCloud<pcl::PointXYZRGBA>& cloud )
{
    float x, y, z;
    for ( int j = 0; j < depth_mat.rows; j++ )
    {
        for ( int i = 0; i < depth_mat.cols; i++ )
        {
            // pack the colour of this pixel (the OpenCV Mat stores BGR)
            PCD_BGRA pcd_BGRA;
            pcd_BGRA.B = color_mat.at<cv::Vec3b>( j, i )[0];
            pcd_BGRA.G = color_mat.at<cv::Vec3b>( j, i )[1];
            pcd_BGRA.R = color_mat.at<cv::Vec3b>( j, i )[2];
            pcd_BGRA.A = 0;
            pcl::PointXYZRGBA vertex;
            int depth_value = (int) depth_mat.at<unsigned short>( j, i );
            // find the world coordinates ('depth' is the openni::VideoStream)
            openni::CoordinateConverter::convertDepthToWorld( depth, i, j, (openni::DepthPixel) depth_mat.at<unsigned short>( j, i ), &x, &y, &z );
            // the point is created with depth and color data
            // (limitx_*/limity_*/limitz_* are user-chosen crop bounds, members of Viewer)
            if ( limitx_min <= i && limitx_max >= i && limity_min <= j && limity_max >= j
                 && depth_value != 0 && depth_value <= limitz_max && depth_value >= limitz_min )
            {
                vertex.x = (float) x;
                vertex.y = (float) y;
                vertex.z = (float) depth_value;
            }
            else
            {
                // if the data is outside the boundaries, mark the point invalid
                // (bad_point is typically std::numeric_limits<float>::quiet_NaN())
                vertex.x = bad_point;
                vertex.y = bad_point;
                vertex.z = bad_point;
            }
            vertex.rgb = pcd_BGRA.RGB_float;
            // the point is pushed back in the cloud
            cloud.points.push_back( vertex );
        }
    }
}
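Once the cloud is filled, it can be shown with PCL's simple CloudViewer. A minimal sketch, assuming get_pcl has been made a free function and color_mat/depth_mat were already grabbed from the OpenNI streams as in the capture loop above:

```cpp
#include <opencv2/core/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/visualization/cloud_viewer.h>

// Sketch: build the cloud with get_pcl() from the answer, then display it.
void show_cloud( cv::Mat& color_mat, cv::Mat& depth_mat )
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud( new pcl::PointCloud<pcl::PointXYZRGBA> );
    get_pcl( color_mat, depth_mat, *cloud );   // the function from the answer
    // mark the cloud as organized so viewers/filters can exploit the 640x480 grid
    cloud->width    = depth_mat.cols;
    cloud->height   = depth_mat.rows;
    cloud->is_dense = false;                   // it may contain bad_point NaNs

    pcl::visualization::CloudViewer viewer( "Kinect cloud" );
    viewer.showCloud( cloud );
    while ( !viewer.wasStopped() ) {}          // blocks until the window is closed
}
```

CloudViewer is the no-frills option; for camera control, normals, or multiple clouds, pcl::visualization::PCLVisualizer offers more.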
and PCD_BGRA is
union PCD_BGRA
{
    struct
    {
        uchar B; // LSB
        uchar G; // ---
        uchar R; // MSB
        uchar A; //
    };
    float RGB_float;
    uint  RGB_uint;
};
</union>
Of course, this is for the case where you want to use PCL, but the calculation of the x, y, z values stays more or less the same. It relies on openni::CoordinateConverter::convertDepthToWorld to find the position of the point in 3D. You can also do it manually:
const float invfocalLength = 1.f / 525.f;
const float centerX = 319.5f;
const float centerY = 239.5f;
const float factor = 1.f / 1000.f;

float dist = factor * (float)(*depthdata);
p.x = (x - centerX) * dist * invfocalLength;
p.y = (y - centerY) * dist * invfocalLength;
p.z = dist;
Here centerX, centerY, and the focal length are the intrinsic calibration of the camera (these values are for the Kinect), and the factor determines whether you get the distance in meters or millimeters; this value depends on your program.
For the questions:
1) Yes, you can display it using the latest OpenCV with the viz class, or with another external library that suits your needs.
2) OpenGL is nice, but PCL (or OpenCV) is easier to use if you don't already know either of them (I mean for displaying point clouds).
3) I haven't used it on Windows, but in theory it can be used with Visual Studio 2012. As far as I know, the version PCL comes packed with is OpenNI 1, and it won't conflict with OpenNI2...
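The OpenCV route mentioned in the first answer can be sketched like this, assuming a build of OpenCV that includes the viz module (it requires VTK and is not present in every prebuilt package):

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/viz/vizcore.hpp>

// Sketch: display a coloured point cloud with OpenCV's viz module.
// 'points' is a CV_32FC3 Mat holding an x,y,z triplet per pixel and
// 'colors' is the matching CV_8UC3 RGB image.
void show_with_viz( const cv::Mat& points, const cv::Mat& colors )
{
    cv::viz::Viz3d window( "Kinect cloud" );
    cv::viz::WCloud cloud_widget( points, colors );
    window.showWidget( "cloud", cloud_widget );
    window.spin();   // blocks until the window is closed
}
```

This keeps everything inside OpenCV, at the cost of the extra VTK dependency; PCL's viewers give you more point-cloud-specific tooling.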