Real time 6D pose estimation of known 3D CAD objects from a single 2D image or point clouds from RGBD Camera when objects are one on top of the other?


Problem description

I'm working on a research project where I need to estimate the 6DOF pose of objects in a pick-and-place task in real time. The pose must be estimated in real time, and the objects can be stacked one on top of the other and are identical, so I have to get the position and orientation of the object on top. The problem is that the objects are all the same (PPVC blocks, used in the construction field), but the good thing is that they have a quite regular shape.

So how can I compare the known 3D CAD model with the object on top, using a single 2D image or point clouds from an RGBD camera, so that I can get the 6DOF pose in real time? Any approach can be used, but it must be very accurate, since very high precision is required.

I know that I can use Deep Learning to localize the object and then either compare it with the 3D CAD model or process the point clouds (segmentation, clustering if needed, PCA...), but what if Deep Learning fails and is not able to detect the object with the RGBD camera? What options do I have then? Is the surface matching algorithm https://docs.opencv.org/3.0-beta/modules/surface_matching/doc/surface_matching.html in OpenCV an option?
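
For reference on the OpenCV route: the surface_matching module implements Drost-style Point Pair Features (PPF) and ships with its own ICP refinement. Below is a minimal sketch of how that pipeline is typically driven, assuming the opencv_contrib module is built; "model.ply" and "scene.ply" are hypothetical files containing points with normals, and all parameter values are placeholders, not taken from the question:

// Sketch: PPF (surface matching) pipeline from opencv_contrib.
// "model.ply" / "scene.ply" are hypothetical clouds with normals (x y z nx ny nz).
#include <opencv2/surface_matching.hpp>
#include <opencv2/surface_matching/ppf_helpers.hpp>
#include <vector>

int main()
{
    using namespace cv::ppf_match_3d;

    cv::Mat model = loadPLYSimple("model.ply", 1);   // 1 = load normals too
    cv::Mat scene = loadPLYSimple("scene.ply", 1);

    // Train the detector on the CAD model once (offline step)
    PPF3DDetector detector(0.025, 0.05);             // sampling / distance steps
    detector.trainModel(model);

    // Match against the scene: pose hypotheses sorted by number of votes
    std::vector<Pose3DPtr> results;
    detector.match(scene, results, 1.0 / 40.0, 0.05);

    // Refine the hypotheses with the module's own ICP
    ICP icp(100, 0.005f, 2.5f, 8);
    icp.registerModelToScene(model, scene, results);

    if (!results.empty())
        results[0]->printPose();                     // 4x4 pose of the best hypothesis
    return 0;
}

PPF matching needs no separate detection step, so it can serve as a fallback when the Deep Learning detector fails, although it is typically slower than running ICP on an already segmented cluster.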

I started with PCL, comparing the 3D CAD object model against the PCL scene that contains the actual object, using ICP matching: How to use iterative closest point

My sample code is here:

// Includes and the two global publishers added so that the snippet compiles on its own.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <visualization_msgs/MarkerArray.h>
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>
#include <pcl/common/transforms.h>
#include <pcl_conversions/pcl_conversions.h>

ros::Publisher pub;
ros::Publisher markers_pub_;

void cloud_cb(const sensor_msgs::PointCloud2::ConstPtr &msg){


   // Convert to pcl point cloud
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudIn (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::fromROSMsg(*msg,*cloudIn);

   //Declaration of cloud pointers
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::PointCloud<pcl::PointXYZRGB> finalCloud;
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut_new (new pcl::PointCloud<pcl::PointXYZRGB>) ;


   //Load CAD file
   if(pcl::io::loadPLYFile ("/home/admini/darknet_ros/src/darknet_ros/gb_visual_detection_3d/darknet_ros_3d/darknet_ros_3d/src/rs_box.ply", *cloudOut)==-1)
   {
     PCL_ERROR ("Couldn't read the CAD model file! \n");
     return;   // cloud_cb() returns void, so just bail out if the model is missing
   }


  //std::cout << "\nLoaded file "  << " (" << cloudOut->size () << " points) in " << time.toc () << " ms\n" << std::endl;
  
  pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
  icp.setInputSource(cloudOut);   // setInputCloud() is deprecated in newer PCL
  icp.setInputTarget(cloudIn);
  icp.setMaximumIterations (30);// 30
  icp.setTransformationEpsilon (1e-9);
  icp.setMaxCorrespondenceDistance (2);//2
  icp.setEuclideanFitnessEpsilon (0.6);// 0.0001
  icp.setRANSACOutlierRejectionThreshold (80);//20
  //icp.setRANSACIterations(800);//800

  icp.align(finalCloud);

  if (icp.hasConverged())
  {
      std::cout << "ICP converged." << std::endl
                << "The score is " << icp.getFitnessScore() << std::endl;
      std::cout << "Transformation matrix:" << std::endl;
      std::cout << icp.getFinalTransformation() << std::endl;
  }
  else std::cout << "ICP did not converge." << std::endl;

  Eigen::Matrix4f transformationMatrix = icp.getFinalTransformation ();
  std::cout<<"trans %n"<<transformationMatrix<<std::endl;

  // finalCloud is already the model aligned to the scene, so transform the
  // original CAD cloud instead of applying the pose a second time.
  pcl::transformPointCloud (*cloudOut, *cloudOut_new, transformationMatrix);

   //sensor_msgs::PointCloud2 cloud_icp;
   //pcl::toROSMsg (*cloudOut , cloud_icp);
   //cloudOut ->header.frame_id = "/zed_left_camera_frame";
   //pub.publish (cloud_icp);


   //Convert the rotation part of the transformation matrix to a quaternion
    Eigen::Matrix3f rot_matrix = transformationMatrix.topLeftCorner<3,3>();
    Eigen::Quaternionf q(rot_matrix);
    q.normalize();   // note: normalized() returns a copy, normalize() works in place

    cout << " postion x  = " << transformationMatrix(0,3) <<  endl;
    cout << " postion y  = " << transformationMatrix(1,3) <<  endl;
    cout << " postion z  = " << transformationMatrix(2,3) <<  endl;
    
   //Convert Quaternion to Euler angle
    double roll, pitch, yaw;
    double PI=3.1415926535;
    auto euler = q.toRotationMatrix().eulerAngles(0,1,2);
    std::cout << "Euler from quaternion in roll, pitch, yaw (degree)"<< std::endl << euler*180/PI << std::endl;

  


   //Visualisation Marker
    std::string ns = "icp_pose";   // hypothetical namespace; was left empty before
    float r = 1.0f;                // r, g, b were uninitialized in the original
    float g = 1.0f;
    float b = 1.0f;
    visualization_msgs::MarkerArray msg_marker;
    visualization_msgs::Marker bbx_marker;
    bbx_marker.header.frame_id = "zed_left_camera_frame";
    bbx_marker.header.stamp = ros::Time::now();
    bbx_marker.ns = ns;
    bbx_marker.type = visualization_msgs::Marker::CUBE;
    bbx_marker.action = visualization_msgs::Marker::ADD;
    bbx_marker.pose.position.x = transformationMatrix(0,3);
    bbx_marker.pose.position.y = transformationMatrix(1,3);
    bbx_marker.pose.position.z = transformationMatrix(2,3);
    // Use the normalized quaternion components directly; scaling them would
    // produce an invalid orientation.
    bbx_marker.pose.orientation.x = q.x();
    bbx_marker.pose.orientation.y = q.y();
    bbx_marker.pose.orientation.z = q.z();
    bbx_marker.pose.orientation.w = q.w();
    bbx_marker.scale.x = 0.09;//0.13
    bbx_marker.scale.y = 0.22;//0.34
    bbx_marker.scale.z = 0.21;//0.25
    bbx_marker.color.b = 0.3*b;
    bbx_marker.color.g = 0.1*g;
    bbx_marker.color.r = 0.1*r;
    bbx_marker.color.a = 0.8;
    bbx_marker.lifetime = ros::Duration();
    msg_marker.markers.push_back(bbx_marker);
    markers_pub_.publish(msg_marker);


}

int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("/zed/zed_node/point_cloud/cloud_registered", 200, cloud_cb);
  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("cloud_icp", 100);
  markers_pub_ = nh.advertise<visualization_msgs::MarkerArray> ("msg_marker", 200);
  ros::spin();

}

But there are two points in the code and algorithm that need further improvement. Firstly, it could be more accurate. Secondly, the update rate needs to be increased; let's say the target is around 25-30 fps. Any help on how to achieve these two improvements?
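
Not part of the original post, but a common first step toward a 25-30 fps update rate is to voxel-grid downsample both clouds before ICP. The sketch below assumes the full ZED cloud is far denser than ICP needs; the 1 cm leaf size is only a placeholder:

// Sketch: voxel-grid downsampling before ICP to reduce the per-frame cost.
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr downsample(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr &cloud, float leaf = 0.01f)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr out(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::VoxelGrid<pcl::PointXYZRGB> vg;
  vg.setInputCloud(cloud);
  vg.setLeafSize(leaf, leaf, leaf);   // voxel edge length in metres
  vg.filter(*out);
  return out;
}

// Hypothetical usage inside cloud_cb(), before the ICP source/target are set:
//   cloudIn  = downsample(cloudIn);
//   cloudOut = downsample(cloudOut);

The per-iteration cost of ICP scales with the number of points, so downsampling usually gives the largest single speed-up; a final ICP pass at full resolution can recover accuracy if necessary.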

Thanks

Answer

Essentially, there are two main steps:

  1. Segmentation: extract a list of objects from the scene
  2. Recognition: match the extracted objects against known objects in a database

There are several methods for both of these steps already in the PCL library.

Have a look at this tutorial as a starting point; it covers both steps:

https://pcl.readthedocs.io/projects/tutorials/en/latest/correspondence_grouping.html#correspondence-grouping
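
As a rough illustration of the segmentation step only (a sketch, not taken from the linked tutorial: it assumes the blocks rest on a dominant support plane and all thresholds are placeholders), one could remove that plane with RANSAC and split what remains into Euclidean clusters:

// Sketch: plane removal + Euclidean clustering as the segmentation step.
// All thresholds are placeholders and assume a metric RGBD cloud.
#include <pcl/point_types.h>
#include <pcl/common/io.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <vector>

std::vector<pcl::PointCloud<pcl::PointXYZRGB>::Ptr> segmentObjects(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr &scene)
{
  // 1. Fit the dominant support plane with RANSAC
  pcl::SACSegmentation<pcl::PointXYZRGB> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);                    // 1 cm, placeholder
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.setInputCloud(scene);
  seg.segment(*inliers, *coeffs);

  // 2. Keep everything that is not part of the plane
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr objects(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::ExtractIndices<pcl::PointXYZRGB> extract;
  extract.setInputCloud(scene);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(*objects);

  // 3. Split the remaining points into clusters, one per candidate object
  pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusterIndices;
  pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
  ec.setClusterTolerance(0.02);                      // 2 cm, placeholder
  ec.setMinClusterSize(500);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusterIndices);

  std::vector<pcl::PointCloud<pcl::PointXYZRGB>::Ptr> clusters;
  for (const auto &idx : clusterIndices)
  {
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr obj(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::copyPointCloud(*objects, idx, *obj);        // indices refer to "objects"
    clusters.push_back(obj);
  }
  return clusters;
}

The cluster with the highest coordinate along the "up" axis of the camera frame would then correspond to the object on top, and it can be fed to ICP or to the correspondence-grouping pipeline from the tutorial.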
