OpenCV - RobustMatcher using findHomography


Problem description

I've implemented a robust matcher found on the internet, based on three different tests: a symmetry test, a ratio test, and a RANSAC test. It works well. I then used findHomography in order to keep only the good matches.

Here is the code:

    RobustMatcher::RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3.0) {
        detector = new cv::SurfFeatureDetector(400); // SURF worked better than ORB here
        //detector = new cv::SiftFeatureDetector;
        //extractor = new cv::OrbDescriptorExtractor();
        //extractor = new cv::SiftDescriptorExtractor;
        extractor = new cv::SurfDescriptorExtractor;
        //matcher = new cv::FlannBasedMatcher;
        matcher = new cv::BFMatcher();
    }
    // Clear matches for which the NN ratio is greater than the threshold;
    // return the number of removed points
    // (the corresponding entries are cleared, i.e. their size becomes 0)
    int RobustMatcher::ratioTest(std::vector<std::vector<cv::DMatch> > &matches) {
      int removed=0;
        // for all matches
      for (std::vector<std::vector<cv::DMatch> >::iterator
               matchIterator= matches.begin();
           matchIterator!= matches.end(); ++matchIterator) {
             // if 2 NN has been identified
             if (matchIterator->size() > 1) {
                 // check distance ratio
                 if ((*matchIterator)[0].distance/
                     (*matchIterator)[1].distance > ratio) {
                    matchIterator->clear(); // remove match
                    removed++;
                 }
             } else { // does not have 2 neighbours
                 matchIterator->clear(); // remove match
                 removed++;
             }
      }
      return removed;
    }

    // Insert symmetrical matches in symMatches vector
    void RobustMatcher::symmetryTest(
        const std::vector<std::vector<cv::DMatch> >& matches1,
        const std::vector<std::vector<cv::DMatch> >& matches2,
        std::vector<cv::DMatch>& symMatches) {
      // for all matches image 1 -> image 2
      for (std::vector<std::vector<cv::DMatch> >::
               const_iterator matchIterator1= matches1.begin();
           matchIterator1!= matches1.end(); ++matchIterator1) {
         // ignore deleted matches
         if (matchIterator1->size() < 2)
             continue;
         // for all matches image 2 -> image 1
         for (std::vector<std::vector<cv::DMatch> >::
            const_iterator matchIterator2= matches2.begin();
             matchIterator2!= matches2.end();
             ++matchIterator2) {
             // ignore deleted matches
             if (matchIterator2->size() < 2)
                continue;
             // Match symmetry test
             if ((*matchIterator1)[0].queryIdx ==
                 (*matchIterator2)[0].trainIdx &&
                 (*matchIterator2)[0].queryIdx ==
                 (*matchIterator1)[0].trainIdx) {
                 // add symmetrical match
                   symMatches.push_back(
                     cv::DMatch((*matchIterator1)[0].queryIdx,
                               (*matchIterator1)[0].trainIdx,
                               (*matchIterator1)[0].distance));
                   break; // next match in image 1 -> image 2
             }
         }
      }
    }

    // Identify good matches using RANSAC
    // Return the fundamental matrix
    cv::Mat RobustMatcher::ransacTest(const std::vector<cv::DMatch>& matches,
            const std::vector<cv::KeyPoint>& keypoints1,
            const std::vector<cv::KeyPoint>& keypoints2,
            std::vector<cv::DMatch>& outMatches) {
        // Convert keypoints into Point2f
        std::vector<cv::Point2f> points1, points2;
        cv::Mat fundemental;
        for (std::vector<cv::DMatch>::const_iterator it= matches.begin();it!= matches.end(); ++it) {
            // Get the position of left keypoints
            float x= keypoints1[it->queryIdx].pt.x;
            float y= keypoints1[it->queryIdx].pt.y;
            points1.push_back(cv::Point2f(x,y));
            // Get the position of right keypoints
            x= keypoints2[it->trainIdx].pt.x;
            y= keypoints2[it->trainIdx].pt.y;
            points2.push_back(cv::Point2f(x,y));
        }
        // Compute F matrix using RANSAC
        std::vector<uchar> inliers(points1.size(),0);
        if (points1.size()>0&&points2.size()>0){
            // assign to the outer 'fundemental' declared above: redeclaring a
            // local cv::Mat here would shadow it and the function would then
            // return an empty matrix whenever refineF is false
            fundemental= cv::findFundamentalMat(
                        cv::Mat(points1),cv::Mat(points2), // matching points
                        inliers,       // match status (inlier or outlier)
                        CV_FM_RANSAC, // RANSAC method
                        distance,      // distance to epipolar line
                        confidence); // confidence probability
            // extract the surviving (inliers) matches
            std::vector<uchar>::const_iterator itIn= inliers.begin();
            std::vector<cv::DMatch>::const_iterator itM= matches.begin();
            // for all matches
            for ( ;itIn!= inliers.end(); ++itIn, ++itM) {
                if (*itIn) { // it is a valid match
                    outMatches.push_back(*itM);
                }
            }
            if (refineF) {
                // The F matrix will be recomputed with
                // all accepted matches
                // Convert keypoints into Point2f
                // for final F computation
                points1.clear();
                points2.clear();
                for (std::vector<cv::DMatch>::const_iterator it= outMatches.begin();it!= outMatches.end(); ++it) {
                    // Get the position of left keypoints
                    float x= keypoints1[it->queryIdx].pt.x;
                    float y= keypoints1[it->queryIdx].pt.y;
                    points1.push_back(cv::Point2f(x,y));
                    // Get the position of right keypoints
                    x= keypoints2[it->trainIdx].pt.x;
                    y= keypoints2[it->trainIdx].pt.y;
                    points2.push_back(cv::Point2f(x,y));
                }
                // Compute 8-point F from all accepted matches
                if (points1.size()>0&&points2.size()>0){
                    fundemental= cv::findFundamentalMat(cv::Mat(points1),cv::Mat(points2), // matches
                               CV_FM_8POINT); // 8-point method
                }
            }
        }
        return fundemental;
    }

    // Match feature points using the ratio test, symmetry test and RANSAC
    // returns the fundamental matrix
    cv::Mat RobustMatcher::match(cv::Mat& image1,
        cv::Mat& image2, // input images
       // output matches and keypoints
       std::vector<cv::DMatch>& matches,
       std::vector<cv::KeyPoint>& keypoints1,
       std::vector<cv::KeyPoint>& keypoints2) {
     if (!matches.empty()){
         matches.clear();
     }
     // 1a. Detection of the SURF features
     detector->detect(image1,keypoints1);
     detector->detect(image2,keypoints2);
     // 1b. Extraction of the SURF descriptors
     /*cv::Mat img_keypoints;
     cv::Mat img_keypoints2;
     drawKeypoints( image1, keypoints1, img_keypoints, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
     drawKeypoints( image2, keypoints2, img_keypoints2, Scalar::all(-1), DrawMatchesFlags::DEFAULT );
     //-- Show detected (drawn) keypoints
     //cv::imshow("Result keypoints detected", img_keypoints);
    // cv::imshow("Result keypoints detected", img_keypoints2);

     cv::waitKey(5000);*/

     cv::Mat descriptors1, descriptors2;
     extractor->compute(image1,keypoints1,descriptors1);
     extractor->compute(image2,keypoints2,descriptors2);
     // 2. Match the two image descriptors
     // Construction of the matcher
     //cv::BruteForceMatcher<cv::L2<float>> matcher;
     // from image 1 to image 2
     // based on k nearest neighbours (with k=2)
     std::vector<std::vector<cv::DMatch> > matches1;
     matcher->knnMatch(descriptors1,descriptors2,
         matches1, // vector of matches (up to 2 per entry)
         2);        // return 2 nearest neighbours
      // from image 2 to image 1
      // based on k nearest neighbours (with k=2)
      std::vector<std::vector<cv::DMatch> > matches2;
      matcher->knnMatch(descriptors2,descriptors1,
         matches2, // vector of matches (up to 2 per entry)
         2);        // return 2 nearest neighbours
      // 3. Remove matches for which NN ratio is
      // > than threshold
      // clean image 1 -> image 2 matches
      ratioTest(matches1); // return value (number of removed matches) unused here
      // clean image 2 -> image 1 matches
      ratioTest(matches2);
      // 4. Remove non-symmetrical matches
      std::vector<cv::DMatch> symMatches;
      symmetryTest(matches1,matches2,symMatches);
      // 5. Validate matches using RANSAC
      cv::Mat fundemental= ransacTest(symMatches,
                  keypoints1, keypoints2, matches);
      // return the found fundamental matrix
      return fundemental;
    }
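
The homography part is called after match(). For context, here is a minimal driver sketch (my addition, not from the original post; the image file names and the rmatcher instance are assumptions) showing how the variables used below are produced:

    // Hypothetical driver (assumes the RobustMatcher class above, OpenCV 2.x):
    // produces image1/image2, matches and the keypoints used by the snippet below.
    cv::Mat image1 = cv::imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat image2 = cv::imread("scene.png",  CV_LOAD_IMAGE_GRAYSCALE);

    RobustMatcher rmatcher;
    std::vector<cv::DMatch> matches;
    std::vector<cv::KeyPoint> keypoints_img1, keypoints_img2;
    cv::Mat F = rmatcher.match(image1, image2, matches,
                               keypoints_img1, keypoints_img2);

The homography is then computed and drawn like this: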




            cv::Mat img_matches;

            drawMatches(image1, keypoints_img1,image2, keypoints_img2,
                                 matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                                 vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
                    std::cout << "Number of good matching " << (int)matches.size() << "\n" << endl;

                    if ((int)matches.size() > 5 ){
                        Debug::info("Good matching !");
                    }
                    //-- Localize the object
                    std::vector<Point2f> obj;
                    std::vector<Point2f> scene;

                    for( size_t i = 0; i < matches.size(); i++ )
                    {
                      //-- Get the keypoints from the good matches
                      obj.push_back( keypoints_img1[ matches[i].queryIdx ].pt );
                      scene.push_back( keypoints_img2[matches[i].trainIdx ].pt );
                    }
                    std::vector<uchar> inliers(obj.size(),0);
                    Mat H = findHomography( obj, scene, CV_RANSAC, 3, inliers);


                    //-- Get the corners from the image_1 ( the object to be "detected" )
                    std::vector<Point2f> obj_corners(4);
                    obj_corners[0] = Point2f(0,0); obj_corners[1] = Point2f( image1.cols, 0 );
                    obj_corners[2] = Point2f( image1.cols, image1.rows ); obj_corners[3] = Point2f( 0, image1.rows );
                    std::vector<Point2f> scene_corners(4);


                    perspectiveTransform( obj_corners, scene_corners, H);


                    //-- Draw lines between the corners (the mapped object in the scene - image_2 )
                    line( img_matches, scene_corners[0] + Point2f( image1.cols, 0), scene_corners[1] + Point2f( image1.cols, 0), Scalar(0, 255, 0), 4 );
                    line( img_matches, scene_corners[1] + Point2f( image1.cols, 0), scene_corners[2] + Point2f( image1.cols, 0), Scalar( 0, 255, 0), 4 );
                    line( img_matches, scene_corners[2] + Point2f( image1.cols, 0), scene_corners[3] + Point2f( image1.cols, 0), Scalar( 0, 255, 0), 4 );
                    line( img_matches, scene_corners[3] + Point2f( image1.cols, 0), scene_corners[0] + Point2f( image1.cols, 0), Scalar( 0, 255, 0), 4 );




I get results like this (the homography is good):

But I don't understand why, for some of my results where the matching is good, I get results like this (the homography does not seem good):

Can someone explain this to me? Maybe I have to adjust the parameters? But if I relax the constraints (raise the ratio, for example), then instead of having no matches between two different pictures (which is good), I get a lot of matches... and I don't want that. Besides, the homography doesn't work at all then (I only get a green line, like above).
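
One way to detect such a degenerate homography automatically (a minimal sketch of an extra check, assuming the obj, scene, and obj_corners variables from the snippet above; the thresholds are illustrative) is to look at the RANSAC inlier count and at whether the projected corners still form a convex quadrilateral:

    // Sanity-check a homography: enough RANSAC inliers and a convex projection.
    std::vector<uchar> mask(obj.size(), 0);
    cv::Mat H = cv::findHomography(obj, scene, CV_RANSAC, 3, mask);

    int inlierCount = cv::countNonZero(mask);            // surviving matches
    std::vector<cv::Point2f> scene_corners(4);
    cv::perspectiveTransform(obj_corners, scene_corners, H);

    bool plausible = inlierCount >= 10                   // enough support
                  && cv::isContourConvex(scene_corners); // corners not twisted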

Conversely, my robust matcher works (too) well, that is to say that for different versions of the same picture (just rotated, at a different scale, etc.) it works fine, but when I have two merely similar images, I get no matches at all...

So I don't know how I can get a good computation. I'm a beginner. The robust matcher works well, but only for the exact same image; for two similar images like the ones above, it doesn't work, and this is a problem.

Maybe I'm going about this the wrong way.

Before posting this message, I of course read a lot on Stack Overflow, but I didn't find the answer. (For example, here.)

Answer

It is due to how SURF descriptors work; see http://docs.opencv.org/trunk/doc/py_tutorials/py_feature2d/py_surf_intro/py_surf_intro.html

Basically, with the Droid logo the image is mostly flat color, and it's difficult to find keypoints that are not ambiguous. With the Nike logo, the shape is the same but the intensity ratio in the descriptors is completely different: imagine that on the left the center of a descriptor has intensity 0, and on the right it has intensity 1. Even if you normalize the intensity of the images, you're not going to get a match.
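
One cheap experiment along these lines (a minimal sketch, not part of the original answer; rmatcher is the matcher instance assumed earlier) is to also match against the intensity-inverted image and keep whichever attempt yields more matches:

    // If the logo polarity is reversed (dark-on-light vs. light-on-dark),
    // descriptors computed on the inverted image may match where the
    // originals do not. Assumes the RobustMatcher class from the question.
    cv::Mat image2inv;
    cv::bitwise_not(image2, image2inv); // invert intensities of the scene image

    std::vector<cv::DMatch> matchesOrig, matchesInv;
    std::vector<cv::KeyPoint> k1a, k2a, k1b, k2b;
    rmatcher.match(image1, image2,    matchesOrig, k1a, k2a);
    rmatcher.match(image1, image2inv, matchesInv,  k1b, k2b);

    bool useInverted = matchesInv.size() > matchesOrig.size();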

If your goal is just to match logos, I suggest you look into edge detection algorithms, for example: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
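
A minimal Canny example in the spirit of that tutorial (a sketch; the file name and threshold values are placeholders, OpenCV 2.x API):

    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat img = cv::imread("logo.png", CV_LOAD_IMAGE_GRAYSCALE);
        if (img.empty()) return -1;

        cv::Mat blurred, edges;
        cv::blur(img, blurred, cv::Size(3,3)); // smooth before edge detection
        cv::Canny(blurred, edges, 50, 150, 3); // low/high thresholds, aperture 3

        cv::imshow("Canny edges", edges);
        cv::waitKey(0);
        return 0;
    }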
