EmguCV (OpenCV) ORBDetector finding only bad matches


Problem

So I am fairly new to Computer Vision in general. I am currently trying to calculate a homography by analyzing 2 images. I want to use the homography to correct the perspective of 1 image to match the other. But the matches I am getting are just bad and wrong. So the homographic warp I do is completely off.

Current state

I am using EmguCV for wrapping OpenCV in C#. As far as I can tell, my code seems to work "properly".

I load my two images and declare some variables to store calculation artifacts.

    (Image<Bgr, byte> Image, VectorOfKeyPoint Keypoints, Mat Descriptors) imgModel = (new Image<Bgr, byte>(imageFolder + "image0.jpg").Resize(0.2, Emgu.CV.CvEnum.Inter.Area), new VectorOfKeyPoint(), new Mat());
    (Image<Bgr, byte> Image, VectorOfKeyPoint Keypoints, Mat Descriptors) imgTest = (new Image<Bgr, byte>(imageFolder + "image1.jpg").Resize(0.2, Emgu.CV.CvEnum.Inter.Area), new VectorOfKeyPoint(), new Mat());
    Mat imgKeypointsModel = new Mat();
    Mat imgKeypointsTest = new Mat();
    Mat imgMatches = new Mat();
    Mat imgWarped = new Mat();
    VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch();
    VectorOfVectorOfDMatch filteredMatches = new VectorOfVectorOfDMatch();
    List<MDMatch[]> filteredMatchesList = new List<MDMatch[]>();
    

Notice that I use a ValueTuple<Image, VectorOfKeyPoint, Mat> to store the images directly with their respective keypoints and descriptors.

After this I use an ORB detector and a BruteForce matcher to detect, describe and match the keypoints:

    ORBDetector detector = new ORBDetector();
    BFMatcher matcher = new BFMatcher(DistanceType.Hamming2);
    
    detector.DetectAndCompute(imgModel.Image, null, imgModel.Keypoints, imgModel.Descriptors, false);
    detector.DetectAndCompute(imgTest.Image, null, imgTest.Keypoints, imgTest.Descriptors, false);
    
    matcher.Add(imgTest.Descriptors);
    matcher.KnnMatch(imgModel.Descriptors, matches, k: 2, mask: null);
    

After this I apply the ratio test and do some further filtering using a match-distance threshold (ms_MIN_RATIO and ms_MAX_DIST are class-level constants not shown here).

    MDMatch[][] matchesArray = matches.ToArrayOfArray();
    
    //Apply ratio test
    for (int i = 0; i < matchesArray.Length; i++)
    {
      float dist1 = matchesArray[i][0].Distance; // distance of the best match
      float dist2 = matchesArray[i][1].Distance; // distance of the second-best match
    
      if (dist1 < ms_MIN_RATIO * dist2)
      {
        filteredMatchesList.Add(matchesArray[i]);
      }
    }
    
    //Filter by threshold
    MDMatch[][] defCopy = new MDMatch[filteredMatchesList.Count][];
    filteredMatchesList.CopyTo(defCopy);
    filteredMatchesList = new List<MDMatch[]>();
    
    foreach (var item in defCopy)
    {
      if (item[0].Distance < ms_MAX_DIST)
      {
        filteredMatchesList.Add(item);
      }
    }
    
    filteredMatches = new VectorOfVectorOfDMatch(filteredMatchesList.ToArray());
    

Disabling either of these filter steps (i.e. just keeping all matches) doesn't really make my results better or worse, but they seem to make sense, so I keep them.

In the end I calculate my homography from the found and filtered matches, then warp the image with this homography and draw some debug images:

    Mat homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(imgModel.Keypoints, imgTest.Keypoints, filteredMatches, null, 10);
    CvInvoke.WarpPerspective(imgTest.Image, imgWarped, homography, imgTest.Image.Size);
    
    Features2DToolbox.DrawKeypoints(imgModel.Image, imgModel.Keypoints, imgKeypointsModel, new Bgr(0, 0, 255));
    Features2DToolbox.DrawKeypoints(imgTest.Image, imgTest.Keypoints, imgKeypointsTest, new Bgr(0, 0, 255));
    Features2DToolbox.DrawMatches(imgModel.Image, imgModel.Keypoints, imgTest.Image, imgTest.Keypoints, filteredMatches, imgMatches, new MCvScalar(0, 255, 0), new MCvScalar(0, 0, 255));
    
    //Task.Factory.StartNew(() => ImageViewer.Show(imgKeypointsModel, "Keypoints Model"));
    //Task.Factory.StartNew(() => ImageViewer.Show(imgKeypointsTest, "Keypoints Test"));
    Task.Factory.StartNew(() => ImageViewer.Show(imgMatches, "Matches"));
    Task.Factory.StartNew(() => ImageViewer.Show(imgWarped, "Warp"));
    

tl;dr: ORBDetector -> BFMatcher -> FilterMatches -> GetHomography -> WarpPerspective

Output

Image 1: Example output of the algorithm (image omitted).

Image 2: Test of whether the projection is going wrong (image omitted).

Image 3: Using cross-checking when matching (image omitted).

Original images are 2448x3264 each and are scaled by 0.2 before running any calculations on them.

Question

Basically it's as simple yet complex as: what am I doing wrong? As you can see from the examples above, my method of detecting features and matching them just seems to work extremely poorly. So I am asking if someone can maybe spot a mistake in my code, or give advice on why my results are so bad when there are hundreds of examples out on the internet showing how it works and how "easy" it is.

What I tried so far:

• Scaling the input images. I generally get better results if I scale them down quite a bit.
• Detecting more or fewer features. The default of 500 is what is currently used; increasing or decreasing this number didn't really improve my results.
• Various numbers of k, but anything other than k = 2 doesn't make sense to me, as I don't know how to modify the ratio test for k > 2.
• Varying filter parameters, like using a ratio of 0.6-0.9 for the ratio test.
• Using different pictures: a QR code, the silhouette of a dinosaur, some other random objects I had lying around my desk.
• Varying the re-projection threshold from 1 to 10 without any change in the result.
• Verifying that the projection itself is not faulty: feeding the algorithm the same image for model and test, computing the homography, and warping the image with it. The image should not change. This worked as expected (see image 2).
• Image 3: using cross-checking when matching. Looks a lot more promising, but still not really what I am expecting (see the sketch after this list).
• Using other distance methods: Hamming, Hamming2, L2Sqr (others are not supported).
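
Since cross-checking came up above, here is a minimal sketch of that variant, assuming EmguCV's BFMatcher exposes OpenCV's crossCheck flag in its constructor. With cross-checking enabled, only mutual best matches are returned, so k must be 1 and the ratio test is skipped.

    // Cross-check matching sketch (assumes the crossCheck constructor flag).
    BFMatcher crossMatcher = new BFMatcher(DistanceType.Hamming, crossCheck: true);
    VectorOfVectorOfDMatch crossMatches = new VectorOfVectorOfDMatch();
    // Same (query, train) orientation as the question code above: test descriptors
    // are added to the matcher, model descriptors are used as the query set.
    crossMatcher.Add(imgTest.Descriptors);
    crossMatcher.KnnMatch(imgModel.Descriptors, crossMatches, k: 1, mask: null);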

Examples I used:

Original Images: The original images can be downloaded from here: https://drive.google.com/open?id=1Nlqv_0sH8t1wiH5PG-ndMxoYhsUbFfkC

Further Experiments since asking

So I did some further research after asking. Most changes are already included above, but I wanted to make a separate section for this one. After running into so many problems and seemingly having nowhere to start, I decided to google the original paper on ORB and try to replicate some of their results. Upon trying this I realised that even if I try to match an image against a copy of itself rotated by a single degree, the matches seem to look fine but the transformation completely breaks down.

Is it possible that my method of trying to replicate the perspective of an object is just wrong?

MCVE

https://drive.google.com/open?id=17DwFoSmco9UezHkON5prk8OsPalmp2MX (without packages, but a nuget restore will be enough to get it to compile)


Solution

Problem 1

The biggest problem was actually quite an easy one: I had accidentally flipped my model and test descriptors when matching:

    // Wrong: this adds the test descriptors as the matcher's "train" set
    matcher.Add(imgTest.Descriptors);
    matcher.KnnMatch(imgModel.Descriptors, matches, 1, null);
    

But if you look at the documentation of these functions, you will see that you have to add the model(s) to the matcher and then match the test image against them: the descriptors registered via Add form the "train" set, while the descriptors passed to KnnMatch form the "query" set.

    // Correct: add the model descriptors, then query with the test descriptors
    matcher.Add(imgModel.Descriptors);
    matcher.KnnMatch(imgTest.Descriptors, matches, 1, null);
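
With this order, each MDMatch's QueryIdx indexes the test keypoints and its TrainIdx indexes the model keypoints. A minimal sketch of the mapping, using only types already shown above (assumes at least one match exists):

    MDMatch[][] arr = matches.ToArrayOfArray();
    MDMatch m = arr[0][0];                                        // best match for the first test keypoint
    MKeyPoint testKp = imgTest.Keypoints.ToArray()[m.QueryIdx];   // query = test image
    MKeyPoint modelKp = imgModel.Keypoints.ToArray()[m.TrainIdx]; // train = model image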
    

Problem 2

I still don't know why, but Features2DToolbox.GetHomographyMatrixFromMatchedFeatures seems to be broken, and my homography was always wrong, warping the image in a strange way (similar to the examples above).

To fix this I went ahead and directly used the wrapped invoke of OpenCV's FindHomography(srcPoints, destPoints, method). To be able to do this I had to write a little helper to get my data structures into the right format:

    public static Mat GetHomography(VectorOfKeyPoint keypointsModel, VectorOfKeyPoint keypointsTest, List<MDMatch[]> matches)
    {
      MKeyPoint[] kptsModel = keypointsModel.ToArray();
      MKeyPoint[] kptsTest = keypointsTest.ToArray();
    
      PointF[] srcPoints = new PointF[matches.Count];
      PointF[] destPoints = new PointF[matches.Count];
    
      for (int i = 0; i < matches.Count; i++)
      {
        srcPoints[i] = kptsModel[matches[i][0].TrainIdx].Point;   // train = model
        destPoints[i] = kptsTest[matches[i][0].QueryIdx].Point;   // query = test
      }
    
      Mat homography = CvInvoke.FindHomography(srcPoints, destPoints, Emgu.CV.CvEnum.HomographyMethod.Ransac);
    
      //PrintMatrix(homography);
    
      return homography;
    }
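
For completeness, a minimal sketch of how this helper would slot into the pipeline above, replacing the Features2DToolbox.GetHomographyMatrixFromMatchedFeatures call (filteredMatchesList is the filtered match list built earlier):

    Mat homography = GetHomography(imgModel.Keypoints, imgTest.Keypoints, filteredMatchesList);
    CvInvoke.WarpPerspective(imgTest.Image, imgWarped, homography, imgTest.Image.Size);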
    

Results

Now everything works fine and as expected. (Result images omitted.)
