OpenCV on iOS: False matching with SurfFeatureDetector and FlannBasedMatcher

Problem description

I am trying to use OpenCV's feature detection tools in order to decide whether a small sample image exists in a larger scene image or not.
I used the code from here as a reference (without the homography part).

UIImage *sceneImage, *objectImage1;
cv::Mat sceneImageMat, objectImageMat1;
cv::vector<cv::KeyPoint> sceneKeypoints, objectKeypoints1;
cv::Mat sceneDescriptors, objectDescriptors1;
cv::SurfFeatureDetector *surfDetector;
cv::SurfDescriptorExtractor surfExtractor;
cv::FlannBasedMatcher flannMatcher;
cv::vector<cv::DMatch> matches;
int minHessian;
double minDistMultiplier;

minHessian = 400;
minDistMultiplier= 3;
surfDetector = new cv::SurfFeatureDetector(minHessian);

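//-- Load the two images and convert them to grayscale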
sceneImage = [UIImage imageNamed:@"twitter_scene.png"];
objectImage1 = [UIImage imageNamed:@"twitter.png"];

sceneImageMat = cv::Mat(sceneImage.size.height, sceneImage.size.width, CV_8UC1);
objectImageMat1 = cv::Mat(objectImage1.size.height, objectImage1.size.width, CV_8UC1);

cv::cvtColor([sceneImage CVMat], sceneImageMat, CV_RGB2GRAY);
cv::cvtColor([objectImage1 CVMat], objectImageMat1, CV_RGB2GRAY);

if (!sceneImageMat.data || !objectImageMat1.data) {
    NSLog(@"NO DATA");
}

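//-- Detect SURF keypoints in both images and compute their descriptors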
surfDetector->detect(sceneImageMat, sceneKeypoints);
surfDetector->detect(objectImageMat1, objectKeypoints1);

surfExtractor.compute(sceneImageMat, sceneKeypoints, sceneDescriptors);
surfExtractor.compute(objectImageMat1, objectKeypoints1, objectDescriptors1);

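//-- Match each object descriptor against its nearest scene descriptor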
flannMatcher.match(objectDescriptors1, sceneDescriptors, matches);

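//-- Find the smallest and largest descriptor distance over all matches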
double max_dist = 0; double min_dist = 100;

for( int i = 0; i < objectDescriptors1.rows; i++ )
{ 
    double dist = matches[i].distance;
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
}

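//-- Keep only matches within minDistMultiplier times the minimum distance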
cv::vector<cv::DMatch> goodMatches;
for( int i = 0; i < objectDescriptors1.rows; i++ )
{ 
    if( matches[i].distance < minDistMultiplier*min_dist )
    { 
        goodMatches.push_back( matches[i]);
    }
}
NSLog(@"Good matches found: %lu", goodMatches.size());

cv::Mat imageMatches;
cv::drawMatches(objectImageMat1, objectKeypoints1, sceneImageMat, sceneKeypoints, goodMatches, imageMatches, cv::Scalar::all(-1), cv::Scalar::all(-1),
                cv::vector<char>(), cv::DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

cv::vector<cv::Point2f> obj, scn;
for( size_t i = 0; i < goodMatches.size(); i++ )
{
    //-- Get the keypoints from the good matches
    //-- (queryIdx indexes the object keypoints, trainIdx the scene keypoints)
    obj.push_back( objectKeypoints1[ goodMatches[i].queryIdx ].pt );
    scn.push_back( sceneKeypoints[ goodMatches[i].trainIdx ].pt );
}

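//-- Fit a homography with RANSAC; outputMask marks which matches are inliers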
cv::vector<uchar> outputMask;
cv::Mat homography = cv::findHomography(obj, scn, CV_RANSAC, 3, outputMask);
int inlierCounter = 0;
for (int i = 0; i < outputMask.size(); i++) {
    if (outputMask[i] == 1) {
        inlierCounter++;
    }
}
NSLog(@"Inliers percentage: %d", (int)(((float)inlierCounter / (float)outputMask.size()) * 100));

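//-- Project the object's corner points into the scene via the homography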
cv::vector<cv::Point2f> objCorners(4);
objCorners[0] = cv::Point(0,0);
objCorners[1] = cv::Point( objectImageMat1.cols, 0 );
objCorners[2] = cv::Point( objectImageMat1.cols, objectImageMat1.rows );
objCorners[3] = cv::Point( 0, objectImageMat1.rows );

cv::vector<cv::Point2f> scnCorners(4);

cv::perspectiveTransform(objCorners, scnCorners, homography);

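//-- drawMatches puts the object image on the left, so shift the projected corners right by the object's width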
cv::line( imageMatches, scnCorners[0] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[1] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar(0, 255, 0), 4);
cv::line( imageMatches, scnCorners[1] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[2] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);
cv::line( imageMatches, scnCorners[2] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[3] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);
cv::line( imageMatches, scnCorners[3] + cv::Point2f( objectImageMat1.cols, 0), scnCorners[0] + cv::Point2f( objectImageMat1.cols, 0), cv::Scalar( 0, 255, 0), 4);

[self.mainImageView setImage:[UIImage imageWithCVMat:imageMatches]];

This works, but I keep getting a significant number of matches, even when the small image is not part of the larger one.
Here's an example of a good output:

And here's an example of a bad output:

Both outputs are the result of the same code. The only difference is the small sample image.
With results like this, it is impossible for me to know when a sample image is NOT in the larger image.
While doing my research, I found this stackoverflow question. I followed the answer given there, and tried the steps suggested in the "OpenCV 2 Computer Vision Application Programming Cookbook" book, but I wasn't able to make it work with images of different sizes (seems like a limitation of the cv::findFundamentalMat function).

What am I missing? Is there a way to use SurfFeatureDetector and FlannBasedMatcher to know when one sample image is a part of a larger image, and another sample image isn't? Is there a different method which is better for that purpose?

UPDATE:
I updated the code above to include the complete function I use, including the attempt to actually draw the homography. Also, here are 3 images: one scene, and two small objects I'm trying to find in the scene. I'm getting better inlier percentages for the paw icon than for the twitter icon, even though it's the twitter icon that is actually IN the scene. Also, the homography is not drawn for some reason:
Twitter Icon
Paw Icon
Scene

Solution

Your matcher will always match every point from the smaller descriptor list to a point in the larger list. You then have to work out for yourself which of these matches are meaningful and which are not. You can do this by discarding every match that exceeds a maximum allowed descriptor distance, or you can try to find a transformation matrix (e.g. with findHomography) and check whether enough matches correspond to it.
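
As a rough sketch of both checks, here is a minimal example that reuses the variables from the question's listing (matches, objectKeypoints1, sceneKeypoints). The maxDescriptorDistance, minimum inlier count, and minInlierRatio values are made-up thresholds that would need tuning on real data:

// Strategy 1: discard every match whose descriptor distance exceeds an absolute threshold.
double maxDescriptorDistance = 0.25; // made-up threshold; tune for your descriptors
cv::vector<cv::DMatch> filteredMatches;
for (size_t i = 0; i < matches.size(); i++) {
    if (matches[i].distance < maxDescriptorDistance) {
        filteredMatches.push_back(matches[i]);
    }
}

// Strategy 2: fit a homography with RANSAC and require enough matches to agree with it.
bool objectFound = false;
if (filteredMatches.size() >= 4) { // findHomography needs at least 4 point pairs
    cv::vector<cv::Point2f> objPts, scnPts;
    for (size_t i = 0; i < filteredMatches.size(); i++) {
        objPts.push_back(objectKeypoints1[filteredMatches[i].queryIdx].pt);
        scnPts.push_back(sceneKeypoints[filteredMatches[i].trainIdx].pt);
    }
    cv::vector<uchar> inlierMask;
    cv::Mat H = cv::findHomography(objPts, scnPts, CV_RANSAC, 3, inlierMask);
    int inliers = cv::countNonZero(inlierMask);
    double minInlierRatio = 0.5; // made-up ratio; tune on real data
    objectFound = !H.empty()
        && inliers >= 10
        && inliers >= minInlierRatio * (double)inlierMask.size();
}
NSLog(@"Object %@ in the scene", objectFound ? @"is" : @"is NOT");

If a fixed distance threshold proves too brittle, a commonly used alternative is Lowe's ratio test: call knnMatch with k=2 instead of match, and keep a match only when its best distance is clearly smaller (e.g. 0.7 times) than the second-best distance.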
