Why do we need crossCheckMatching for features?


Problem Description

I have been reading many posts about object detection using feature extraction (SIFT, etc.).

After computing the descriptors on both images, they use crossCheckMatching to obtain good matches (found in sample/cpp/descriptor_extractor_matcher.cpp).

Could someone explain why this choice was made?

Why do we need to evaluate both

descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn ); // image 1 -> image 2
descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn ); // image 2 -> image 1

I don't understand this. Doesn't computing the Euclidean distance, for example, return the same result in both directions?

Answer

You can't generally assume that your matcher uses the Euclidean distance. For instance, BFMatcher supports different norms: L1, L2, Hamming...

You can check the documentation here for more details: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_descriptor_matchers.html

Anyway, all of these distance measures are symmetric, and which one is used does not matter for answering your question.

The answer is: calling knnMatch(A, B) is not the same as calling knnMatch(B, A).

If you don't trust me, I'll try to give you a graphical and intuitive explanation. For the sake of simplicity, I assume knn == 1, so that for each queried descriptor the algorithm finds only one correspondence (much easier to plot :-)

I randomly picked a few 2D samples and created two data-sets (red & green). In the first plot, the green points are the query data-set, meaning that for each green point we try to find the closest red point (each arrow represents a correspondence).

In the second plot, the query & train data-sets have been swapped.

Finally, I also plotted the result of the crossCheckMatching() function, which keeps only the bi-directional matches.

And as you can see, crossCheckMatching()'s output is much better than either single knnMatch(X, Y) / knnMatch(Y, X), since only the strongest correspondences have been kept.

