OpenCV findFundamentalMat very unstable and sensitive


Problem description

I'm working on a project for my university where we want a quadrocopter to stabilize itself with its camera. Unfortunately, the fundamental matrix reacts very sensitively to small changes in the feature points; I'll give you examples later on.

I think my matching already works pretty well, thanks to OpenCV. I'm using SURF features and match them with the kNN method:

    SurfFeatureDetector surf_detect;
    surf_detect = SurfFeatureDetector(400);

    //detect keypoints
    surf_detect.detect(fr_one.img, fr_one.kp);
    surf_detect.detect(fr_two.img, fr_two.kp);

    //extract keypoints
    SurfDescriptorExtractor surf_extract;
    surf_extract.compute(fr_one.img, fr_one.kp, fr_one.descriptors);
    surf_extract.compute(fr_two.img, fr_two.kp, fr_two.descriptors);

    //match keypoints
    vector<vector<DMatch> > matches1,matches2;
    vector<DMatch> symMatches,goodMatches;
    FlannBasedMatcher flann_match;

    flann_match.knnMatch(fr_one.descriptors, fr_two.descriptors, matches1,2);
    flann_match.knnMatch(fr_two.descriptors, fr_one.descriptors, matches2,2);

    //test matches in both ways
    symmetryTest(matches1,matches2,symMatches);

    std::vector<cv::Point2f> points1, points2;
    for (std::vector<cv::DMatch>::const_iterator it= symMatches.begin();
       it!= symMatches.end(); ++it)
    {
        //left keypoints
        float x= fr_one.kp[it->queryIdx].pt.x;
        float y= fr_one.kp[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x,y));
        //right keypoints
        x = fr_two.kp[it->trainIdx].pt.x;
        y = fr_two.kp[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x,y));
    }

    //kill outliers with ransac
    vector<uchar> inliers(points1.size(),0);
    findFundamentalMat(Mat(points1),Mat(points2),
                inliers,CV_FM_RANSAC,3.f,0.99f);

    std::vector<uchar>::const_iterator
    itIn= inliers.begin();
    std::vector<cv::DMatch>::const_iterator
    itM= symMatches.begin();
    for ( ;itIn!= inliers.end(); ++itIn, ++itM)
    {
        if (*itIn)
        {
            goodMatches.push_back(*itM);
        }
    }
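
The symmetryTest helper used above is not part of OpenCV and its implementation isn't shown in the question. As a point of reference only, a minimal sketch of a typical cross-check (assuming the two-way kNN matches computed above, and keeping a match only when it is the best candidate in both directions) might look like this:

    // Hypothetical cross-check sketch -- not the question's original helper.
    // Keeps a match only if the best match image1 -> image2 is also the
    // best match image2 -> image1.
    void symmetryTest(const std::vector<std::vector<cv::DMatch> >& matches1,
                      const std::vector<std::vector<cv::DMatch> >& matches2,
                      std::vector<cv::DMatch>& symMatches)
    {
        symMatches.clear();
        for (size_t i = 0; i < matches1.size(); ++i)
        {
            if (matches1[i].empty())
                continue;
            const cv::DMatch& m1 = matches1[i][0];            // best match image 1 -> image 2
            if (m1.trainIdx < 0 || m1.trainIdx >= (int)matches2.size()
                || matches2[m1.trainIdx].empty())
                continue;
            const cv::DMatch& m2 = matches2[m1.trainIdx][0];  // best match image 2 -> image 1
            if (m2.trainIdx == m1.queryIdx)                   // both directions agree
                symMatches.push_back(m1);
        }
    }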



Now I want to compute the fundamental matrix from these matches. I'm using the 8-point method in this example; I already tried LMEDS and RANSAC, and there it only gets worse, because then even more matches change between runs.

    vector<int> pointIndexes1;
    vector<int> pointIndexes2;
    for (vector<DMatch>::const_iterator it= goodMatches.begin();
         it!= goodMatches.end(); ++it) {
             pointIndexes1.push_back(it->queryIdx);
             pointIndexes2.push_back(it->trainIdx);
    }
    vector<Point2f> selPoints1, selPoints2;
    KeyPoint::convert(fr_one.kp,selPoints1,pointIndexes1);
    KeyPoint::convert(fr_two.kp,selPoints2,pointIndexes2);

    Mat F = findFundamentalMat(Mat(selPoints1),Mat(selPoints2),CV_FM_8POINT);

When I call these calculations in a loop on the same pair of images, the result of F varies greatly; there is no way to extract motion from such calculations.

I generated an example where I filtered out some matches so that you can see the effect I mentioned for yourselves.

http://abload.de/img/div_c_01ascel.png

http://abload.de/img/div_c_02zpflj.png

Is there something wrong with my code, or do I have to consider other factors, such as image quality?

Thanks in advance for the help!
derfreak

Recommended answer

Even if your algorithm is correct, 8-point F-matrix computation is very error-prone due to image noise. The fewer correspondences you use, the better. The best you can do is a 5-point essential (E) matrix computation, but that requires you to pre-calibrate the camera and to convert the detected pixel image points after SIFT/SURF into normalized coordinates (metric pixel locations). Then apply Nistér's 5-point algorithm, either from the freely available Matlab implementation or from Bundler (the C++ implementation by Noah Snavely). In my experience with SfM, the 5-point E matrix is much more stable than 7- or 8-point F-matrix computation. And of course, do RANSAC after the 5-point estimation to get more robust estimates. Hope this helps.
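
To make that suggestion concrete: the original answer points to Nistér's Matlab code or Bundler, but newer OpenCV versions (3.0 and later) also expose a 5-point solver directly via cv::findEssentialMat, and cv::recoverPose can then decompose E into a relative rotation and a translation direction. The snippet below is only a sketch under those assumptions; it presumes a pre-calibrated camera, uses placeholder intrinsics fx, fy, cx, cy, and reuses the points1/points2 vectors from the question.

    // Sketch only: assumes OpenCV >= 3.0 and a pre-calibrated camera.
    // fx, fy, cx, cy are placeholder intrinsics from your own calibration;
    // points1/points2 are the matched pixel coordinates from the question.
    cv::Mat K = (cv::Mat_<double>(3,3) << fx,  0, cx,
                                           0, fy, cy,
                                           0,  0,  1);

    cv::Mat inlierMask;
    // Nister's 5-point algorithm inside RANSAC; the threshold is in pixels.
    cv::Mat E = cv::findEssentialMat(points1, points2, K,
                                     cv::RANSAC, 0.999, 1.0, inlierMask);

    // Decompose E into a relative rotation R and a translation direction t
    // (t is only recovered up to scale), reusing the RANSAC inlier mask.
    cv::Mat R, t;
    cv::recoverPose(E, points1, points2, K, R, t, inlierMask);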
