OpenCV Android extract match Mat after feature detection


Problem Description


I want to extract the part of the image that matches my reference image. I try to transform the image using the Calib3d.findHomography method. When I have done this, I use Imgproc.warpPerspective to do the transformation, but with no good results. Am I missing something here? Do I need to do something with perspectiveTransform? I have tried this, but without any luck so far.
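For context on the perspectiveTransform question: Core.perspectiveTransform maps individual points through the homography, whereas warpPerspective resamples a whole image. The per-point mapping it performs can be sketched in plain Java (illustrative only; a 3x3 homography stored row-major, no OpenCV types):

```java
public class HomographyPoint {
    // Apply a 3x3 homography h (row-major, 9 values) to the point (x, y),
    // including the perspective divide by the homogeneous coordinate w.
    // This is the per-point mapping that Core.perspectiveTransform performs.
    public static double[] applyHomography(double[] h, double x, double y) {
        double w  = h[6] * x + h[7] * y + h[8];
        double xp = (h[0] * x + h[1] * y + h[2]) / w;
        double yp = (h[3] * x + h[4] * y + h[5]) / w;
        return new double[] { xp, yp };
    }
}
```

For example, the pure-translation homography {1, 0, 5, 0, 1, 3, 0, 0, 1} sends (2, 2) to (7, 5), which is what perspectiveTransform would return for the same matrix and point.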

Here is my findSceneCorners method:

private void findSceneCorners(Mat src) {
    mFeatureDetector.detect(src, mSceneKeypoints);
    mDescriptorExtractor.compute(src, mSceneKeypoints, mSceneDescriptors); 
    mDescriptorMatcher.match(mSceneDescriptors, mReferenceDescriptors, mMatches);

    List<DMatch> matchesList = mMatches.toList();
    if (matchesList.size() < 4) {
        // There are too few matches to find the homography.
        return;
    }

    List<KeyPoint> referenceKeypointsList =
            mReferenceKeypoints.toList();
    List<KeyPoint> sceneKeypointsList =
            mSceneKeypoints.toList();

    // Calculate the max and min distances between keypoints.
    double maxDist = 0.0;
    double minDist = Double.MAX_VALUE;
    for(DMatch match : matchesList) {
        double dist = match.distance;
        if (dist < minDist) {
            minDist = dist;
        }
        if (dist > maxDist) {
            maxDist = dist;
        }
    }

    // The thresholds for minDist are chosen subjectively
    // based on testing. The unit is not related to pixel
    // distances; it is related to the number of failed tests
    // for similarity between the matched descriptors.
    if (minDist > 50.0) {
        // The target is completely lost.
        // Discard any previously found corners.
        mSceneCorners.create(0, 0, mSceneCorners.type());
        return;
    } else if (minDist > 20.0) {
        // The target is lost but maybe it is still close.
        // Keep any previously found corners.
        return;
    }


    // Identify "good" keypoints based on match distance.
    ArrayList<Point> goodReferencePointsList =
            new ArrayList<Point>();
    ArrayList<Point> goodScenePointsList =
            new ArrayList<Point>();

    double maxGoodMatchDist = 1.75 * minDist;
    for(DMatch match : matchesList) {
        if (match.distance < maxGoodMatchDist) {
           goodReferencePointsList.add(
                   referenceKeypointsList.get(match.trainIdx).pt);
           goodScenePointsList.add(
                   sceneKeypointsList.get(match.queryIdx).pt);
        }
    }

    if (goodReferencePointsList.size() < 4 ||
            goodScenePointsList.size() < 4) {
        // There are too few good points to find the homography.
        return;
    }

    Log.i(TAG, "Match found");

    MatOfPoint2f goodReferencePoints = new MatOfPoint2f();
    goodReferencePoints.fromList(goodReferencePointsList);

    MatOfPoint2f goodScenePoints = new MatOfPoint2f();
    goodScenePoints.fromList(goodScenePointsList);

    homography = Calib3d.findHomography(goodReferencePoints, goodScenePoints);

    Mat quad = new Mat(mReferenceImage.size(), CvType.CV_32F);
    Imgproc.warpPerspective(src, quad, homography, quad.size());
    objectDetectedListener.objectDetected(quad);

}

Solution

I think you should use WARP_INVERSE_MAP as the flag to warpPerspective, as in: Imgproc.warpPerspective(src, quad, homography, quad.size(), Imgproc.WARP_INVERSE_MAP);.

I have not used exactly your code, only the part after the homography, and I have seen that the image was warped mirrored, not as we wanted (use a bigger display image to see exactly what's there). In fact, on the page you posted, with the 10-card, that flag is used; sorry I did not think of mentioning this earlier.
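An equivalent alternative to passing the flag, as I understand warpPerspective, is to invert the homography yourself and warp with the inverse, since WARP_INVERSE_MAP simply tells warpPerspective to treat the given matrix as the inverse map. The round trip can be checked in plain Java (illustrative only, no OpenCV types): a point mapped through H and then through H's inverse should come back unchanged.

```java
public class HomographyInverse {
    // Invert a 3x3 matrix m (row-major) via the adjugate; assumes det != 0.
    public static double[] invert3x3(double[] m) {
        double det = m[0] * (m[4] * m[8] - m[5] * m[7])
                   - m[1] * (m[3] * m[8] - m[5] * m[6])
                   + m[2] * (m[3] * m[7] - m[4] * m[6]);
        double[] inv = new double[9];
        inv[0] =  (m[4] * m[8] - m[5] * m[7]) / det;
        inv[1] = -(m[1] * m[8] - m[2] * m[7]) / det;
        inv[2] =  (m[1] * m[5] - m[2] * m[4]) / det;
        inv[3] = -(m[3] * m[8] - m[5] * m[6]) / det;
        inv[4] =  (m[0] * m[8] - m[2] * m[6]) / det;
        inv[5] = -(m[0] * m[5] - m[2] * m[3]) / det;
        inv[6] =  (m[3] * m[7] - m[4] * m[6]) / det;
        inv[7] = -(m[0] * m[7] - m[1] * m[6]) / det;
        inv[8] =  (m[0] * m[4] - m[1] * m[3]) / det;
        return inv;
    }

    // Map (x, y) through homography h with the perspective divide.
    public static double[] apply(double[] h, double x, double y) {
        double w = h[6] * x + h[7] * y + h[8];
        return new double[] { (h[0] * x + h[1] * y + h[2]) / w,
                              (h[3] * x + h[4] * y + h[5]) / w };
    }
}
```

In OpenCV terms this corresponds to warping with homography.inv() and no flag, which should give the same result as passing the original homography together with WARP_INVERSE_MAP.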
