How to detect multiple objects with OpenCV in C++?

Problem Description

I got inspiration from this answer, which is a Python implementation, but I need C++. That answer works very well. My idea is: use detectAndCompute to get the keypoints, use kmeans to segment them into clusters, then for each cluster run matcher->knnMatch with that cluster's descriptors, and then do the rest as in the common single-object detection approach. The main problem is: how do I provide the descriptors for each cluster's matcher->knnMatch call? I thought we should set the descriptor values corresponding to the other clusters' keypoints to 0 (useless), am I right? I also ran into some problems while trying:

  1. How can I estimate the cluster count for kmeans?
  2. Why can I create a Mat array for the clusters like this: Mat descriptors_scene_clusters[3] = { Mat(descriptors_scene.rows, descriptors_scene.cols, CV_8U, Scalar(0)) };?

Thank you very much for your help!

#include <stdio.h>
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/xfeatures2d.hpp>

using namespace cv;
using namespace cv::xfeatures2d;

#define MIN_MATCH_COUNT 10

int main()
{
    Mat img_object = imread("./2.PNG", IMREAD_GRAYSCALE);
    Mat img_scene = imread("./1.PNG", IMREAD_GRAYSCALE);

    Ptr<ORB> detector = ORB::create();
    std::vector<KeyPoint> keypoints_object, keypoints_scene;
    Mat descriptors_object, descriptors_scene;
    detector->detectAndCompute(img_object, cv::Mat(), keypoints_object, descriptors_object);
    detector->detectAndCompute(img_scene, cv::Mat(), keypoints_scene, descriptors_scene);


    std::cout << descriptors_scene.row(0) << "\n";
    std::cout << descriptors_scene.cols << "\n";


    std::vector<Point2f> keypoints_scene_points_;
    for (size_t i = 0; i < keypoints_scene.size(); i++) {
        keypoints_scene_points_.push_back(keypoints_scene[i].pt);
    }
    Mat keypoints_scene_points(keypoints_scene_points_);

    Mat labels;
    int estimate_cluster_count = 3; // estimated ??????????
    kmeans(keypoints_scene_points, estimate_cluster_count, labels, TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0), 3, KMEANS_RANDOM_CENTERS);

    std::cout << "==================================111111\n";

    Mat descriptors_scene_clusters[3] = { Mat(descriptors_scene.rows, descriptors_scene.cols, CV_8U, Scalar(0)) };

    std::cout << "==================================111111------\n";

    for (int i=0; i<labels.rows; i++) {
        int clusterIndex = labels.at<int>(i);
        Point2f pt = keypoints_scene_points.at<Point2f>(i);
        descriptors_scene_clusters[clusterIndex].at<uchar>(pt) = descriptors_scene.at<uchar>(pt);  // ?????? error
    }

    std::cout << descriptors_scene_clusters[0] << "\n";
    std::cout << "==================================22222222\n";
    // return 0;

    Mat img_matches = img_scene;
    std::vector<DMatch> all_good_matches;
    for (int i=0; i<estimate_cluster_count; i++) {
        std::cout << "==================================33333\n";

        Ptr<flann::IndexParams> indexParams = makePtr<flann::KDTreeIndexParams>(5);
        Ptr<flann::SearchParams> searchParams = makePtr<flann::SearchParams>(50);
        Ptr<FlannBasedMatcher> matcher = makePtr<FlannBasedMatcher>(indexParams, searchParams);
        // BFMatcher matcher;
        std::vector<std::vector<DMatch>> matches;

        std::cout << "==================================444444\n";
        matcher->knnMatch(descriptors_object, descriptors_scene_clusters[i], matches, 2);
        std::cout << "==================================555555\n";
        std::vector<DMatch> good_matches;

        for (auto &match : matches) {
            if (match[0].distance < 0.7 * match[1].distance) {
                good_matches.push_back(match[0]);
            }
        }

        all_good_matches.insert(all_good_matches.end(), good_matches.begin(), good_matches.end());

        std::cout << "==================================66666\n";

        if (good_matches.size() > MIN_MATCH_COUNT) {

            //-- Localize the object
            std::vector<Point2f> obj;
            std::vector<Point2f> scene;

            for (auto &match : good_matches) {
                //-- Get the keypoints from the good matches
                obj.push_back(keypoints_object[match.queryIdx].pt);
                scene.push_back(keypoints_scene[match.trainIdx].pt);
            }

            Mat H = findHomography(obj, scene, RANSAC);

            //-- Get the corners from the image_1 ( the object to be "detected" )
            std::vector<Point2f> obj_corners(4);
            obj_corners[0] = Point2f(0, 0);
            obj_corners[1] = Point2f(img_object.cols, 0);
            obj_corners[2] = Point2f(img_object.cols, img_object.rows);
            obj_corners[3] = Point2f(0, img_object.rows);
            std::vector<Point2f> scene_corners(4);

            perspectiveTransform(obj_corners, scene_corners, H);

            //-- Draw lines between the corners (the mapped object in the scene - image_2 )
            line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0),
                 scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
            line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0),
                 scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
            line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0),
                 scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);
            line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0),
                 scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4);

            std::cout << scene_corners << "\n";
        }
    }

    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene,
                    all_good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                    std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);


    //-- Show detected matches
    imshow("Good Matches & Object detection", img_matches);

    waitKey(0);
    return 0;
}

Recommended Answer

I don't know a solution to your problem, but the following might help answer the questions you've asked.

In the comments it says that you might need an implementation of mean shift, which OpenCV already has; see the meanShift example and the tutorial in the OpenCV documentation.
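For illustration, here is a minimal sketch (mine, not from the original answer) of how the built-in cv::meanShift can be called, following OpenCV's standard tracking tutorial. The roiHist argument, a precomputed hue histogram of the region of interest, is an assumed input, and adapting the mean-shift idea to keypoint clustering would still be up to you:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

// Shift `window` toward the densest region of a histogram back-projection.
cv::Rect runMeanShift(const cv::Mat &frame, const cv::Mat &roiHist, cv::Rect window)
{
    cv::Mat hsv, backProj;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Back-project the hue histogram of the ROI onto the current frame.
    float range[] = {0, 180};
    const float *ranges[] = {range};
    int channels[] = {0};
    cv::calcBackProject(&hsv, 1, channels, roiHist, backProj, ranges);

    // meanShift iteratively moves the window toward the mode of backProj.
    cv::meanShift(backProj, window,
                  cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1.0));
    return window;
}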

  1. The clusterCount for kmeans is the number of clusters you want to create (see the kmeans documentation). I don't know a general way to estimate the number you should use, but I guess you could know it for your scene; a rough heuristic is sketched below.
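As a rough illustration (my own sketch, not part of the original answer), the compactness value returned by cv::kmeans can drive a simple elbow heuristic: increase k until an extra cluster stops reducing the compactness much. The 20% threshold below is arbitrary, and points must be a CV_32F matrix such as keypoints_scene_points from the question:

// Elbow heuristic: stop when one more cluster improves compactness by < 20%.
int estimateClusterCount(const cv::Mat &points, int maxK)
{
    std::vector<double> compactness;
    for (int k = 1; k <= maxK; k++) {
        cv::Mat labels, centers;
        compactness.push_back(cv::kmeans(
            points, k, labels,
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
            3, cv::KMEANS_PP_CENTERS, centers));
    }
    for (int k = 1; k < (int)compactness.size(); k++) {
        // compactness[k] belongs to k+1 clusters, compactness[k-1] to k clusters.
        if (compactness[k] > 0.8 * compactness[k - 1])
            return k;
    }
    return maxK;
}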

You initialize descriptors_scene_clusters with only one element:

Mat descriptors_scene_clusters[3] = { Mat(descriptors_scene.rows, descriptors_scene.cols, CV_8U, Scalar(0)) };

And when you iterate over it:

for (int i=0; i<labels.rows; i++) {
    int clusterIndex = labels.at<int>(i);
    Point2f pt = keypoints_scene_points.at<Point2f>(i);
    descriptors_scene_clusters[clusterIndex].at<uchar>(pt) = descriptors_scene.at<uchar>(pt);  // ?????? error
}

clusterIndex can be 1 or 2, and you then access an element of the array that was never initialized (the remaining elements are default-constructed, empty Mats), which results in the EXC_BAD_ACCESS error.
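A possible fix (my sketch, not something the original answer spells out) is to create one zeroed Mat per cluster and to copy descriptors row by row, because descriptor row i belongs to keypoints_scene[i]; indexing the descriptor matrix with a keypoint's image coordinates, as the question does, is not meaningful:

// One zeroed Mat per cluster. Note that std::vector<cv::Mat>(3, Mat::zeros(...))
// would not work here: copying a Mat header shares the underlying data.
std::vector<cv::Mat> descriptors_scene_clusters;
for (int i = 0; i < estimate_cluster_count; i++) {
    descriptors_scene_clusters.push_back(
        cv::Mat::zeros(descriptors_scene.rows, descriptors_scene.cols, CV_8U));
}

// Descriptor row i corresponds to keypoint i, so copy whole rows by index.
for (int i = 0; i < labels.rows; i++) {
    int clusterIndex = labels.at<int>(i);
    descriptors_scene.row(i).copyTo(descriptors_scene_clusters[clusterIndex].row(i));
}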

I hope this helps with further investigation!
