Detecting crosses in an image


Question


I am working on a program to detect the tips of a probing device and analyze the color change during probing. The input/output mechanisms are more or less in place. What I need now is the actual meat of the thing: detecting the tips.


In the images below, the tips are at the center of the crosses. I thought of applying BFS to the images after some thresholding, but was then stuck and didn't know how to proceed. I then turned to OpenCV after reading that it offers feature detection in images. However, I am overwhelmed by the vast number of concepts and techniques used here and, again, clueless about how to proceed.


Am I looking at it the right way? Can you give me some pointers?



Image extracted from short video



Binary version with threshold set at 95

Answer


Template Matching Approach

Here is a simple matchTemplate solution, similar to the approach that Guy Sirton mentions.


Template matching will work as long as you don't have much scaling or rotation occurring with your target.
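If your footage does turn out to have some scale variation, one simple workaround is to match the template over a small range of scales and keep the strongest normalized response. This is not part of the original answer, just a sketch; the 0.8-1.2 scale range and the idea of thresholding the returned score are assumptions you would tune for your own video:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

// Sketch: match the template at several scales and return the center of the best match.
Point matchAcrossScales(const Mat& img, const Mat& templ, double* bestScore = 0)
{
    double best = -1.0;
    Point bestCenter(-1, -1);

    for(int s = -2; s <= 2; s++)
    {
        double scale = 1.0 + 0.1 * s;   // assumed range: 0.8x to 1.2x
        Mat scaledTempl;
        resize(templ, scaledTempl, Size(0, 0), scale, scale);
        if(scaledTempl.cols > img.cols || scaledTempl.rows > img.rows)
            continue;

        Mat result;
        matchTemplate(img, scaledTempl, result, CV_TM_CCOEFF_NORMED);

        double maxVal;
        Point maxLoc;
        minMaxLoc(result, 0, &maxVal, 0, &maxLoc);

        if(maxVal > best)
        {
            best = maxVal;
            // Shift from the top-left match corner to the center of the matched patch.
            bestCenter = maxLoc + Point(scaledTempl.cols / 2, scaledTempl.rows / 2);
        }
    }

    if(bestScore) *bestScore = best;   // caller can reject weak matches (e.g. score < 0.7)
    return bestCenter;
}

Rotation is harder to handle this way; for that you would need rotated templates or a feature-based method.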


Here is the template that I used:


Here is the code I used to detect several of the unobstructed crosses:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    string inputName = "crosses.jpg";
    string outputName = "crosses_detect.png";
    Mat img   = imread( inputName, 1);
    Mat templ = imread( "crosses-template.jpg", 1);

    // The match map has one entry per valid template position.
    int resultCols = img.cols - templ.cols + 1;
    int resultRows = img.rows - templ.rows + 1;
    Mat result( resultRows, resultCols, CV_32FC1 );   // Mat takes (rows, cols, type)

    // CV_TM_CCOEFF gives strong peaks where the template lines up with a cross.
    matchTemplate(img, templ, result, CV_TM_CCOEFF);
    // Rescale the match map to 8-bit [0, 255] so it can be thresholded and displayed.
    normalize(result, result, 0, 255.0, NORM_MINMAX, CV_8UC1, Mat());

    // Keep only the strongest responses.
    Mat resultMask;
    threshold(result, resultMask, 180.0, 255.0, THRESH_BINARY);

    // findContours modifies its input, so work on a copy. The Point offset shifts each
    // contour by half the template size, moving detections from the top-left corner of
    // the match window to the center of the cross.
    Mat temp = resultMask.clone();
    vector< vector<Point> > contours;
    findContours(temp, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(templ.cols / 2, templ.rows / 2));

    // The centroid of each thresholded peak marks a detected cross center.
    vector< vector<Point> >::iterator i;
    for(i = contours.begin(); i != contours.end(); i++)
    {
        Moments m = moments(*i, false);
        Point2f centroid(m.m10 / m.m00, m.m01 / m.m00);
        circle(img, centroid, 3, Scalar(0, 255, 0), 3);
    }

    imshow("img", img);
    imshow("results", result);
    imshow("resultMask", resultMask);

    imwrite(outputName, img);

    waitKey(0);

    return 0;
}


This results in the following detection image:


This code basically sets a threshold to separate the cross peaks from the rest of the image, and then detects all of those contours. Finally, it computes the centroid of each contour to detect the center of the cross.
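As a side note, if you would rather skip the threshold-and-contour step, another way to pull peaks out of the match map is to repeatedly take the global maximum and suppress a template-sized window around it. This is my own sketch rather than part of the answer; it assumes the map was computed with CV_TM_CCOEFF_NORMED (so scores fall roughly in [-1, 1]), and the 0.75 acceptance score and detection cap are guesses to tune:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Sketch: greedy peak picking on a CV_TM_CCOEFF_NORMED match map.
// Returns the centers of all matches scoring above minScore.
vector<Point> pickPeaks(const Mat& scores, Size templSize, double minScore = 0.75)
{
    Mat result = scores.clone();   // work on a copy; peaks are erased as they are found
    vector<Point> centers;

    for(int k = 0; k < 50; k++)    // hard cap on the number of detections
    {
        double maxVal;
        Point maxLoc;
        minMaxLoc(result, 0, &maxVal, 0, &maxLoc);
        if(maxVal < minScore)
            break;

        // Report the center of the matched patch, like the contour version above.
        centers.push_back(maxLoc + Point(templSize.width / 2, templSize.height / 2));

        // Zero out a template-sized window so the next iteration finds a different peak.
        Rect window(maxLoc - Point(templSize.width / 2, templSize.height / 2), templSize);
        window &= Rect(0, 0, result.cols, result.rows);
        result(window).setTo(Scalar(0));
    }

    return centers;
}

Zeroing out a full template-sized window is a crude non-maximum suppression, but it is usually enough when the crosses are well separated.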


Triangle Detection Approach

Here is an alternative approach using triangle detection. It doesn't seem as accurate as the matchTemplate approach, but it might be an alternative you could play with.


Using findContours we detect all the triangles in the image, which results in the following:


Then I noticed that the triangle vertices cluster near the cross centers, so these clusters are averaged to produce the cross center points shown below:


Finally, here is the code that I used to do this:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <cmath>
#include <list>

using namespace cv;
using namespace std;

vector<Point> getAllTriangleVertices(Mat& img, const vector< vector<Point> >& contours);
double euclideanDist(Point a, Point b);

vector< vector<Point> > groupPointsWithinRadius(vector<Point>& points, double radius);
void printPointVector(const vector<Point>& points);
Point computeClusterAverage(const vector<Point>& cluster);

int main(int argc, char* argv[])
{
    Mat img   = imread("crosses.jpg", 1);
    double resizeFactor = 0.5;
    resize(img, img, Size(0, 0), resizeFactor, resizeFactor);

    Mat momentImg = img.clone();

    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);

    // Adaptive thresholding copes with uneven lighting; the block size (19) and offset (15) are tuning parameters.
    adaptiveThreshold(gray, gray, 255.0, ADAPTIVE_THRESH_MEAN_C, THRESH_BINARY, 19, 15);
    imshow("threshold", gray);
    waitKey();

    vector< vector<Point> > contours;
    findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

    vector<Point> allTriangleVertices = getAllTriangleVertices(img, contours);

    imshow("img", img);
    imwrite("shape_detect.jpg", img);
    waitKey();

    printPointVector(allTriangleVertices);
    vector< vector<Point> > clusters = groupPointsWithinRadius(allTriangleVertices, 10.0*resizeFactor);
    cout << "Number of clusters: " << clusters.size() << endl;

    vector< vector<Point> >::iterator cluster;
    for(cluster = clusters.begin(); cluster != clusters.end(); ++cluster)
    {
        printPointVector(*cluster);

        Point clusterAvg = computeClusterAverage(*cluster);
        circle(momentImg, clusterAvg, 3, Scalar(0, 255, 0), CV_FILLED);
    }

    imshow("momentImg", momentImg);
    imwrite("centroids.jpg", momentImg);
    waitKey();

    return 0;
}

// Approximates each contour with a polygon; contours that reduce to three vertices are treated
// as triangles, drawn onto the image, and their vertices collected.
vector<Point> getAllTriangleVertices(Mat& img, const vector< vector<Point> >& contours)
{
    vector<Point> approxTriangle;
    vector<Point> allTriangleVertices;
    for(size_t i = 0; i < contours.size(); i++)
    {
        approxPolyDP(contours[i], approxTriangle, arcLength(Mat(contours[i]), true)*0.05, true);
        if(approxTriangle.size() == 3)
        {
            copy(approxTriangle.begin(), approxTriangle.end(), back_inserter(allTriangleVertices));
            drawContours(img, contours, i, Scalar(0, 255, 0), CV_FILLED);

            vector<Point>::iterator vertex;
            for(vertex = approxTriangle.begin(); vertex != approxTriangle.end(); ++vertex)
            {
                circle(img, *vertex, 3, Scalar(0, 0, 255), 1);
            }
        }
    }

    return allTriangleVertices;
}

double euclideanDist(Point a, Point b)
{
    // Plain Euclidean distance between two points; std::sqrt is used because cv::sqrt operates on arrays.
    Point c = a - b;
    return std::sqrt((double)(c.x*c.x + c.y*c.y));
}

// Greedy clustering: each remaining point seeds a cluster containing every other point within
// `radius`; single-point clusters are discarded. Note that this consumes (erases) the input vector.
vector< vector<Point> > groupPointsWithinRadius(vector<Point>& points, double radius)
{
    vector< vector<Point> > clusters;
    vector<Point>::iterator i;
    for(i = points.begin(); i != points.end();)
    {
        vector<Point> subCluster;
        subCluster.push_back(*i);

        vector<Point>::iterator j;
        for(j = points.begin(); j != points.end(); )
        {
            if(j != i &&  euclideanDist(*i, *j) < radius)
            {
                subCluster.push_back(*j);
                j = points.erase(j);
            }
            else
            {
                ++j;
            }
        }

        if(subCluster.size() > 1)
        {
            clusters.push_back(subCluster);
        }

        i = points.erase(i);
    }

    return clusters;
}

Point computeClusterAverage(const vector<Point>& cluster)
{
    Point2d sum;
    vector<Point>::const_iterator point;
    for(point = cluster.begin(); point != cluster.end(); ++point)
    {
        sum.x += point->x;
        sum.y += point->y;
    }

    sum.x /= (double)cluster.size();
    sum.y /= (double)cluster.size();

    return Point(cvRound(sum.x), cvRound(sum.y));
}

void printPointVector(const vector<Point>& points)
{
    vector<Point>::const_iterator point;
    for(point = points.begin(); point != points.end(); ++point)
    {
        cout << "(" << point->x << ", " << point->y << ")";
        if(point + 1 != points.end())
        {
            cout << ", ";
        }
    }
    cout << endl;
}


I fixed a few bugs in my previous implementation and cleaned the code up a bit. I also tested it with various resize factors, and it seemed to perform quite well. However, after I reached a quarter scale it started to have trouble properly detecting triangles, so this might not work well for extremely small crosses. Also, there appears to be a bug in the moments function, as it was returning (-NaN, -NaN) locations for some valid clusters. So, I believe the accuracy is improved a good bit. It may need a few more tweaks, but overall I think it should be a good starting point for you.
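If you hit those (-NaN, -NaN) centroids with the contour/moments pattern from the first listing, one guard is to skip contours with zero area. This is only a sketch based on my assumption that a degenerate (zero-area) contour, i.e. m00 == 0, is the culprit; the answer itself doesn't say what causes it:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;
using namespace std;

// Sketch: compute a contour centroid, reporting failure instead of a NaN point
// when the contour has zero area (m00 == 0).
bool contourCentroid(const vector<Point>& contour, Point2f& centroid)
{
    Moments m = moments(contour, false);
    if(m.m00 == 0.0)
        return false;                         // degenerate contour, skip it
    centroid = Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00));
    return true;
}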


I think my triangle detection would work better if the black border around the triangles were a bit thicker/sharper, and if there were fewer shadows on the triangles themselves.
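Along those lines, one cheap thing that might be worth trying (a sketch only; the default 3x3 kernel and single iteration are guesses, and I haven't verified it on these frames) is to erode the grayscale image before the adaptive threshold, since grayscale erosion expands dark regions and therefore thickens the black borders:

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

// Sketch: thicken the dark triangle borders before thresholding.
// Erosion takes the local minimum, so dark regions (the borders) expand.
Mat thickenDarkBorders(const Mat& gray)
{
    Mat thickened;
    erode(gray, thickened, Mat(), Point(-1, -1), 1);   // default 3x3 kernel, 1 iteration
    return thickened;
}

You would call this on gray right before the adaptiveThreshold line in the code above.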

Hope that helps!
