OpenCV image comparison in Android


Problem description

I have devised some code for image comparison. The matching part is still a bit flawed and I would love some assistance. The project can be found at - GitHub.

I have these two images:

Img1 (image)

and Img2 (image)

When I use the following command in OpenCV:

Mat img1 = Highgui.imread("mnt/sdcard/IMG-20121228.jpg");
Mat img2 = Highgui.imread("mnt/sdcard/IMG-20121228-1.jpg");

try {
    // Core.norm(a, b) returns the L2 norm of the difference between the two Mats;
    // it throws if the Mats differ in size or type
    double l2_norm = Core.norm(img1, img2);
    tv.setText(l2_norm + "");
} catch (Exception e) {
    // image is not a duplicate
}

I obtain a double value for l2_norm. This double value varies for duplicate image pairs. But if the images are different, an exception is thrown. Is this how I should detect duplicate images, or is there a better way? I have googled extensively but haven't found a really convincing answer. I would like code, and an explanation of how I can compare two images and obtain a value based on them.
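A possible workaround, not from the original post: the exception from Core.norm typically comes from the two Mats differing in size or type, so one option is to resize one image to the other's dimensions before taking the norm (using org.opencv.imgproc.Imgproc and org.opencv.core.Size). A minimal sketch under that assumption; the pixel-count scaling and the 0.1 threshold are purely illustrative:

Mat img2Resized = new Mat();
// Bring img2 to img1's dimensions so Core.norm does not throw on a size mismatch
Imgproc.resize(img2, img2Resized, new Size(img1.cols(), img1.rows()));

// L2 norm of the pixel-wise difference; 0 means identical content
double l2_norm = Core.norm(img1, img2Resized);

// Scale by the pixel count so the score is comparable across resolutions
double score = l2_norm / (img1.rows() * img1.cols());
boolean probablyDuplicate = score < 0.1; // illustrative threshold, tune for your data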

EDIT

Scalar blah = Core.sumElems(img2);
Scalar blah1 = Core.sumElems(img1);

if (blah.equals(blah1)) {
    tv.setText("same image");
}

I've tried this, but the if condition is never satisfied. I'm assuming the sums differ slightly, but there is no compare function for Scalar. What do I do?
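One workaround, not in the original post: Scalar exposes its four channel values through the public val array, so the sums can be compared element-wise with a tolerance instead of relying on equals(). A minimal sketch, assuming blah and blah1 from the snippet above; the tolerance value is illustrative:

// Compare the per-channel sums with a tolerance instead of exact equality
double tolerance = 1000.0; // illustrative value, tune for your images
boolean same = true;
for (int i = 0; i < 4; i++) {
    if (Math.abs(blah.val[i] - blah1.val[i]) > tolerance) {
        same = false;
        break;
    }
}
if (same) {
    tv.setText("same image");
}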

EDIT

try {
    Scalar blah = Core.sumElems(img2);
    Scalar blah1 = Core.sumElems(img1);
    String b = blah.toString();
    String b1 = blah1.toString();
    System.out.println(b + " " + b1);
    // String.compareTo only gives a lexicographic ordering, not a similarity measure
    double comp = b.compareTo(b1);
    tv.setText("" + comp);
} catch (Exception e) {
    // ignore
}

This method is again flawed. Although it can be used to compare images with decent accuracy, it fails when the images are of different sizes.
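A possible way to reduce the size dependence, again not from the original post, is to compare per-channel means (Core.mean) rather than raw sums, since the mean is already normalised by the pixel count. A minimal sketch; the tolerance is illustrative:

// Per-channel means do not grow with image size, unlike raw channel sums
Scalar mean1 = Core.mean(img1);
Scalar mean2 = Core.mean(img2);

double tolerance = 5.0; // illustrative: mean intensities lie in [0, 255]
boolean similar = true;
for (int i = 0; i < 3; i++) { // compare the B, G and R channels
    if (Math.abs(mean1.val[i] - mean2.val[i]) > tolerance) {
        similar = false;
        break;
    }
}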

When images are of different sizes and I print the scalar values I get this:

[9768383.0,1.0052889E7,1.0381814E7,0.0] [1.5897384E7,1.6322252E7,1.690251E7,0.0]

The variation between the second and third numbers, although not huge, is quite large compared to when images of the same size are compared. The first number, however, changes the most.

What would be the best and fastest way to compare the contents of two images?

I'm using the code I found here: http://stackoverflow.com/questions/10691521/surf-description-faster-with-fast-detection

What I'm not able to figure out is how to initialize the MatOfKeyPoint variables keypoints and logoKeypoints. Here's my code snippet:

FeatureDetector detector = FeatureDetector.create(FeatureDetector.SURF);
//FeatureDetector detector = FeatureDetector.create(FeatureDetector.FAST);
//Imgproc.cvtColor(img1, img1, Imgproc.COLOR_RGBA2RGB);
//Imgproc.cvtColor(img2, img2, Imgproc.COLOR_RGBA2RGB);

DescriptorExtractor SurfExtractor = DescriptorExtractor
        .create(DescriptorExtractor.SURF);

//extract keypoints
MatOfKeyPoint keypoints, logoKeypoints;
long time = System.currentTimeMillis();
detector.detect(img1, keypoints);
Log.d("LOG!", "number of query Keypoints= " + keypoints.size());
detector.detect(img2, logoKeypoints);
Log.d("LOG!", "number of logo Keypoints= " + logoKeypoints.size());
Log.d("LOG!", "keypoint calculation time elapsed" + (System.currentTimeMillis() - time));

//Descript keypoints
long time2 = System.currentTimeMillis();
Mat descriptors = new Mat();
Mat logoDescriptors = new Mat();
Log.d("LOG!", "logo type" + img2.type() + "  intype" + img1.type());
SurfExtractor.compute(img1, keypoints, descriptors);
SurfExtractor.compute(img2, logoKeypoints, logoDescriptors);
Log.d("LOG!", "Description time elapsed" + (System.currentTimeMillis() - time2));

I obviously can't initialize the variables keypoints and logoKeypoints to null, because I'll receive a null pointer exception then. How do I initialize them?

Answer

You should understand that this is not a simple question, and there are different approaches you could follow. I will only point out two solutions, without source code.

  1. Histogram comparison: You could convert both images to grey-scale and build a histogram over the range [0, ..., 255], counting every pixel value. Then compare the two histograms. If the distributions of pixel intensities match, or their similarity is above some threshold (perhaps 90% of all pixels), you could consider the images duplicates (see the sketch after this list). BUT: this is one of the simplest solutions and it is not robust if either picture has a near-uniform distribution.
  2. Interest-point detectors/descriptors: Take a look at SIFT/SURF image detectors and descriptors. A detector tries to find distinctive intensity keypoints in an image, and a descriptor is then computed at each location I(x,y). A normal matcher with a brute-force approach and Euclidean distance can match two images using their descriptors. If an image is a duplicate, the rate of good matches should be very high. This solution is straightforward to implement and there are plenty of tutorials on the topic.
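A minimal sketch of the histogram approach using the same OpenCV 2.4-era Java API as the rest of this post (CV_COMP_CORREL was renamed HISTCMP_CORREL in later versions; java.util.Arrays.asList builds the one-element image list); the correlation method and the 0.9 threshold are illustrative assumptions, not part of the answer:

// Convert both images to grey-scale
Mat grey1 = new Mat(), grey2 = new Mat();
Imgproc.cvtColor(img1, grey1, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(img2, grey2, Imgproc.COLOR_BGR2GRAY);

// 256-bin histograms over the range [0, 255]
Mat hist1 = new Mat(), hist2 = new Mat();
MatOfInt histSize = new MatOfInt(256);
MatOfFloat ranges = new MatOfFloat(0f, 256f);
MatOfInt channels = new MatOfInt(0);
Imgproc.calcHist(Arrays.asList(grey1), channels, new Mat(), hist1, histSize, ranges);
Imgproc.calcHist(Arrays.asList(grey2), channels, new Mat(), hist2, histSize, ranges);

// Normalise so that images of different sizes remain comparable
Core.normalize(hist1, hist1, 0, 1, Core.NORM_MINMAX);
Core.normalize(hist2, hist2, 0, 1, Core.NORM_MINMAX);

// Correlation of 1.0 means identical intensity distributions
double similarity = Imgproc.compareHist(hist1, hist2, Imgproc.CV_COMP_CORREL);
boolean probablyDuplicate = similarity > 0.9; // illustrative threshold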

I hope this helps. Please ask if you have questions.

[UPDATE-1] A C++-tutorial: http://morf.lv/modules.php?name=tutorials&lasit=2#.UR-ewKU3vCk

Some JavaCV-tutorials: http://code.google.com/p/javacv/w/list

[UPDATE-2] Here is an example with the SIFT detector and SIFT descriptor using default parameters. The RANSAC threshold for the homography is 65, the reprojection error (epsilon) is 10, and cross-validation is enabled. You could try counting the matches: if the inlier-to-outlier ratio is high enough, you can treat the pair as duplicates. For example, these images produce 180 keypoints in IMG1 and 198 in IMG2. 163 descriptors are matched, of which only 3 are outliers. That gives a really good ratio, which strongly suggests these images are duplicates.
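Just to make the arithmetic explicit (the counts are the ones quoted above; the 0.8 threshold is an illustrative assumption):

// Inlier ratio from the example above: 163 matches, 3 of them outliers
int matched = 163;
int outliers = 3;
double inlierRatio = (matched - outliers) / (double) matched; // roughly 0.98

boolean probablyDuplicate = inlierRatio > 0.8; // illustrative threshold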

[UPDATE-3] I don't understand why you can't initialize the MatOfKeyPoint. I've read the API and there is a public constructor. AND: you can use the Mat of the image you want to analyse. This is very nice. =)

MatOfKeyPoint reference = new MatOfKeyPoint(matOfReferenceImage);

For matching, use a BRUTEFORCE_SL2 descriptor matcher, because you will need the Euclidean distance for SURF or SIFT.
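A minimal sketch of that matching step with the 2.4-era Java bindings, assuming descriptors and logoDescriptors were computed as in the question's snippet (with the MatOfKeyPoint containers created via the no-argument constructor); the distance cut-off and the match-ratio threshold are illustrative:

// Brute-force matcher with squared Euclidean (SL2) distance, as recommended above
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_SL2);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors, logoDescriptors, matches);

// Count "good" matches by thresholding the descriptor distance
int good = 0;
for (DMatch m : matches.toList()) {
    if (m.distance < 0.25f) { // illustrative cut-off
        good++;
    }
}

// Treat the pair as duplicates when enough query descriptors found a close match
double matchRatio = good / (double) Math.max(1, descriptors.rows());
boolean probablyDuplicate = matchRatio > 0.5; // illustrative threshold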
