What is the significance of the eigenvalues of an autocorrelation matrix in image processing?


Question

I am working on finding corner points using the Harris corner detection algorithm. A reference paper I am reading suggests examining the eigenvalues of the autocorrelation matrix, but I don't understand how corner points are related to those eigenvalues. What is the relationship between them?

Answer

The eigenvalues of the autocorrelation matrix tell you what kind of feature you are looking at. The autocorrelation you are computing is based on the image patch you are examining within the image.

Actually, what you're computing is the structure tensor. The Harris corner detector algorithm commonly refers to this matrix as the autocorrelation matrix, but it is really just a sum of squared differences. The structure tensor is a 2 x 2 matrix of sums of squared differences between two image patches within the same image.

This is how you'd compute the structure tensor in a Harris corner detector sense:

Given an image patch in your image, Ix and Iy represent the partial derivatives of the patch in the horizontal and vertical directions. You can use any standard convolution operation to obtain these partial derivative images, such as a Prewitt or Sobel operator.
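In its standard form, the structure tensor A sums the products of these derivatives over the patch, optionally weighted by a window function w(x, y) (often a Gaussian):

$$
A = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
$$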

After you compute this matrix, there are three situations to consider when looking at the autocorrelation matrix in the Harris corner detector. Note that this is a 2 x 2 matrix, so it has two eigenvalues. The three cases are listed below, followed by a small code sketch.

  1. If both eigenvalues are close to 0, then there is no feature point of interest in the image patch you're looking at.
  2. If one of the eigenvalues is large and the other is close to 0, this tells you that you are lying on an edge.
  3. If both of the eigenvalues are large, that means the feature point we are looking at is a corner.
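To make these three cases concrete, here is a minimal Python sketch (NumPy and SciPy are assumptions of this sketch, not part of the original answer) that builds the structure tensor for a single patch and classifies it from its two eigenvalues; the eigenvalue threshold is purely illustrative and depends on the intensity range of your images.

```python
import numpy as np
from scipy import ndimage

def classify_patch(patch, eig_threshold=1e-2):
    """Label a patch as 'flat', 'edge', or 'corner' from the eigenvalues
    of its structure tensor. `eig_threshold` is illustrative only and
    must be tuned to the intensity range of your images."""
    patch = patch.astype(float)

    # Partial derivatives of the patch (Sobel here; Prewitt works too).
    Ix = ndimage.sobel(patch, axis=1)  # horizontal derivative
    Iy = ndimage.sobel(patch, axis=0)  # vertical derivative

    # 2 x 2 structure tensor: sums of derivative products over the patch.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

    # A is symmetric, so both eigenvalues are real; eigvalsh sorts ascending.
    lam_small, lam_large = np.linalg.eigvalsh(A)

    if lam_large < eig_threshold:   # both eigenvalues small -> nothing of interest
        return "flat"
    if lam_small < eig_threshold:   # one large, one small -> edge
        return "edge"
    return "corner"                 # both large -> corner
```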

However, it has been noted that calculating eigenvalues is a computationally expensive operation, even for a 2 x 2 matrix. Therefore, instead of computing the eigenvalues, Harris came up with an interest point measure to determine whether or not something is interesting. Basically, when you compute this measure, if it surpasses a set threshold, then you have a corner point at the centre of this patch; if it doesn't, there is no corner point.
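In its standard form, and consistent with the terms explained next, the Harris measure combines the determinant and trace of the structure tensor A:

$$
M_c = \det(A) - \kappa \, \operatorname{trace}(A)^2
$$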

Mc is the "score" for a particular image patch that tells you whether it contains a corner point. det is the determinant of the matrix, which is just ad - bc, given that your 2 x 2 matrix is in the form [a b; c d], and trace is the sum of the diagonal, or a + d, for a matrix of the same form. kappa is a tunable parameter that usually ranges between 0.04 and 0.15. The threshold you set to decide whether you have an interesting point or an edge depends highly on your image, so you'll have to play around with it.
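As a rough illustration of how this measure is usually evaluated densely over a whole image (not something spelled out in the original answer), a Python sketch might look like the following; the Gaussian window sigma and the kappa default are assumptions within the usual ranges:

```python
import numpy as np
from scipy import ndimage

def harris_response(image, kappa=0.04, sigma=1.5):
    """Evaluate Mc = det(A) - kappa * trace(A)^2 at every pixel.
    `sigma` sets the Gaussian window that accumulates derivative
    products around each pixel; both defaults are illustrative."""
    image = image.astype(float)
    Ix = ndimage.sobel(image, axis=1)  # horizontal derivative
    Iy = ndimage.sobel(image, axis=0)  # vertical derivative

    # Windowed sums of derivative products: per-pixel entries of A = [a b; b d].
    a = ndimage.gaussian_filter(Ix * Ix, sigma)
    b = ndimage.gaussian_filter(Ix * Iy, sigma)
    d = ndimage.gaussian_filter(Iy * Iy, sigma)

    det_A = a * d - b * b
    trace_A = a + d
    return det_A - kappa * trace_A ** 2
```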

If you want to avoid using kappa, there is another way to approximate the eigenvalue computation, using Noble's corner measure.
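One common way of writing this measure (essentially the harmonic mean of the two eigenvalues, regularized by a small constant) is:

$$
M_c' = \frac{2 \det(A)}{\operatorname{trace}(A) + \epsilon}
$$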

epsilon is some small constant, like 0.0001. Again, whether or not you have an interesting point depends on your image. After you find all of the corner points in your image, you usually perform non-maximum suppression to suppress false positives. This means that you examine a neighbourhood around each candidate corner point; if the candidate does not have the highest score within that neighbourhood, it is dropped. This is done because, with a sliding window approach, it is highly probable that you will detect multiple corner points in a small vicinity of the valid one when only one or a few would suffice.
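A minimal sketch of that suppression step, assuming the dense response image from the earlier harris_response sketch, could look like this; the window size and threshold are illustrative and image-dependent:

```python
import numpy as np
from scipy import ndimage

def non_max_suppression(response, threshold, window=7):
    """Return (row, col) coordinates of pixels that exceed `threshold`
    and are the maximum of `response` in a window x window neighbourhood."""
    local_max = ndimage.maximum_filter(response, size=window)
    is_corner = (response == local_max) & (response > threshold)
    return np.argwhere(is_corner)
```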

Basically, the point of looking at the eigenvalues is to check to see whether or not you are looking at an edge, a corner point, or nothing at all.
